Document | Title | Date |
20080209422 | Deadlock avoidance mechanism in multi-threaded applications - A computer-implemented method is provided for implementing a deadlock avoidance mechanism to prevent a plurality of threads from deadlocking in a computer system in which a first thread of the plurality of threads requests a first resource. The computer-implemented method includes employing the deadlock avoidance mechanism to intercept the request. The computer-implemented method also includes examining a status of the first resource. The computer-implemented method further includes, if the first resource is owned, identifying an owner of the first resource, analyzing the owner of the first resource to determine if the owner of the first resource is requesting a second resource, and analyzing the second resource to determine if the second resource is owned by the first thread. The computer-implemented method also includes, if the first thread owns the second resource, preventing deadlocking by handling the potential deadlock situation. | 08-28-2008 |
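The cycle check this abstract describes can be pictured as a two-hop walk over an ownership map: if the owner of the resource I want is itself waiting on a resource I own, blocking would close a cycle. Below is a minimal Python sketch of that check, not the patent's implementation; the names (`DeadlockAvoider`, `acquire`) and the refuse-by-exception handling policy are hypothetical.

```python
import threading

class DeadlockAvoider:
    """Intercepts resource requests and refuses ones that would deadlock."""

    def __init__(self):
        self._guard = threading.Lock()
        self._owner = {}   # resource -> id of owning thread
        self._wants = {}   # thread id -> resource it is blocked on

    def acquire(self, tid, resource):
        """Return True if `tid` now owns `resource`, False if it must wait."""
        with self._guard:
            holder = self._owner.get(resource)
            if holder is None:
                self._owner[resource] = tid       # resource was free
                return True
            # Resource is owned: does its owner wait on something we hold?
            wanted = self._wants.get(holder)
            if wanted is not None and self._owner.get(wanted) == tid:
                # tid waits for holder, holder waits for tid: a cycle.
                raise RuntimeError("potential deadlock: request refused")
            self._wants[tid] = resource           # safe to block and wait
            return False
```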
20080209423 | JOB MANAGEMENT DEVICE, CLUSTER SYSTEM, AND COMPUTER-READABLE MEDIUM STORING JOB MANAGEMENT PROGRAM - In a job management device: a request reception unit stores job-input information in a storage device on receipt of a job-execution request; and an execution instruction unit sends to one or more job-assigned calculation nodes a job-execution instruction together with execution-resource information, and stores job-assignment information in the storage device in association with a job identifier. When the contents of the job database are lost by a restart of the job management device, a reconstruction unit collects the job-input information and the job-assignment information from the storage device, collects the execution-resource information from the one or more job-assigned calculation nodes, and reconstructs the job information in the job database. | 08-28-2008 |
20080209424 | IRP HANDLING - An apparatus for handling IRPs, the apparatus comprising an overload determining unit ( | 08-28-2008 |
20080209425 | Device Comprising a Communications Stick With A Scheduler - A scheduler is used to schedule execution of tasks by ‘engines’ that perform high resource functions as requested by ‘executive’ control code, the scheduler using its knowledge of the likelihood of engine request state transitions. The likelihood of engine request state transitions describes the likely sequence of engines which executives will impose: at the start of a time slice, the scheduler can at run-time in effect look forward in time to discern a number of possible schedules (i.e. sequences of future engines), assess the merits of each possible schedule using pre-defined parameters (e.g. memory and power utilisation), then apply the schedule which is most appropriate given those parameters. The process repeats at the start of the next time slice. The scheduler therefore operates as a predictive scheduler. The present invention is particularly effective in addressing the ‘multi-mode problem’: dynamically balancing the requirements of multiple communications stacks operating concurrently. | 08-28-2008 |
20080216077 | SOFTWARE SEQUENCER FOR INTEGRATED SUBSTRATE PROCESSING SYSTEM - Embodiments of the invention generally provide apparatus and method for scheduling a process sequence to achieve maximum throughput and process consistency in a cluster tool having a set of constraints. One embodiment of the present invention provides a method for scheduling a process sequence comprising determining an initial individual schedule by assigning resources to perform the process sequence, calculating a fundamental period, detecting resource conflicts in a schedule generated from the individual schedule and the fundamental period, and adjusting the individual schedule to remove the resource conflicts. | 09-04-2008 |
20080216078 | Request scheduling method, request scheduling apparatus, and request scheduling program in hierarchical storage management system - A request scheduling method schedules requests to secondary recording media while minimizing the frequency of recording-medium mounting/removing events in a secondary storage unit of an HSM (hierarchical storage management) system, by searching, in units of the drive unit, for one or more requests being processed or executable on a drive unit. Based on the search, one or more pending read requests to read data from a recording medium mounted on the drive unit are detected, and the drive unit is set as an exclusive drive for the read request(s). A drive unit whose mounted recording medium has been mounted for no longer than a predetermined time period is scheduled by priority to execute an executable request. | 09-04-2008 |
20080216079 | MANAGING A RESOURCE LOCK - A method of operating a resource lock for controlling access to a resource by a plurality of resource requesters, the resource lock operating in a contention efficient (heavyweight) operating mode, and the method being responsive to a request from a resource requester to acquire the resource lock, the method comprising the steps of: incrementing a count of a total number of acquisitions of the resource lock in the contention efficient operating mode; in response to a determination that access to the resource is not contended by more than one resource requester, performing the steps of: a) incrementing a count of a number of uncontended acquisitions of the resource lock in the contention efficient operating mode; b) calculating a contention rate as the number of uncontended acquisitions in the contention efficient operating mode divided by the total number of acquisitions in the contention efficient operating mode; and c) in response to a determination that the contention rate meets a threshold contention rate, causing the resource lock to change to a non-contention efficient (lightweight) operating mode. | 09-04-2008 |
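As a rough illustration of the counting scheme in this abstract, the sketch below tracks total and uncontended acquisitions while in the heavyweight mode and flips to the lightweight mode once the uncontended fraction meets a threshold; the class name, the 0.95 default, and the mode labels are assumptions.

```python
class AdaptiveLock:
    """Tracks contention statistics and switches lock operating modes."""

    def __init__(self, threshold=0.95):
        self.mode = "heavyweight"     # contention-efficient mode
        self.total = 0                # acquisitions in heavyweight mode
        self.uncontended = 0          # uncontended acquisitions
        self.threshold = threshold

    def on_acquire(self, contended: bool):
        if self.mode != "heavyweight":
            return
        self.total += 1               # count every heavyweight acquisition
        if not contended:
            self.uncontended += 1
            rate = self.uncontended / self.total
            if rate >= self.threshold:
                # Contention is rare: a lightweight (non-contention-
                # efficient) lock is cheaper from here on.
                self.mode = "lightweight"
```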
20080216080 | Method and system to alleviate denial-of-service conditions on a server - A method is presented for processing data in a multithreaded application to alleviate impaired or substandard performance conditions. Work items that are pending processing by the multithreaded application are placed into a data structure. The work items are processed by a plurality of threads within the multithreaded application in accordance with a first algorithm, e.g., first-in first-out (FIFO). A thread within the multithreaded application is configured apart from the plurality of threads such that it processes work items in accordance with a second algorithm that differs from the first algorithm, thereby avoiding the impairing condition. For example, the thread may process a pending work item only if it has a particular characteristic. The thread restricts its own processing of work items by intermittently evaluating workflow conditions for the plurality of threads; if the workflow conditions improve or are unimpaired, then the thread does not process any work items. | 09-04-2008 |
20080229310 | Processor instruction set - The invention provides a processor comprising: an execution unit, and a thread scheduler configured to schedule a plurality of threads for execution by the execution unit in dependence on a respective runnable status for each thread. The execution unit is configured to execute thread scheduling instructions which manage the runnable statuses. The thread scheduling instructions include at least: one or more source event enable instructions, each of which sets an event source to a mode in which it generates an event dependent on activity occurring at that source; and a wait instruction, which sets one of said runnable statuses to suspended pending one of the events upon which continued execution of the respective thread depends. The continued execution comprises retrieval of a continuation point vector for the respective thread. | 09-18-2008 |
20080229311 | Interface processor - The invention provides a processor comprising a first port operable to generate a first indication dependent on a first activity at the first port, and a second port operable to generate a second indication dependent on a second activity at the second port. The processor also comprises an execution unit arranged to execute multiple threads; and a thread scheduler connected to receive the indications and arranged to schedule the multiple threads for execution by the execution unit based on those indications. The scheduling includes suspending the execution of a thread until receipt of the respective ready signal. The first activity and the second activity are each associated with respective corresponding threads. | 09-18-2008 |
20080229312 | Processor register architecture - The invention provides a processor comprising an execution unit for executing multiple threads, each thread comprising a sequence of instructions and each thread being designated to handle activity from at least one specified source. The processor also comprises a thread scheduler for scheduling a plurality of threads to be executed by the execution unit, said scheduling being based on the respective activity handled by the threads; and a plurality of sets of registers connected to the execution unit. Each set of registers is arranged to store information representing a respective one of the plurality of threads, at least a part of the information being accessible by the execution unit for use in executing the respective thread when scheduled. | 09-18-2008 |
20080229313 | Project task management system for managing project schedules over a network - A client-server based project schedule management system comprises multiple editors accessible through a web browser to perform various scheduling tasks by members of a project. Client-executable code is generated by the server for the client, which is passed to the client along with schedule-related information for populating the respective editors. The client executes the server-generated code to display the respective editor with pertinent information populated therein, and to manage and maintain any new or updated information in response to user interactions with the editor. Rows of tasks are represented by corresponding objects, where editor elements are object attributes which are directly accessible by the respective objects. Database queries are generated by the server based on constant strings containing placeholders which are replaced with information used by the query. | 09-18-2008 |
20080229314 | STORAGE MEDIUM CONTAINING BATCH PROCESSING PROGRAM, BATCH PROCESSING METHOD AND BATCH PROCESSING APPARATUS - A batch processing program is performed in a computer. Job steps are executed in such a manner that, when the number of job steps is determined to exceed the maximum number of processes, successive job steps defined as pipe processing objects are divided into segments, each containing at most a maximum number of job steps corresponding to the maximum number of processes. A pipe is used for data transfer between job steps within the same divided segment, and a temporary file is used for data transfer between each set of adjacent job steps belonging to different segments. | 09-18-2008 |
20080229315 | DISTRIBUTED PROCESSING PROGRAM, SYSTEM, AND METHOD - According to an aspect of an embodiment, a method for controlling a distributed processing system comprising a management computer for managing distributed processing of a job program and a plurality of execution computers for executing a plurality of jobs comprises: dividing a request for processing of the job program into a plurality of jobs by the management computer; assigning said jobs from said management computer to said execution computers; transferring processed information obtained by executing said jobs by said execution computers to said management computer; storing said processed information in said execution computers; and resuming the dividing of a request for processing of the job program and the assigning of the jobs to said execution computers by the management computer, wherein assignment of the jobs is arranged such that at least one of the jobs for which the stored processed information is available is omitted from assignment. | 09-18-2008 |
20080235687 | SUPPLY CAPABILITY ENGINE WEEKLY POLLER - A method for executing and polling an operational slice of a supply capability engine. The method of polling is designed to query a DB2 table, searching for a predetermined, eligible operational slice to process. When an operational slice is detected that is ready to be processed, an entry is placed on a queue, typically in a second DB2 table. The operational slices on the queue are then processed sequentially. The poller monitors the duration of each operational slice, and generates an alert if any of the operational slices placed on the queue exceeds an allowable duration. | 09-25-2008 |
20080235688 | Enhanced Distance Calculation for Job Route Optimization - Systems and methods provide optimized distribution of jobs for execution among available workers. Categories are established for pairs of jobs based on a precise or estimated distance between each pair of jobs. Values are then assigned to the pairs of jobs and various decisions about job assignment and grouping can be made based upon the assigned value. The systems and methods allow certain job pairs to be excluded from consideration from grouping together, and emphasize which jobs are best suited for pairwise assignment, resulting in reduction of costs and necessary resources. | 09-25-2008 |
20080235689 | Scheduling in a communication server - A method of operating a communication server in handling data from a plurality of channels, which includes receiving data of a plurality of channels by the communication server; determining, for each of the channels, a target time by which the channel should be handled in order to avoid starvation of the channel; estimating the handling times required for processing sessions of the channels; and repeatedly selecting, by a scheduler of the communication server, a channel whose data is to be handled, responsive to the determined target times and the estimated handling times. In addition, a processor of the communication server is scheduled to perform, without interruption for handling of data of other channels, a processing session on the selected channel, in which the received data is prepared for transmission and placed in an output buffer, and at least one driver of the communication server transmits the data prepared for transmission independently of the processor of the communication server. | 09-25-2008 |
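The selection rule, choosing a channel from its target time and estimated handling time, resembles earliest-deadline-first with a service-time tie-break. A minimal sketch under that assumption; the `Channel` fields and the tie-break rule are illustrative, not the patent's exact policy.

```python
from dataclasses import dataclass

@dataclass
class Channel:
    name: str
    target_time: float    # latest time by which handling must start
    est_handling: float   # estimated processing-session length

def select_channel(channels):
    # Earliest target time first; shorter estimated sessions break ties.
    return min(channels, key=lambda c: (c.target_time, c.est_handling))

# Example: channel "b" has the nearer target time, so it is selected
# and its session then runs without interruption.
chans = [Channel("a", 10.0, 0.5), Channel("b", 4.0, 1.2)]
assert select_channel(chans).name == "b"
```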
20080235690 | Maintaining Processing Order While Permitting Parallelism - A system and method for maintaining processing order while permitting parallelism. Processing of a piece of work is divided into a plurality of stages. At each stage, a task advancing the work towards completion is performed. By performing processing as a sequence of tasks, processing can be done in parallel, with progress being made simultaneously on different pieces of work in different stages by a plurality of threads of execution. | 09-25-2008 |
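One common realization of this staged model is a chain of FIFO queues, each drained by its own thread; FIFO hand-off is what preserves processing order while the stages run in parallel. The sketch below is a generic illustration of that pattern, not the patented system itself.

```python
import queue, threading

def make_stage(task, inbox, outbox):
    """Start a thread that applies `task` to items flowing inbox -> outbox."""
    def worker():
        while True:
            item = inbox.get()
            if item is None:            # sentinel: shut down, pass it on
                outbox.put(None)
                return
            outbox.put(task(item))      # FIFO queues preserve order
    threading.Thread(target=worker, daemon=True).start()

# Two stages advancing each piece of work toward completion.
q0, q1, q2 = queue.Queue(), queue.Queue(), queue.Queue()
make_stage(lambda x: x * 2, q0, q1)     # stage 1
make_stage(lambda x: x + 1, q1, q2)     # stage 2
for n in [1, 2, 3]:
    q0.put(n)
q0.put(None)
print(list(iter(q2.get, None)))         # [3, 5, 7] (input order preserved)
```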
20080235691 | SYSTEM AND METHOD OF STREAM PROCESSING WORKFLOW COMPOSITION USING AUTOMATIC PLANNING - An automatic planning system is provided for stream processing workflow composition. End users provide requests to the automatic planning system. The requests are goal-based problems to be solved by the automatic planning system, which then generates plan graphs to form stream processing applications. A scheduler deploys and schedules the stream processing applications for execution within an operating environment. The operating environment then returns the results to the end users. | 09-25-2008 |
20080235692 | APPARATUS AND DATA STRUCTURE FOR AUTOMATIC WORKFLOW COMPOSITION - A stream processing system provides a description language for stream processing workflow composition. A domain definition data structure in the description language defines all stream processing components available to the stream processing system. Responsive to receiving a stream processing request, a planner translates the stream processing request into a problem definition. The problem definition defines stream properties that must be satisfied by property values associated with one or more output streams. The planner generates a workflow that satisfies the problem definition given the domain definition data structure. | 09-25-2008 |
20080244584 | TASK SCHEDULING METHOD - Provided is a method for scheduling activities. The method includes partitioning tasks provided for scheduling. The partitioning is accomplished by receiving at least one task including at least one data type. The data type is reviewed to determine at least one scheduling criteria and the task is routed to a queue based on the determined scheduling criteria. Each queue also has at least one queue characteristic. The method also includes scheduling the partitioned tasks. The scheduling is accomplished by retrieving the at least one task from the queue in response to a trigger. The retrieved task is routed to at least one scheduler. In a first instance the routing is based on the queue characteristic. In a second instance the routing is based on at least one scheduler characteristic. A scheduling system for performing this method is also provided. | 10-02-2008 |
20080244585 | SYSTEM AND METHOD FOR USING FAILURE CASTING TO MANAGE FAILURES IN COMPUTER SYSTEMS - A system and method for using failure casting to manage failures in computer systems. In accordance with an embodiment, the system uses a failure casting hierarchy to cast failures of one type into failures of another type. In doing this, the system allows incidents, problems, or failures to be cast into a (typically smaller) set of failures, which the system knows how to handle. In accordance with a particular embodiment, failures can be cast into a category that is considered reboot-curable. If a failure is reboot-curable, then rebooting the system will likely cure the problem. Examples include hardware failures, and reboot-specific methods that can be applied to disk failures and to failures within clusters of databases. The system can even be used to handle failures that were hitherto unforeseen—failures can be cast into known failures based on the failure symptoms, rather than any underlying cause. | 10-02-2008 |
20080244586 | DIRECTED SAX PARSER FOR XML DOCUMENTS - A method for processing XML documents using a SAX parser, implemented in a two-thread architecture having a main thread and a parsing thread. The parsing procedure is located in a parsing thread, which implements callback functions of a SAX parser and creates and executes the SAX parser. The main thread controls the parsing thread by sending target content to be searched for and wakeup signals to the parsing thread, and receives the content found by the parsing thread for further processing. In the parsing thread, each time a callback function is invoked by the SAX parser, it is determined whether the target content has been found. If it has, the parsing thread sends the found content to the main thread with a wakeup signal, and enters a sleep mode, whereby further parsing is halted until a wakeup signal with additional target content is received from the main thread. | 10-02-2008 |
20080244587 | Thread scheduling on multiprocessor systems - A thread scheduler may be used in a chip multiprocessor or symmetric multiprocessor system to schedule threads to processors. The scheduler may determine the bandwidth utilization of two threads in combination and whether that utilization exceeds a threshold value. If so, the threads may be scheduled on different processor clusters that do not share the same paths between the common memory and the processors. If not, the threads may be allocated on the same processor cluster, which shares cache among its processors. | 10-02-2008 |
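A toy version of the placement decision: if the combined bandwidth demand of a thread pair exceeds a threshold, split the pair across clusters with separate memory paths; otherwise co-locate the pair to share cache. The fractional bandwidth units, threshold, and cluster names are assumptions.

```python
def place_pair(bw_a: float, bw_b: float, threshold: float):
    """Return (cluster for thread A, cluster for thread B)."""
    if bw_a + bw_b > threshold:
        # Combined demand would saturate one memory path: separate them.
        return ("cluster0", "cluster1")
    # Light combined demand: co-locate to benefit from shared cache.
    return ("cluster0", "cluster0")

print(place_pair(0.30, 0.25, threshold=0.70))  # same cluster
print(place_pair(0.50, 0.45, threshold=0.70))  # split across clusters
```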
20080244588 | Computing the processor desires of jobs in an adaptively parallel scheduling environment - The present invention describes a system and method for scheduling jobs on a multiprocessor system. The invention includes schedulers for use in both work-sharing and work-stealing environments. Each system utilizes a task scheduler using historical usage information, in conjunction with a job scheduler to achieve its results. In one embodiment, the task scheduler measures the time spent on various activities, in conjunction with its previous processor allocation or previous desire, to determine an indication of its current processor desire. In another embodiment of the present invention, the task scheduler measures the resources used by the job on various activities. Based on these measurements, the task scheduler determines the efficiency of the job and an indication of its current processor desire. In another embodiment, the task scheduler measures the resources consumed executing the job and determines its efficiency and an indication of its current processor desire. | 10-02-2008 |
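One way to read the desire computation is as a feedback rule: measure how efficiently the job used its previous allocation, then raise or lower its reported desire accordingly. The sketch below uses a multiplicative-adjustment policy; the efficiency formula, the 0.8 cutoff, and the factor `rho` are assumptions rather than the patent's exact method.

```python
def next_desire(prev_desire, prev_alloc, work_time, total_time, rho=1.5):
    """Estimate a job's current processor desire from measured usage.

    work_time: processor-seconds spent on useful work last quantum.
    total_time: wall-clock length of the quantum.
    """
    efficiency = work_time / (total_time * max(prev_alloc, 1))
    if efficiency > 0.8:                   # job used its processors well
        return int(prev_desire * rho)      # signal a larger desire
    return max(1, int(prev_desire / rho))  # inefficient: signal a smaller one

# 36 processor-seconds of work over a 10 s quantum on 4 processors
# is 90% efficient, so the desire grows from 4 to 6.
print(next_desire(prev_desire=4, prev_alloc=4, work_time=36.0, total_time=10.0))
```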
20080244589 | Task manager - A task list contains information related to multiple tasks to be executed in a sequential manner. A task processor is provided to execute at least one task in the task list. A task management engine retrieves information from the task list and provides task execution instructions to the task processor. The task execution instructions provided by the task management engine are based on information retrieved from the task list. The task management engine receives execution results from the task processor and provides those results to a calling program that communicates with the task management engine. | 10-02-2008 |
20080244590 | METHOD FOR IMPROVING PERFORMANCE IN A COMPUTER STORAGE SYSTEM BY REGULATING RESOURCE REQUESTS FROM CLIENTS - The present invention discloses a method, apparatus and program storage device for providing non-blocking, minimum threaded two-way messaging. A Performance Monitor Daemon provides one non-blocked thread pair per processor to support a large number of connections. The thread pair includes an outbound thread for outbound communication and an inbound thread for inbound communication. The outbound thread and the inbound thread operate asynchronously. | 10-02-2008 |
20080244591 | INFORMATION PROCESSING SYSTEM AND STORAGE MEDIUM - An information processing system has a file memory, a schedule information memory, a reminder information memory that stores reminder information including identification information of a user, a registration deadline of a first electronic file, and a reminder submission time in connection with information indicating a registration location of the first electronic file in the file memory, a setting unit that, upon arrival of the reminder submission time, specifies a schedule item for reminding the user of the task in the schedule information of the user stored in the schedule information memory as an item scheduled for the registration deadline or for a day prior to the registration deadline, and a display data outputting unit that outputs, upon receipt of a request for displaying the schedule information, schedule information display data in which display information corresponding to the schedule item is associated with information on a link to the registration location. | 10-02-2008 |
20080250412 | Cooperative process-wide synchronization - One embodiment relates to a computer-implemented method of concurrently performing a process-wide operation in a multi-threaded process being executed on a computer system so as to result in more efficient performance of the computer system. A plurality of threads of the process concurrently participate in the process-wide operation. Finishing steps of the process-wide operation are performed by a last thread participating in the process-wide operation, regardless of whether the last thread is an initiator thread or a target thread. Other embodiments, aspects, and features are also disclosed. | 10-09-2008 |
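The "last participant finishes" idea can be expressed with a counted rendezvous: every participating thread does its share, decrements a counter under a lock, and whichever thread reaches zero, initiator or target, runs the finishing steps. A minimal sketch with hypothetical names:

```python
import threading

class ProcessWideOp:
    """Counted rendezvous: the last participant runs the finishing steps."""

    def __init__(self, participants: int, finish):
        self._lock = threading.Lock()
        self._remaining = participants
        self._finish = finish            # finishing-steps callback

    def participate(self, work):
        work()                           # this thread's share of the op
        with self._lock:
            self._remaining -= 1
            last = self._remaining == 0
        if last:
            # Runs on whichever thread finished last, regardless of
            # whether it was the initiator or a target thread.
            self._finish()
```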
20080250413 | Method and Apparatus for Managing Tasks - The method of managing a task provided by the present invention includes the steps of decomposing said task into at least two sub-tasks; and assigning said at least two sub-tasks to at least two function modules, so that said at least two function modules respectively complete said at least two sub-tasks, wherein said at least two function modules respectively belong to at least two different devices. By means of the present invention, a virtual device can be constructed more flexibly to complete specific tasks; thus not only can the resources of the devices be used more effectively, but the user's requirements in different situations can also be met. | 10-09-2008 |
20080250414 | Dynamically Partitioning Processing Across A Plurality of Heterogeneous Processors - A program is compiled into at least two object files: one object file for each of the supported processor environments. During compilation, code characteristics, such as data locality, computational intensity, and data parallelism, are analyzed and recorded in the object file. During run time, the code characteristics are combined with runtime considerations, such as the current load on the processors and the size of the data being processed, to arrive at an overall value. The overall value is then used to determine which of the processors will be assigned the task. The values are assigned based on the characteristics of the various processors. For example, if one processor is better at handling intensive computations against large streams of data, programs that are highly computationally intensive and process large quantities of data are weighted in favor of that processor. The corresponding object is then loaded and executed on the assigned processor. | 10-09-2008 |
20080256542 | Processor - In a processor including a plurality of register groups, while a task is being executed using one of the register groups, a context of a task to be executed next is restored into another one of the register groups. If the execution of the task currently being executed is suspended before the restoration starts, the task execution is continued by using one of the register groups in which a context of a task executed previously remains and executing the task. | 10-16-2008 |
20080256543 | Replicated State Machine - A replicated state machine includes multiple state machine replicas. In response to a request from a client, the state machine replicas can execute a service for the request in parallel. Each of the state machine replicas is provided with a request manager instance. The request manager instance includes a distributed consensus means and a selection means. The distributed consensus means commits a stimulus sequence of requests to be processed by each of the state machine replicas. The selection means selects requests to be committed to the stimulus sequence. The selection is based on an estimated service time of the request from the client. The estimated service time of the request from the client is based on a history of service times from the client provided by a feedback from the state machine replicas. As such, requests from multiple clients are serviced fairly. | 10-16-2008 |
20080263550 | A SYSTEM AND METHOD FOR SCHEDULED DISTRIBUTION OF UPDATED DOCUMENTS - The subject application is directed to a system and method for scheduled distribution of updated documents. Document data corresponding to at least one electronic document associated with a meeting is first stored in an associated data storage. Next, identification data representing each invitee to the meeting is stored in the storage. Event data corresponding to the scheduled timing of the meeting event is then stored in the associated data storage. Document processing operation data, corresponding to one or more document processing operations to be performed on the received document data, is also stored in the associated data storage. The stored document data is then retrieved from the data storage at an appointed time in accordance with the stored event data. At least one of the associated document processing operations is then commenced on the retrieved document data based upon the stored document processing operation data. | 10-23-2008 |
20080263551 | OPTIMIZATION AND UTILIZATION OF MEDIA RESOURCES - Method for scheduling a new backup job within a backup application to optimize a utilization of a media resource of said backup application. The backup application includes one or more previously scheduled backup jobs. The backup application calculates a current load of the media resource as a function of the previously scheduled backup jobs and the media resource and predicts a load value for the new backup job as a function of job parameters associated with the new backup job. Then, the backup application schedules the new backup job as a function of the calculated current load and the predicted load value such that the resulting load on the media resource will yield a minimum peak percentage utilization of the media resource. Alternatively, the backup application schedules the new backup job and previously scheduled backup jobs as a function of the calculated current load and the predicted load value such that the resulting load on the media resource will yield a minimum peak percentage utilization of the media resource. | 10-23-2008 |
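Minimizing peak utilization can be sketched as trying each candidate start slot, overlaying the predicted load of the new job on the current per-slot load, and keeping the start with the lowest resulting peak. The discrete time-slot model below is an assumption made for illustration.

```python
def schedule_backup(current_load, predicted_load, duration):
    """Pick the start slot that minimizes resulting peak utilisation.

    current_load: per-slot load from the previously scheduled jobs.
    predicted_load: predicted load value of the new backup job.
    duration: number of consecutive slots the new job occupies.
    """
    best_start, best_peak = 0, float("inf")
    for start in range(len(current_load) - duration + 1):
        trial = list(current_load)
        for t in range(start, start + duration):
            trial[t] += predicted_load          # overlay the new job
        peak = max(trial)
        if peak < best_peak:
            best_start, best_peak = start, peak
    return best_start

# Slots 2-3 are the quietest, so the job lands there.
print(schedule_backup([5, 9, 2, 2, 6], predicted_load=3, duration=2))  # -> 2
```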
20080263552 | MULTITHREAD PROCESSOR AND METHOD OF SYNCHRONIZATION OPERATIONS AMONG THREADS TO BE USED IN SAME - The Thread Data Base | 10-23-2008 |
20080263553 | Dynamic Service Level Manager for Image Pools - An embodiment of the present invention relates to the field of computer technology; in particular, it relates to a method for provisioning images for virtual machines, wherein, for a predefined application type, a pool of at least one image of a virtual machine performing said application is loaded in the main memory of the computer. | 10-23-2008 |
20080263554 | Method and System for Scheduling User-Level I/O Threads - The present invention is directed to a user-level thread scheduler that employs a service that propagates to the user level, continuously as it is updated in the kernel, the kernel-level state necessary to determine whether an I/O operation would block. In addition, the user-level thread scheduler uses services that propagate to the user level other types of information related to the state and content of active file descriptors. Using this information, the user-level thread package determines when I/O requests can be satisfied without blocking and implements pre-defined scheduling policies. | 10-23-2008 |
20080271025 | SYSTEM AND METHOD FOR CREATING AN ASSURANCE SYSTEM IN A PRODUCTION ENVIRONMENT - An assurance system for testing the functionality of a computer system by creating an overlay of the computer system and routing selected traffic to the overlay while assessing the performance of the system. The system may be used for purposes of managing the testing of the computer system and delivery of comprehensive reports of the likely results on the computer system based on results generated by the assurance system, including such things as configuration changes to the environment, environment load and stress conditions, environment security, software installation to the environment, and environment performance levels among other things. | 10-30-2008 |
20080271026 | Systems and Media for Controlling Temperature in a Computer System - Systems and media for controlling temperature of a system are disclosed. More particularly, hardware, software and/or firmware for controlling the temperature of a computer system are disclosed. Embodiments may include receiving component temperatures for a group of components and selecting a component to perform an activity based at least partially on the component temperatures. In one embodiment, the lowest temperature component may be selected to perform the activity. Other embodiments may provide for determining an average temperature of the components, and if the average temperature exceeds a threshold, delaying or reducing the performance of the components. In some embodiments, components may include computer processors, memory modules, hard drives, etc. | 10-30-2008 |
20080276240 | Reordering Data Responses - A system includes a deterministic system, and a controller electrically coupled to the deterministic system via a link, wherein the controller comprises a transaction scheduling mechanism that allows data responses from the deterministic system, corresponding to requests issued from the controller, to be returned out of order. | 11-06-2008 |
20080282246 | COMPILER AIDED TICKET SCHEDULING OF TASKS IN A COMPUTING SYSTEM - A method of scheduling tasks for execution in a computer system includes determining a dynamic worst case execution time for a non-periodic task. The dynamic worst case execution time is based on an actual execution path of the non-periodic task. An available time period is also determined, wherein the available time period is an amount of time available for execution of the non-periodic task. The non-periodic task is scheduled for execution if the dynamic worst case execution time is less than the available time period. | 11-13-2008 |
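The admission test itself reduces to a single comparison: run the non-periodic task only if its path-specific (dynamic) worst-case execution time fits inside the available window. A one-function sketch, with microsecond units assumed:

```python
def admit(dynamic_wcet_us: int, available_us: int) -> bool:
    """Schedule the non-periodic task only if its dynamic WCET, derived
    from its actual execution path, fits in the available time period."""
    return dynamic_wcet_us < available_us

print(admit(dynamic_wcet_us=350, available_us=500))  # True: schedule it
print(admit(dynamic_wcet_us=700, available_us=500))  # False: defer it
```

The point of the dynamic bound is that it follows the task's actual execution path, so it is typically far tighter than a static whole-program WCET, letting more non-periodic work be admitted safely.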
20080282247 | Method and Server for Synchronizing a Plurality of Clients Accessing a Database - The invention relates to a method of synchronizing a plurality of clients accessing a database, each client executing a plurality of tasks on the database, wherein the method comprises, for each of the clients, the steps of accumulating the time of one or more tasks performed by the client after the issuance of a synchronization request, and rejecting a request for the opening of a new task of the client if the accumulated task time exceeds a maximum accumulated task time. | 11-13-2008 |
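A compact reading of this method: once a synchronization request is pending, each client's task time is accumulated, and requests to open new tasks are refused past a budget so the synchronization is not starved. The sketch below makes that bookkeeping explicit; the class and method names are hypothetical.

```python
class ClientSync:
    """Per-client task-time bookkeeping for the synchronization window."""

    def __init__(self, max_accumulated: float):
        self.max_accumulated = max_accumulated
        self.accumulated = 0.0
        self.sync_pending = False

    def on_sync_request(self):
        self.sync_pending = True          # start accumulating task time
        self.accumulated = 0.0

    def on_task_finished(self, duration: float):
        if self.sync_pending:
            self.accumulated += duration  # time of tasks after the request

    def may_open_task(self) -> bool:
        # Reject the opening of a new task once the budget is exceeded,
        # so the pending synchronization can eventually proceed.
        return not (self.sync_pending
                    and self.accumulated > self.max_accumulated)
```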
20080282248 | Electronic computing device capable of specifying execution time of task, and program therefor - When an execution time of a task is short, the execution time of the task can be reliably specified, and erroneous calculation of an execution time due to other processing can be prevented. A task designated in advance and a task whose execution is initiated are collated with each other. When the tasks match each other, the initiation time of the executing task is recorded at the initiation of the executing task, and the difference between the termination time and the initiation time is recorded as an execution time when the executing task terminates. | 11-13-2008 |
20080282249 | METHOD AND SYSTEM FOR PERFORMING REAL-TIME OPERATION - An information processing system performs a real-time operation including a combination of a plurality of tasks. The system includes a plurality of processors, a unit which stores structural description information and a plurality of programs describing procedures corresponding to the tasks, the structural description information indicating a relationship in input/output between the programs and including cost information concerning time required for executing each of the programs, a unit which determines an execution start timing and execution term of each of a plurality of threads for execution of the programs based on the structural description information, and a unit which performs a scheduling operation of assigning the threads to at least one of the processors according to a result of the determining. | 11-13-2008 |
20080282250 | COMPONENT INTEGRATOR - Techniques allow for communication with and management of multiple external components. A component manager communicates with one or more component adapters. Each component adapter communicates with an external component and is able to call the methods, functions, procedures, and other operations of the external component. The component manager associates these external operations with local operations, such that an application may use local operation names to invoke the external operations. Furthermore, the component manager has component definitions and operation definitions that describe the component adapters and operations, including input and output parameters and the like. The component manager is able to receive a group of data including a local operation and a list of input and output parameters and determine from the foregoing information which external operation to call and which component adapter has access to the external operation. | 11-13-2008 |
20080295099 | Disk Drive for Handling Conflicting Deadlines and Methods Thereof | 11-27-2008 |
20080295100 | SYSTEM AND METHOD FOR DIAGNOSING AND MANAGING INFORMATION TECHNOLOGY RESOURCES | 11-27-2008 |
20080295101 | ELECTRONIC DOCUMENT MANAGER | 11-27-2008 |
20080295102 | COMPUTING SYSTEM, METHOD OF CONTROLLING THE SAME, AND SYSTEM MANAGEMENT UNIT | 11-27-2008 |
20080295103 | DISTRIBUTED PROCESSING METHOD | 11-27-2008 |
20080301683 | Performing an Allreduce Operation Using Shared Memory - Methods, apparatus, and products are disclosed for performing an allreduce operation using shared memory that include: receiving, by at least one of a plurality of processing cores on a compute node, an instruction to perform an allreduce operation; establishing, by the core that received the instruction, a job status object for specifying a plurality of shared memory allreduce work units, the plurality of shared memory allreduce work units together performing the allreduce operation on the compute node; determining, by an available core on the compute node, a next shared memory allreduce work unit in the job status object; and performing, by that available core on the compute node, that next shared memory allreduce work unit. | 12-04-2008 |
20080301684 | Multiple instance management for workflow process models - A first instance and a second instance of an activity of a process model may be executed, the first instance, the second instance, and the activity being associated with activity state data describing one or more states thereof. A co-process associated with the first instance, the second instance, and the activity may be spawned, and the co-process may be executed based on the activity state data. | 12-04-2008 |
20080301685 | Identity-aware scheduler service - In a computing environment, clients and scheduling services are arranged to coordinate time-based services. Representatively, the client and scheduler engage in an http session whereby the client creates an account (if the first usage) indicating various identities and rights of the client for use with a scheduling job. Thereafter, one or more scheduling jobs are registered, including an indication of what payloads are needed, where they are needed, and when they are needed. Upon appropriate timing, the payloads are delivered to the proper locations, but the scheduling of events is no longer entwined with the underlying applications in need of scheduled events. Monitoring of jobs is also possible, as is establishment of appropriate communication channels between the parties. Noticing, encryption, and authentication are still other aspects, as is launching third-party services before payload delivery. Still other embodiments contemplate publishing an API or other particulars so the service can be used in mash-up applications. | 12-04-2008 |
20080301686 | METHOD AND APPARATUS FOR EXTENDING OPERATIONS OF AN APPLICATION IN A DATA PROCESSING SYSTEM - A method, an apparatus, and computer instructions are provided for extending operations of an application in a data processing system. A primary operation is executed. All extended operations of the primary operation are cached and pre and post operation identifiers are identified. For each pre operation identifier, a pre operation instance is created and executed. For each post operation identifier, a post operation instance is created and executed. | 12-04-2008 |
20080307419 | Lazy kernel thread binding - Various technologies and techniques are disclosed for providing lazy kernel thread binding. User mode and kernel mode portions of thread scheduling are decoupled so that a particular user mode thread can be run on any one of multiple kernel mode threads. A dedicated backing thread is used whenever a user mode thread wants to perform an operation that could affect the kernel mode thread, such as a system call. For example, a notice is received that a particular user mode thread running on a particular kernel mode thread wants to make a system call. A dedicated backing thread that has been assigned to the particular user mode thread is woken. State is shuffled from the user mode thread to the dedicated backing thread using a state shuffling process. The particular kernel mode thread is put to sleep. The system call is executed using the dedicated backing thread. | 12-11-2008 |
20080307420 | Scheduler Supporting Web Service Invocation - The present invention proposes a method and a corresponding system for scheduling invocation of web services from a central point of control. A scheduler accesses a workload database, which associates an execution agent and a descriptor with each submitted job. The descriptor identifies a desired web service, an address of a corresponding WSDL document, and the actual content of a request message to be passed to the web service. Whenever the job is submitted for execution, the scheduler sends the job's descriptor to the associated agent. In response thereto, the agent downloads the WSDL document that specifies the structure of the messages supported by the web service. The scheduler builds a request message for the web service embedding the desired content into the structure specified in the WSDL document. The agent sends the request message to an endpoint implementing the web service, so as to cause its invocation. | 12-11-2008 |
20080307421 | FLOW PROCESS EXECUTION METHOD, APPARATUS AND PROGRAM - A flow process executing apparatus receives an instruction specifying a position in a first flow process description document. When the process reaches the specified position during execution of a flow process in accordance with the first flow process description document, the flow process executing apparatus stops the flow process in accordance with the first flow process description document, and resumes the stopped flow process in accordance with a second flow process description document. | 12-11-2008 |
20080307422 | SHARED MEMORY FOR MULTI-CORE PROCESSORS - A shared memory for multi-core processors. Network components configured for operation in a multi-core processor include an integrated memory that is suitable for, e.g., use as a shared on-chip memory. The network component also includes control logic that allows access to the memory from more than one processor core. Typical network components provided in various embodiments of the present invention include routers and switches. | 12-11-2008 |
20080307423 | Schedule Based Cache/Memory Power Minimization Technique - A system includes a task scheduler ( | 12-11-2008 |
20080307424 | Scheduling Method For Polling Device Data - A dispatching method for polling device data. The method comprises: sorting managed devices according to their types, sorting the various types of data of each device so as to form different modules, and assigning a priority attribute and a polling period attribute to each module; dividing the managed devices into two sets: one set consisting of devices to be polled and the other set consisting of devices whose connection states need to be detected; and periodically polling each module in the set of devices to be polled according to its priority and polling period. Different polling periods can be set and different polling policies can be applied according to data changeability. Polling policies can be changed flexibly and in real time based on the condition of the devices. | 12-11-2008 |
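Per-module priority and period attributes suggest a due-set scan each cycle: collect the modules whose polling period has elapsed, then poll them in priority order. A minimal loop body under those assumptions; representing modules as plain dicts is purely illustrative.

```python
import time

def poll_cycle(modules, poll, now=None):
    """Poll every module whose period has elapsed, highest priority first."""
    now = time.monotonic() if now is None else now
    due = [m for m in modules if now - m["last_polled"] >= m["period"]]
    for m in sorted(due, key=lambda m: m["priority"]):   # 0 = highest
        poll(m)
        m["last_polled"] = now

# Both modules are due at t=90; the priority-0 module is polled first.
mods = [{"priority": 1, "period": 30, "last_polled": 0.0},
        {"priority": 0, "period": 60, "last_polled": 0.0}]
poll_cycle(mods, poll=lambda m: print("polling priority", m["priority"]),
           now=90.0)
```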
20080313635 | JOB ALLOCATION METHOD FOR DOCUMENT PRODUCTION - Methods and systems of processing print jobs are disclosed. A feasible route for processing each of a plurality of jobs is determined. For each feasible route, the time to process the job via the feasible route is determined. Each job is assigned to a first feasible route. A first objective function value is determined using a time to process each job assigned to each autonomous cell. A job is selected. A second feasible route is selected for the selected job. A second objective function value is determined by substituting the second feasible route for the first feasible route for the selected job. If the first value plus a threshold exceeds the second value, the second value replaces the first value, and the second feasible route replaces the first feasible route. Selection and substitution are repeated for each job. The jobs are then processed. | 12-18-2008 |
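The accept-if-better substitution reads like a local-search pass over routes. The sketch below mirrors the abstract's acceptance test, the first value plus a threshold exceeding the second value; the `objective` and `alt_route` callables are assumed interfaces, not part of the patent.

```python
def improve_routes(jobs, routes, objective, alt_route, threshold=0.0):
    """One local-search pass: try a second feasible route for each job.

    routes: dict mapping job -> its current (first) feasible route.
    objective: callable scoring a full routes dict (lower is better).
    alt_route: callable proposing a second feasible route for a job.
    """
    best = objective(routes)
    for job in jobs:
        candidate = dict(routes)
        candidate[job] = alt_route(job, routes[job])  # second feasible route
        value = objective(candidate)
        if best + threshold > value:          # first value + threshold
            routes, best = candidate, value   # exceeds second: keep it
    return routes, best
```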
20080313636 | SYSTEM AND METHOD FOR SECURE AUTOMATED DATA COLLECTION - The invention provides for an automated data collection system having an endpoint coupled to at least one gaming machine to collect data from the at least one gaming machine, at least one concentrator in communication with the endpoint via a personal area network to obtain the data from the endpoint, and at least one remote collection server in communication with the at least one concentrator to receive the data from the at least one concentrator, wherein the data is pushed from the endpoint to the at least one remote collection server at predefined time intervals without interrupting game play on the at least one gaming machine. | 12-18-2008 |
20080313637 | PREDICTION-BASED DYNAMIC THREAD POOL MANAGEMENT METHOD AND AGENT PLATFORM USING THE SAME - The present invention relates to a prediction-based dynamic thread pool management method and an agent platform using the same. A prediction-based dynamic thread pool management method according to the present invention includes: (a) calculating a thread variation to a variation of the number of threads at a time t | 12-18-2008 |
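Since the abstract's predictor is cut off in this listing, the sketch below substitutes the simplest plausible one, a linear extrapolation of the recent thread-count trend, purely to show where a prediction plugs into pool resizing; the extrapolation and the spare margin are assumptions, not the patent's method.

```python
def resize_pool(history, current_size, margin=2):
    """Return a new pool size from a predicted thread demand.

    history: recent observations of the number of active threads.
    """
    if len(history) < 2:
        return current_size
    trend = history[-1] - history[-2]    # thread variation over one interval
    predicted = history[-1] + trend      # demand expected next interval
    return max(predicted + margin, 1)    # keep a small spare margin

# Demand grew 10 -> 13, so predict 16 and size the pool to 18.
print(resize_pool([8, 10, 13], current_size=12))  # -> 18
```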
20080320477 | METHOD FOR SCHEDULING AND CUSTOMIZING SURVEILLANCE TASKS WITH WEB-BASED USER INTERFACE - A customized surveillance task management system is implemented to intelligently schedule tasks for the user. Via an internet connection, a user accesses a list of surveillances to be accomplished. The schedule of surveillances is created from information initially loaded into a centralized database that is subsequently analyzed by the schedule engine and written back into the database. Following execution of these surveillances, the user again accesses the system to input the data acquired. The user inputs data to the database through preset database fields rendered to the client machine via the internet interface. This data is again analyzed by the scheduling engine which, with the help of the scheduling, criticality, random sampling, and surveillance method assistants, provides the user with an updated schedule list and the best set of surveillance methods dependent upon pass/fail rates and criticality of failures. | 12-25-2008 |
20080320478 | AGE MATRIX FOR QUEUE DISPATCH ORDER - An apparatus for queue allocation. An embodiment of the apparatus includes a dispatch order data structure, a bit vector, and a queue controller. The dispatch order data structure corresponds to a queue. The dispatch order data structure stores a plurality of dispatch indicators associated with a plurality of pairs of entries of the queue to indicate a write order of the entries in the queue. The bit vector stores a plurality of mask values corresponding to the dispatch indicators of the dispatch order data structure. The queue controller interfaces with the queue and the dispatch order data structure. The queue controller excludes at least some of the entries from a queue operation based on the mask values of the bit vector. | 12-25-2008 |
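An age matrix over queue entries lets the controller pick the oldest unmasked entry without timestamps: entry i may dispatch when it is older than every other candidate. A small sketch using a boolean matrix, where `order[i][j]` being True means entry i was written before entry j; this encoding is an assumption about the dispatch indicators.

```python
def oldest_ready(order, mask):
    """Return the index of the oldest entry not excluded by the mask."""
    n = len(order)
    candidates = [i for i in range(n) if not mask[i]]   # True = excluded
    for i in candidates:
        # i is dispatchable if it is older than every other candidate.
        if all(order[i][j] for j in candidates if j != i):
            return i
    return None

order = [[False, True,  True],    # entry 0 older than entries 1 and 2
         [False, False, True],    # entry 1 older than entry 2
         [False, False, False]]
# Entry 0 is masked out, so entry 1 (the next oldest) is picked.
print(oldest_ready(order, mask=[True, False, False]))   # -> 1
```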
20080320479 | TELECOM ADAPTER LAYER SYSTEM, AND METHOD AND APPARATUS FOR ACQUIRING NETWORK ELEMENT INFORMATION - A Telecom Adapter Layer (TAL) system includes a management unit and an execution unit connected via a distributed bus. In order to acquire network element (NE) information, an external service module sends a Get Info request to the execution unit according to the reference of the execution unit, and the execution unit acquires information from an NE according to the request and returns the NE information acquired from the NE to the external service module. The execution unit can be deployed in a device other than the service module or the management unit. The TAL system may be expanded to include more than one management unit and/or execution unit. By acquiring NE information from the execution unit, the TAL system is able to perform NE management across a firewall. | 12-25-2008 |
20080320480 | SYSTEM FOR DETERMINING ARRAY SEQUENCE OF A PLURALITY OF PROCESSING OPERATIONS - A method and system for determining an array sequence of processing operations to maximize the efficiency of steel plate processing. Between two processing operations, a first sequence constraint based on a first attribute of each processing operation and a second sequence constraint based on a second attribute of each processing operation are defined. The system selects, as a cluster, at least one processing operation having a common attribute value of the first attribute, arranged in a sequence satisfying the second sequence constraint. The system regards the first sequence constraint as a sequence constraint between a plurality of clusters, and arranges the plurality of clusters in a sequence maximizing the efficiency of processing. | 12-25-2008 |
20090007119 | METHOD AND APPARATUS FOR SINGLE-STEPPING COHERENCE EVENTS IN A MULTIPROCESSOR SYSTEM UNDER SOFTWARE CONTROL - An apparatus and method are disclosed for single-stepping coherence events in a multiprocessor system under software control in order to monitor the behavior of a memory coherence mechanism. Single-stepping coherence events in a multiprocessor system is made possible by adding one or more step registers. By accessing these step registers, one or more coherence requests are processed by the multiprocessor system. The step registers determine if the snoop unit will operate by proceeding in a normal execution mode, or operate in a single-step mode. | 01-01-2009 |
20090007120 | SYSTEM AND METHOD TO OPTIMIZE OS SCHEDULING DECISIONS FOR POWER SAVINGS BASED ON TEMPORAL CHARACTERISTICS OF THE SCHEDULED ENTITY AND SYSTEM WORKLOAD - In some embodiments, the invention involves a system and method to enhance an operating system's ability to schedule ready threads, specifically to select a logical processor on which to run the ready thread, based on platform policy. Platform policy may be performance-centric, power-centric, or a balance of the two. Embodiments of the present invention use temporal characteristics of the system utilization, or workload, and/or temporal characteristics of the ready thread in choosing a logical processor. Other embodiments are described and claimed. | 01-01-2009 |
20090007121 | Method And Apparatus To Enable Runtime Processor Migration With Operating System Assistance - In a method for switching to a spare processor during runtime, a processing system determines that execution should be migrated off of an active processor. An operating system (OS) scheduler and at least one device are then paused, and the active processor is put into an idle state. State data from writable and substantial non-writable stores in the active processor is loaded into the spare processor. Interrupt routing table logic for the processing system is dynamically reprogrammed to direct external interrupts to the spare processor. The active processor may then be off-lined, and the device and OS scheduler may be unpaused or resumed. Threads may then be dispatched to the spare processor for execution. Other embodiments are described and claimed. | 01-01-2009 |
20090007122 | AUTOMATIC RELEVANCE FILTERING - A computer-implemented method and an apparatus for use in a computing apparatus are disclosed. The method includes determining a context and a data requirement for a candidate action to be selected, the selection specifying an action in a workflow; and filtering the candidate actions for relevance in light of the context and the data requirement. The apparatus, in a first aspect, includes a program storage medium encoded with instructions that, when executed by a computing device, performs the method. In a second aspect, the apparatus includes a computing apparatus programmed to perform the method. | 01-01-2009 |
20090019442 | Changing a Scheduler in a Virtual Machine Monitor - Machine-readable media, methods, and apparatus are described to change a first scheduler in the virtual machine monitor. In some embodiments, a second scheduler is loaded in a virtual machine monitor when the virtual machine monitor is running; and then is activated to handle a scheduling request for a scheduling process in place of the first scheduler, when the virtual machine monitor is running. | 01-15-2009 |
20090019443 | METHOD AND SYSTEM FOR FUNCTION-SPECIFIC TIME-CONFIGURABLE REPLICATION OF DATA MANIPULATING FUNCTIONS - The system ( | 01-15-2009 |
20090019444 | Information processing and control - An information processing apparatus includes an occurrence number counter that counts events occurring in each of a plurality of CPUs. The apparatus performs the functions of: storing an accumulated occurrence number of events, which occurred while a thread was being executed by each of the CPUs, in a thread storage area of the thread, the accumulated occurrence number being associated with the CPU; storing, in the thread storage area, the value of the occurrence number counter of the CPU, the value having been counted before the thread is resumed by the CPU; and, in a case where the CPU terminates execution of the thread, adding, to the accumulated occurrence number that has been stored in the accumulated number storing unit in correspondence with the CPU, a difference value obtained by subtracting the counter value that has been stored in the start-time number storing unit of the thread from the counter value of the occurrence number counter of the CPU. | 01-15-2009 |
20090024999 | Methods, Systems, and Computer-Readable Media for Providing an Indication of a Schedule Conflict - Methods, systems, and computer-readable media provide for providing an indication of a schedule conflict. According to embodiments, a method for providing an indication of a schedule conflict is provided. According to the method, whether one of a plurality of technicians is scheduled but not dispatched or dispatched but not scheduled is determined. In response to determining that the one of the plurality of technicians is scheduled but not dispatched or dispatched but not scheduled, an indication that the one of the plurality of technicians is scheduled but not dispatched or dispatched but not scheduled is provided. | 01-22-2009 |
20090025000 | METHODS AND SYSTEMS FOR PROCESSING HEAVY-TAILED JOB DISTRIBUTIONS IN A DOCUMENT PRODUCTION ENVIRONMENT - A production printing system for processing a plurality of print jobs may include a plurality of print job processing resources and a computer-readable storage medium including one or more programming instructions for performing a method of processing a plurality of print jobs in a document production environment. The method may include identifying a print job size distribution for a plurality of print jobs in a document production environment and determining whether the print job size distribution exhibits a heavy-tail characteristic. For each print job size distribution that exhibits a heavy-tail characteristic, the plurality of print jobs may be grouped into a plurality of subgroups such that at least one of the plurality of subgroups exhibits a non-heavy-tail characteristic, and each job in the at least one of the plurality of subgroups exhibiting the non-heavy-tail characteristic may be processed by one or more print job processing resources. | 01-22-2009 |
20090025001 | METHODS AND SYSTEMS FOR PROCESSING A SET OF PRINT JOBS IN A PRINT PRODUCTION ENVIRONMENT - A system and method for routing and processing print jobs within a print job set considers the setup characteristics of each print job. Each print job set may be classified as a first job processing speed set, a second job processing speed set, or another job processing speed set based on the corresponding setup characteristics. First job processing speed sets are routed to a first group of print job processing resources, while second job processing speed sets are routed to a second group of print job processing speed resources. Each resource group may include an autonomous cell. | 01-22-2009 |
20090025002 | METHODS AND SYSTEMS FOR ROUTING LARGE, HIGH-VOLUME, HIGH-VARIABILITY PRINT JOBS IN A DOCUMENT PRODUCTION ENVIRONMENT - A system of scheduling a plurality of print jobs in a document production environment may include a plurality of print job processing resources and a computer-readable storage medium including programming instructions for performing a method of processing a plurality of print jobs. The method may include receiving a plurality of print jobs and setup characteristics corresponding to each print job, grouping each print job having a job size that exceeds a job size threshold into a large job subgroup and grouping each print job having a job size that does not exceed the job size threshold into a small job subgroup. The large job subgroup may be classified as a high setup subgroup or a low setup subgroup based on the setup characteristics corresponding to each print job in the large job subgroup. The large job subgroup may be routed to a large job autonomous cell. | 01-22-2009 |
20090025003 | METHODS AND SYSTEMS FOR SCHEDULING JOB SETS IN A PRODUCTION ENVIRONMENT - A system of scheduling a plurality of print jobs in a document production environment may include resources and a computer-readable storage medium including programming instructions for performing a method of processing print jobs. The method may include receiving print jobs and setup characteristics corresponding to each print job. Each print job may have a corresponding job size. The print jobs may be grouped into sets based on a common characteristic and each set may be identified as a fast job set or a slow job set based on setup characteristics associated with the set and the job sizes of the print jobs in the set. The fast job set may be routed to a fast job autonomous cell and the slow job set may be routed to a slow job autonomous cell. | 01-22-2009 |
20090031312 | Method and Apparatus for Scheduling Grid Jobs Using a Dynamic Grid Scheduling Policy - The illustrative embodiments described herein provide a computer-implemented method, apparatus, and computer program product for scheduling grid jobs. In one embodiment, a process identifies information describing available resources on a set of nodes on a heterogeneous grid computing system to form resource availability information. The process identifies a set of static scheduling policies for a set of static schedulers that manage the set of nodes. The process also identifies a static scheduling status for a portion of the set of static schedulers. The process creates a dynamic grid scheduling policy using the resource availability information, the set of static scheduling policies, and the static scheduling status. The process also schedules a set of grid jobs for execution by the available resources using the dynamic grid scheduling policy. | 01-29-2009 |
20090031313 | EXTENSIBLE WEB SERVICES SYSTEM - Techniques for extending a Web services system are provided. One or more Web service applications (WSA) execute on a device. Each WSA provides at least one service. A WSA implements a particular version of a Web Services (WS) specification that is previous to a current version of the WS specification. In one technique, an orchestration module is added that coordinates the interaction between the WSA and one or more extension modules. While processing the request, the WSA calls the orchestration module. The orchestration module, based on one or more attributes of a request, determines whether an extension module, that comprises logic, should be called to process a portion of the request. The logic corresponds to a difference between the previous version and the current version. After an extension module finishes processing the portion of the request, the WSA is caused to further process the request. | 01-29-2009 |
20090031314 | FAIRNESS IN MEMORY SYSTEMS - Architecture for a multi-threaded system that applies fairness to thread memory request scheduling such that access to the shared memory is fair among different threads and applications. A fairness scheduling algorithm provides fair memory access to different threads in multi-core systems, thereby avoiding unfair treatment of individual threads, thread starvation, and performance loss caused by a memory performance hog (MPH) application. The thread slowdown is determined by considering the thread's inherent memory-access characteristics, computed as the ratio of the real latency that the thread experiences and the latency (ideal latency) that the thread would have experienced if it had run as the only thread in the same system. The highest and lowest slowdown values are then used to generate an unfairness parameter which when compared to a threshold value provides a measure of fairness/unfairness currently occurring in the request scheduling process. The architecture provides a balance between fairness and throughput. | 01-29-2009 |
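The slowdown ratio and unfairness parameter in 20090031314 above lend themselves to a worked example. A minimal Python sketch with invented names (`slowdown`, `is_unfair`) and latencies in arbitrary units, assuming the unfairness parameter is the highest slowdown divided by the lowest:

```python
def slowdown(real_latency: float, ideal_latency: float) -> float:
    # Slowdown = latency the thread actually experiences divided by the
    # latency it would have experienced running alone in the system.
    return real_latency / ideal_latency

def is_unfair(latencies: dict, threshold: float) -> bool:
    # latencies maps thread id -> (real_latency, ideal_latency).
    slowdowns = [slowdown(real, ideal) for real, ideal in latencies.values()]
    unfairness = max(slowdowns) / min(slowdowns)  # highest over lowest slowdown
    return unfairness > threshold

# Thread B is slowed 4x while thread A is slowed 1.2x: unfairness = 3.33 > 2.
print(is_unfair({"A": (120.0, 100.0), "B": (400.0, 100.0)}, threshold=2.0))
```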
20090031315 | Scheduling Method and Scheduling Apparatus - Thread information is retained in a main memory. The thread information includes a bit string and last executed information. Each bit of the bit string is allocated to a thread, and the number and the value of the bit indicate the number of the thread and whether or not the thread is in an executable state, respectively. The last executed information is the number of a last executed thread. A processor rotates the bit string so that a bit indicating the last executed thread comes to the end of the bit string. It searches the rotated bit string for a bit corresponding to a thread in the executable state in succession from the top, and selects the number of the first obtained bit as the number of the next thread to be executed. Then, the thread information is updated by changing the value of the bit of this number to indicate not being executable, and setting the last executed information to the number of this bit. This operation is performed by using an atomic command. | 01-29-2009 |
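The rotate-and-scan selection in 20090031315 is equivalent to a modular scan that starts just past the last-executed thread's bit; a sketch under that reading, with invented names and without the atomic-update machinery the abstract describes:

```python
def pick_next_thread(bits: int, n: int, last: int):
    # bits: bit i is set when thread i is executable; last: index of the
    # last-executed thread. Rotating the bit string so `last` sits at the
    # end and scanning from the top is equivalent to this modular scan.
    for offset in range(1, n + 1):
        i = (last + offset) % n
        if bits & (1 << i):
            return i
    return None  # no thread is executable

# Threads 0 and 3 are executable and thread 3 ran last, so 0 runs next.
print(pick_next_thread(0b1001, n=4, last=3))
```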
20090031316 | Scheduling in a High-Performance Computing (HPC) System - In one embodiment, a method for scheduling in a high-performance computing (HPC) system includes receiving a call from a management engine that manages a cluster of nodes in the HPC system. The call specifies a request including a job for scheduling. The method further includes determining whether the request is spatial, compact, or nonspatial and noncompact. The method further includes, if the request is spatial, generating one or more spatial combinations of nodes in the cluster and selecting one of the spatial combinations that is schedulable. The method further includes, if the request is compact, generating one or more compact combinations of nodes in the cluster and selecting one of the compact combinations that is schedulable. The method further includes, if the request is nonspatial and noncompact, identifying one or more schedulable nodes and generating a nonspatial and noncompact combination of nodes in the cluster. | 01-29-2009 |
20090037916 | PROCESSOR - The present invention provides a processor that cyclically executes a plurality of threads in accordance with an execution time allocated to each of the threads, comprising a reconfigurable integrated circuit. The processor stores circuit configuration information sets respectively corresponding to the plurality of threads, reconfigures a part of the integrated circuit based on the circuit configuration information sets, and sequentially executes each thread using the integrated circuit that has been reconfigured based on one of the configuration information sets that corresponds to the thread. While executing a given thread, the processor selects a thread to be executed next, and reconfigures a part of the integrated circuit that is not currently used for execution of the given thread, based on a circuit configuration information set corresponding to the selected thread. | 02-05-2009 |
20090037917 | Apparatus and method capable of using reconfigurable descriptor in system on chip - An apparatus and method capable of using a reconfigurable descriptor in a System on Chip (SoC) is provided. The apparatus includes: a Central Processing Unit (CPU) for receiving parameters, each of which defines a descriptor, from a user and for providing the parameters to a controller. The controller defines the descriptor by reading target data indicated by the received parameters. | 02-05-2009 |
20090044189 | PARALLELISM-AWARE MEMORY REQUEST SCHEDULING IN SHARED MEMORY CONTROLLERS - Parallelism-aware scheduling of memory requests of threads in shared memory controllers. Parallel scheduling is achieved by prioritizing threads that already have requests being serviced in the memory banks. A first algorithm prioritizes requests of the last-scheduled thread that is currently being serviced. This is accomplished by tracking the thread that generated the last-scheduled request (if the request is still being serviced), and then scheduling another request from the same thread if there is an outstanding ready request from the same thread. A second algorithm prioritizes the requests of all threads that are currently being serviced. This is accomplished by tracking threads that have at least one request currently being serviced in the banks, and assigning the highest priority to these threads in the scheduling decisions. If there are no outstanding requests from any thread having requests that are being serviced, the algorithm defaults back to a baseline scheduling algorithm. | 02-12-2009 |
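A toy model of the first algorithm in 20090044189, prioritizing the last-scheduled thread while it still has a request in service and otherwise falling back to a first-come-first-served baseline; the request representation is an assumption:

```python
def schedule_next(ready, last_thread, in_service):
    # ready: list of (thread_id, request) in arrival order.
    # in_service: set of thread ids with a request currently in the banks.
    if last_thread in in_service:
        for thread_id, request in ready:
            if thread_id == last_thread:      # keep exploiting that thread's
                return thread_id, request     # bank-level parallelism
    return ready[0] if ready else None        # baseline: oldest ready request

queue = [("T2", "rd A"), ("T1", "rd B"), ("T2", "rd C")]
print(schedule_next(queue, last_thread="T1", in_service={"T1"}))  # ('T1', 'rd B')
```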
20090044190 | URGENCY AND TIME WINDOW MANIPULATION TO ACCOMMODATE UNPREDICTABLE MEMORY OPERATIONS - The variable latency associated with flash memory due to background data integrity operations is managed in order to allow the flash memory to be used in isochronous systems. A system processor is notified regularly of the nature and urgency of requests for time to ensure data integrity. Minimal interruptions of system processing are achieved and operation is ensured in the event of a power interruption. | 02-12-2009 |
20090044191 | METHOD AND TERMINAL DEVICE FOR EXECUTING SCHEDULED TASKS AND MANAGEMENT TASKS - An OMA DM-based method for executing a scheduled task includes the steps of: storing terminal resource capabilities required for executing each scheduled task in a terminal device; and executing the scheduled task after the terminal device determines that the current resource capabilities are sufficient for executing the scheduled task while it is ready to execute the scheduled task. An OMA DM-based terminal device for practicing this method includes a primary storing unit for storing the terminal resource capabilities, a judging unit for determining whether current resource capabilities of the terminal device meet the terminal resource capabilities that are required for executing the scheduled task, and a primary executing unit for executing the scheduled task when the judging unit determines that the current resource capabilities are sufficient. By determining whether current resource capabilities are sufficient before executing, the success rate of scheduled tasks and management tasks in a terminal device is improved. | 02-12-2009 |
20090044192 | OBJECT ORIENTED BASED, BUSINESS CLASS METHODOLOGY FOR GENERATING QUASI-STATIC WEB PAGES AT PERIODIC INTERVALS - A method for providing a requestor with access to dynamic data via quasi-static data requests, comprising the steps of: defining a web page, the web page including at least one dynamic element; creating executable digital code, to be run on a computer and invoked at defined intervals by a scheduler component, effective to create and store a quasi-static copy of the defined web page; creating the scheduler component capable of invoking the executable code at predefined intervals; loading the executable code and the scheduler component onto a platform in connectivity with a web server and with one another; invoking execution of the scheduler component; and retrieving and returning the quasi-static copy of the defined web page in response to requests for the defined web page. | 02-12-2009 |
20090044193 | ENHANCED STAGED EVENT-DRIVEN ARCHITECTURE - The present invention is an enhanced staged event-driven architecture (SEDA) stage. The enhanced SEDA stage can include an event queue configured to enqueue a plurality of events, an event handler programmed to process events in the event queue, and a thread pool coupled to the event handler. A resource manager further can be coupled to the thread pool and the event queue. Moreover, the resource manager can be programmed to allocate additional threads to the thread pool where a number of events enqueued in the event queue exceeds a threshold value and where all threads in the thread pool are busy. | 02-12-2009 |
20090049445 | METHOD, SYSTEM AND APPARATUS FOR TASK PROCESSING IN DEVICE MANAGEMENT - The disclosure provides a method, system and apparatus for task processing in device management so that a scheduled task may be triggered and executed normally, according to a predetermined triggering condition when the execution of the task is affected by a state of a terminal device or an operation of the terminal device. The method according to the invention includes steps of determining a scheduled task when the execution of the scheduled task is affected by a state of a terminal device or an operation of the terminal device; and prompting a user to select a processing manner for the scheduled task, and processing the affected scheduled task according to the user's selection, or processing the scheduled task in a predetermined processing manner. | 02-19-2009 |
20090055825 | WORKFLOW ENGINE SYSTEM AND METHOD - Provided is a workflow engine for managing data. More specifically, the workflow engine includes a receiving subsystem that is operable to receive data. An environment evaluating subsystem is also provided and is operable to evaluate an environment and determine at least one environmental parameter. A data evaluating system is in communication with the receiving subsystem and the environment evaluating subsystem. The data evaluating system is operable to determine at least one data parameter from the received data and to receive the environmental parameter. The data evaluating system will evaluate the data parameter and environment parameter and select at least one appropriate workflow rule for use in establishing a workflow job operation for execution by a job operation subsystem. An associated method of use is also provided. | 02-26-2009 |
20090055826 | Multicore Processor Having Storage for Core-Specific Operational Data - An integrated circuit includes a plurality of processor cores and a readable non-volatile memory that stores information expressive of at least one operating characteristic for each of the plurality of processor cores. Also disclosed is a method to operate a data processing system, where the method includes providing a multicore processor that contains a plurality of processor cores and a readable non-volatile memory that stores information, determined during a testing operation, that is indicative of at least a maximum operating frequency for each of the plurality of processor cores. The method further includes operating a scheduler coupled to an operating system and to the multicore processor, where the scheduler is operated to be responsive at least in part to information read from the memory to schedule the execution of threads to individual ones of the processor cores for a more optimal usage of energy. | 02-26-2009 |
20090055827 | POLLING ADAPTER PROVIDING HIGH PERFORMANCE EVENT DELIVERY - An apparatus and method for improving event delivery efficiency in a polling adapter system is configured to poll an enterprise information system (EIS) to obtain a list of events occurring in the EIS. Each event may be associated with an object key. These events may then be allocated to multiple delivery lists wherein events associated with the same object key are allocated to the same delivery list. Multiple delivery threads may then be generated, with each delivery thread being associated with a delivery list. Each delivery thread is configured to retrieve, from the EIS, events listed in the delivery list associated with the thread and deliver the events to a client. | 02-26-2009 |
20090064151 | METHOD FOR INTEGRATING JOB EXECUTION SCHEDULING, DATA TRANSFER AND DATA REPLICATION IN DISTRIBUTED GRIDS - Scheduling of job execution, data transfers, and data replications in a distributed grid topology are integrated. Requests for job execution for a batch of jobs are received, along with a set of job requirements. The set of job requirements includes data objects needed for executing the jobs, computing resources needed for executing the jobs, and quality of service expectations. Execution sites are identified within the grid for executing the jobs based on the job requirements. Data transfers needed for providing the data objects for executing the batch of jobs are determined, and data for replication is identified. A set of end-points is identified in the distributed grid topology for use in data replication and data transfers. A schedule is generated for data transfer, data replication and job execution in the grid in accordance with global objectives. | 03-05-2009 |
20090064152 | SYSTEMS, METHODS AND COMPUTER PRODUCTS FOR CROSS-THREAD SCHEDULING - Systems, methods and computer products for cross-thread scheduling. Exemplary embodiments include a cross thread scheduling method for compiling code, the method including scheduling a scheduling unit with a scheduler sub-operation in response to the scheduling unit being in a non-multithreaded part of the code and scheduling the scheduling unit with a cross-thread scheduler sub-operation in response to the scheduling unit being in a multithreaded part of the code. | 03-05-2009 |
20090070762 | SYSTEM AND METHOD FOR EVENT-DRIVEN SCHEDULING OF COMPUTING JOBS ON A MULTI-THREADED MACHINE USING DELAY-COSTS - A computer system includes N multi-threaded processors and an operating system. The N multi-threaded processors each have O hardware threads forming a pool of P hardware threads, where N, O, and P are positive integers and P is equal to N times O. The operating system includes a scheduler which receives events for one or more computing jobs. The scheduler receives one of the events and allocates R hardware threads of the pool of P hardware threads to one of the computing jobs by optimizing a sum of priorities of the computing jobs, where each priority is based in part on the number of logical processors requested by a corresponding computing job and R is an integer that is greater than or equal to 0. | 03-12-2009 |
20090070763 | METHOD AND SYSTEM FOR CHARACTERIZING ELEMENTS OF A PRINT PRODUCTION QUEUING MODEL - Methods and systems for characterizing performance of resources in a production environment are disclosed. Timing information for a plurality of print jobs may be received at a resource characterization system from one or more resources. A service time distribution may be determined based on the timing information. Resource performance for the one or more resources may be characterized based on the service time distribution using a queuing model. One or more performance characteristics may be provided for the one or more resources based on the characterized resource performance. | 03-12-2009 |
20090070764 | Handling queues associated with web services of business processes - A method and apparatus for handling queues associated with web services of a business process. The method may include automatically generating deployment descriptors for executing a business process as a web application, and determining a default queue for the business process using a business process management (BPM) configuration file. During execution of the business process, users are allowed to monitor the message load associated with the default queue. If a user decides to re-distribute the message load, the user is allowed to specify a new set of queues for the business process to improve performance of the business process at runtime. | 03-12-2009 |
20090077557 | METHOD AND COMPUTER FOR SUPPORTING CONSTRUCTION OF BACKUP CONFIGURATION - For a storage system which holds backup data of a first data storage extent in one or more second data storage extents in use of a first backup method, a backup status in a first backup method in a prescribed period is acquired and a first backup performance in a first backup configuration is computed based on this backup status. Meanwhile, a second backup performance in a second backup configuration is estimated based on a prescribed assumption in a prescribed period. Information is outputted based on the computed first backup performance and the estimated second backup performance. | 03-19-2009 |
20090077558 | Methods and apparatuses for heat management in information systems - In some embodiments, an information system is divided into sections, with one or more first computers located in a first section and one or more second computers located in a second section, including a first temperature sensor sensing a temperature condition for the first section and a second temperature sensor sensing a temperature condition for the second section. In some embodiments, when heat distribution determined from the first and second temperature conditions is not in conformance with a predetermined rule for heat distribution, the information system is configured to relocate a portion of the processing load of the first computers to the second computers, or vice versa, for bringing the heat distribution into conformance with the rule. In some embodiments, the effect of other equipment, such as storage system or switches in the sections is also considered, and loads on this equipment may also be relocated between sections. | 03-19-2009 |
20090077559 | System Providing Resources Based on Licensing Contract with User by Correcting the Error Between Estimated Execution Time from the History of Job Execution - A network system includes an application service provider (ASP) which is connected to the Internet and executes an application, and a CPU resource provider which is connected to the Internet and provides a processing service to a particular computational part (e.g., computation intensive part) of the application, wherein: when requesting a job from the CPU resource provider, the application service provider (ASP) sends information about estimated computation time of the job to the CPU resource provider via the Internet; and the CPU resource provider assigns the job by correcting this estimated computation time based on the estimated computation time sent from the application service provider (ASP). | 03-19-2009 |
20090077560 | Strongly-Ordered Processor with Early Store Retirement - In one embodiment, a processor comprises a retire unit and a load/store unit coupled thereto. The retire unit is configured to retire a first store memory operation responsive to the first store memory operation having been processed at least to a pipeline stage at which exceptions are reported for the first store memory operation. The load/store unit comprises a queue having a first entry assigned to the first store memory operation. The load/store unit is configured to retain the first store memory operation in the first entry subsequent to retirement of the first store memory operation if the first store memory operation is not complete. The queue may have multiple entries, and more than one store may be retained in the queue after being retired by the retire unit. | 03-19-2009 |
20090083740 | ASYNCHRONOUS EXECUTION OF SOFTWARE TASKS - A service broker for asynchronous execution of software. The broker functions include dynamically loading working modules from a specified directory, publishing the working module commands, receiving service requests from clients, and, upon successful authentication and authorization, dispatching the requests to module command queues for scheduling and execution. The modules are invoked in separate domains so that management functions can control the modules independently. A management application facilitates interactive user scheduling of the actions being invoked. This can also be accomplished automatically according to business rules, for example. The management application also facilitates checking the progress of an action in flight, displaying errors that occur during command execution, displaying the results of an action, and scheduling requests. | 03-26-2009 |
20090083741 | Techniques for Accessing a Resource in a Processor System - A technique of accessing a resource includes receiving, at a master scheduler, resource access requests. The resource access requests are translated into respective slave state machine work orders that each include one or more respective commands. The respective commands are assigned, for execution, to command streams associated with respective slave state machines. The respective commands are then executed responsive to the respective slave state machines. | 03-26-2009 |
20090083742 | INTERRUPTABILITY MANAGEMENT VIA SCHEDULING APPLICATION - A system and methodology that facilitates management of user accessibility via a scheduling application is provided. A user can link or map interruptability levels to schedule entries, such as calendar entries or tasks, thereby facilitating automatic communication management. Essentially, interruptability rules (and corresponding categories) can be associated with calendar entries and tasks, thereby automating the application of interruptability rules to manage communications received during calendar entries, tasks, meetings, appointments, etc. | 03-26-2009 |
20090083743 | System method and apparatus for binding device threads to device functions - A system, apparatus, and method for supporting one or more functions in an IO virtualization environment. One or more threads are dynamically associated with, and execute on behalf of, one or more functions in a device. | 03-26-2009 |
20090083744 | INFORMATION WRITING/READING SYSTEM, METHOD AND PROGRAM - An information writing/reading system includes a thread scheduler unit configured to control a sequence of execution for a plurality of threads, a thread execution unit, a device driver unit, a disk mechanism, an end time estimation unit configured to estimate an end time of execution of an issued write command, and a command management unit, wherein the thread scheduler unit is configured to temporarily suspend execution of at least one read thread of the plurality of threads if the command management unit determines that the estimated end time of execution of the issued write command is greater than an end time designated by the issued write command. | 03-26-2009 |
20090089784 | VARIABLE POLLING INTERVAL BASED ON HISTORICAL TIMING RESULTS - A method, system, and computer program product for computing an optimal time interval between polling requests to determine whether an asynchronous operation is completed, in a data processing system. A Polling Request Interval (PRI) utility determines the optimal time interval between successive polling requests, based on historical job completion results. The PRI utility first determines an average job time for previously completed operations. The PRI utility then retrieves a pair of preset configuration parameters including (1) a first parameter which provides the minimum time interval between successive polling requests; and (2) a second parameter which provides the fraction of the average task time added to the first parameter to obtain the time interval between (successive) polling requests. The PRI utility calculates the optimal time between polling requests based on the average job time and the retrieved configuration parameters. | 04-02-2009 |
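Reading the two configuration parameters of 20090089784 literally, the optimal interval is the minimum interval plus a fraction of the average job time; a sketch under that assumption, with invented names:

```python
def polling_interval(job_times, min_interval, fraction):
    # min_interval: first parameter, the floor between polls (seconds).
    # fraction: second parameter, share of the average job time to add.
    average = sum(job_times) / len(job_times)
    return min_interval + fraction * average

# Jobs averaged 40 s historically: poll every 2 + 0.1 * 40 = 6 seconds.
print(polling_interval([30.0, 50.0, 40.0], min_interval=2.0, fraction=0.1))
```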
20090089785 | SYSTEM AND METHOD FOR JOB SCHEDULING IN APPLICATION SERVERS - A method and a system for job scheduling in application servers. A common metadata of a job is deployed, the job being a deployable software component. An additional metadata of the job is further deployed. A scheduler task based on the additional metadata of the job is created, wherein the task is associated with a starting condition. The scheduler task is started at an occurrence of the starting condition and, responsive to this, an execution of an instance of the job is invoked asynchronously. | 04-02-2009 |
20090094607 | PROCESSING REQUEST CONTROL DEVICE, RECORDING MEDIUM STORING PROGRAM, PROCESSING REQUEST CONTROL METHOD AND DATA SIGNAL - A processing request control device, which includes: a reception section that receives a processing request and information on a property of the processing request; a calculation section that calculates a processing time zone based on the processing request; a management section that manages the processing request and the processing time zone associated with each other; a processing implementation control section that controls to implement, based on the processing request, the processing from a processing start time; a specification section that, when a new processing request is received, specifies a processing request being managed whose processing time zone overlaps with a processing time zone of the newly received processing request; and a change section that changes at least one of the processing time zone of the specified processing request and that of the new processing request within a range based on the properties of the processing request. | 04-09-2009 |
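The overlap detection at the heart of 20090094607's specification section is a standard interval-intersection test; a minimal sketch with invented names:

```python
def overlaps(a_start, a_end, b_start, b_end):
    # Two time zones overlap when each starts before the other ends.
    return a_start < b_end and b_start < a_end

def conflicting(managed, new_start, new_end):
    # managed maps request id -> (start, end); return the ids whose
    # processing time zone overlaps the newly received request's zone.
    return [rid for rid, (s, e) in managed.items()
            if overlaps(s, e, new_start, new_end)]

print(conflicting({"r1": (9, 11), "r2": (13, 14)}, new_start=10, new_end=12))  # ['r1']
```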
20090100429 | Dual Mode Operating System For A Computing Device - A computing device which runs non-pageable real time and pageable non-real time processes is provided with non-pageable real time and pageable non-real time versions of operating system services where the necessity to page in memory would block a real-time thread of execution. In one embodiment, a real time operating system service has all its code and data locked, and only supports clients that similarly have their code and data locked. This ensures that such a service will not block due to a page fault caused by client memory being unavailable. A non-real time operating system service does not have its data locked and supports clients whose memory can be paged out. In a preferred embodiment servers which are required to provide real time behaviour are multithreaded and arrange for requests from real time and non-real time clients to be serviced in different threads. | 04-16-2009 |
20090100430 | METHOD AND SYSTEM FOR A TASK AUTOMATION TOOL - Disclosed is a method and system for receiving a task list containing a task, determining if the task must be executed based on a context of a business scenario and executing the task. After executing the task, a result of execution of the task is analyzed based on the context of the business scenario and an operation to be performed is determined based on the result of the execution. | 04-16-2009 |
20090106759 | INFORMATION PROCESSING SYSTEM AND RELATED METHOD THEREOF - An information processing system includes a first electronic device, a second electronic device and a processing module. The first electronic device processes a first task. The second electronic device processes a second task. The processing module controls, without utilizing an operating system, the second electronic device to process the second task for a first specific time period during which the first electronic device does not process the first task which was being processed before the first specific time period. | 04-23-2009 |
20090106760 | METHOD AND APPARATUS FOR SELECTING A WORKFLOW ROUTE - A method for selecting a workflow route includes determining a next processing phase of the work sheet to be processed and the work sheet properties in the phase, querying a pre-configured mapping table between work sheet properties and processing owners according to the work sheet properties in the next processing phase, and obtaining a matched processing owner for the next processing phase of the work sheet to be processed. An apparatus for selecting a workflow route includes a work sheet predefining module, a processing owner matching module, an inputting module, and a matching module. The technical solution provided in an embodiment of the disclosure may solve the problem of too heavy workload and proneness to errors caused by manually selecting a processing owner for a phase of a work sheet, and may also solve the problem of too many processes caused by binding processing owners with the processes. | 04-23-2009 |
20090113432 | METHOD AND SYSTEM FOR SIMULATING A MULTI-QUEUE SCHEDULER USING A SINGLE QUEUE ON A PROCESSOR - A method and system for scheduling tasks on a processor, the tasks being scheduled by an operating system to run on the processor in a predetermined order, the method comprising: identifying and creating task groups of all related tasks; assigning the tasks in the task groups to a single common run-queue; selecting a task at the start of the run-queue; determining if the task at the start of the run-queue is eligible to be run based on a pre-defined allocated timeslice and on the presence of older starving tasks in the run-queue; executing the task in the pre-defined timeslice; and associating a starving status with all unexecuted tasks, repeating until all tasks in the run-queue complete execution and the run-queue becomes empty. | 04-30-2009 |
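A rough sketch of the eligibility rule in 20090113432: the head of the common run-queue runs unless an older starving task is present, and every task passed over acquires starving status. The task representation and requeue policy are assumptions:

```python
from collections import deque

def schedule(run_queue: deque, timeslice: int):
    # Each task is a dict: {"name", "remaining", "starving"}.
    order = []
    while run_queue:
        # An older starving task preempts the head of the queue.
        task = next((t for t in run_queue if t["starving"]), run_queue[0])
        run_queue.remove(task)
        order.append(task["name"])
        task["remaining"] -= timeslice            # run for one timeslice
        for other in run_queue:
            other["starving"] = True              # unexecuted tasks now starve
        if task["remaining"] > 0:
            task["starving"] = False
            run_queue.append(task)                # requeue unfinished work
    return order

q = deque([{"name": n, "remaining": 2, "starving": False} for n in "AB"])
print(schedule(q, timeslice=1))  # ['A', 'B', 'A', 'B']
```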
20090113433 | THREAD CLASSIFICATION SUSPENSION - The exemplary embodiments provide a computer-implemented method, apparatus, and computer-usable program code for managing memory. A notice of a shortage of real memory is received. For each active thread, the thread classification of the active thread is compared to a global hierarchy of thread classifications to determine a thread to affect. The global hierarchy of thread classifications defines the relative importance of each thread classification. An action to take for the determined thread is determined. The determined action is performed for the determined thread. | 04-30-2009 |
20090113434 | APPARATUS, SYSTEM AND METHOD FOR RAPID RESOURCE SCHEDULING IN A COMPUTE FARM - Disclosed herein is a method for scheduling computing jobs for a compute farm. The method includes: receiving a plurality of computing jobs at a scheduler; assigning a signature to each computing job based on at least one computing resource requirement of the computing job; storing each computing job in a signature classification corresponding to the signature of the computing job; and scheduling at least one of the plurality of computing jobs for processing in the compute farm as a function of the signature classification. | 04-30-2009 |
20090113435 | INTEGRATED BACKUP WITH CALENDAR - A computer implemented method, apparatus, and computer program product for automatically scheduling execution of a process using information in a calendar. Entries in a set of electronic calendars associated with a set of users are analyzed to generate expected computer usage patterns for the set of users. A low usage time interval for a computer is identified using the expected computer usage patterns. The low usage time interval for the computer is a time interval when expected usage of the computer by the set of users does not exceed a threshold amount of usage. The process is automatically executed during the low usage time interval. | 04-30-2009 |
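One plausible reading of the low-usage detection in 20090113435: reduce the calendar entries to an expected hourly usage profile, then keep the hours at or below the threshold. The profile shape and names are assumptions:

```python
def low_usage_hours(expected_usage, threshold):
    # expected_usage[h] = expected number of active users during hour h (0-23),
    # derived from the users' calendar entries; keep hours under the threshold.
    return [h for h, usage in enumerate(expected_usage) if usage <= threshold]

profile = [0, 0, 0, 0, 0, 1, 2, 5, 8, 9, 9, 8, 7, 8, 9, 9, 8, 6, 3, 2, 1, 0, 0, 0]
print(low_usage_hours(profile, threshold=1))  # backup candidates: night hours
```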
20090113436 | Techniques for switching threads within routines - Various technologies and techniques are disclosed for switching threads within routines. A controller routine receives a request from an originating routine to execute a coroutine, and executes the coroutine on an initial thread. The controller routine receives a response back from the coroutine when the coroutine exits based upon a return statement. Upon return, the coroutine indicates a subsequent thread that the coroutine should be executed on when the coroutine is executed a subsequent time. The controller routine executes the coroutine the subsequent time on the subsequent thread. The coroutine picks up execution at a line of code following the return statement. Multiple return statements can be included in the coroutine, and the threads can be switched multiple times using this same approach. Graphical user interface logic and worker thread logic can be co-mingled into a single routine. | 04-30-2009 |
20090119668 | DYNAMIC FEASIBILITY ANALYSIS FOR EVENT BASED PROGRAMMING - Embodiments of the present invention provide a method, system and computer program product for dynamic feasibility analysis of event-driven program code. In an embodiment of the invention, a method for a dynamic feasibility analysis of event-driven program code can be provided. The method can include loading multiple different tasks associated with different registered events in event-driven program code of an event-driven application, reducing overlapping ones of the registered events for different ones of the tasks to a single task of the overlapping events to produce a reduced set of tasks and corresponding events, ordering the corresponding events of the reduced set of tasks and grouping the corresponding events by time slice for the event-driven application, and reporting whether or not adding a new event to a particular time slice for the event-driven application results in a depth of events in the particular time slice exceeding a capacity of the particular time slice rendering the event-driven application infeasible. | 05-07-2009 |
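The feasibility verdict in 20090119668 ultimately compares a time slice's event depth against its capacity; a minimal sketch of just that check, with invented names:

```python
def can_add_event(slices, slice_id, capacity):
    # slices maps a time slice id to the list of events already grouped
    # into it; adding one more must not exceed the slice's capacity.
    return len(slices.get(slice_id, [])) + 1 <= capacity

slices = {0: ["tick", "poll"], 1: ["flush"]}
print(can_add_event(slices, slice_id=0, capacity=2))  # False: slice 0 is full
print(can_add_event(slices, slice_id=1, capacity=2))  # True
```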
20090119669 | User-specified configuration of scheduling services - Methods and systems for facilitating user-specified configuration of scheduling services in a manufacturing facility. In one embodiment, a workflow user interface is presented to allow a user to specify a workflow for providing a schedule for a manufacturing facility. The workflow identifies a sequence of operations to be performed for providing the schedule. In addition, the user can specify properties for each operation in the workflow user interface. The workflow with the properties is then stored in a repository for subsequent execution in response to a workflow trigger. | 05-07-2009 |
20090119670 | METHOD OF CONSTRUCTING AND EXECUTING PROCESS - Disclosed is a method of constructing and executing a process. A conventional process is minutely divided into minimum unit subprocesses, and the minutely divided subprocesses are classified into decision subprocesses and routine subprocesses according to whether they require decision-making. Any subprocess which is executable using the setup condition of a specific decision subprocess is classified as a routine subprocess that follows the specific decision subprocess. One or a series of decision subprocesses are combined with one or a series of routine subprocesses which are executable on the condition of the completion of the decision subprocesses to form one unit process, and a job-support computer program is created to allow the plurality of subprocesses included in the one unit process to be successively executed. A plurality of subprocesses which are executable in accordance with common input data are detected from the minutely divided minimum unit subprocesses, and a job flow is constructed to allow the respective jobs in the plurality of subprocesses to be simultaneously initiated and executed in parallel. The present invention can drastically reduce the lead-time of a process while facilitating execution of the entire process with high efficiency. | 05-07-2009 |
20090125908 | Hardware Port Scheduler - According to one embodiment, an apparatus is disclosed. The apparatus includes a port having a plurality of lanes, and a plurality of protocol engines. Each protocol engine is associated with one of the plurality of lanes, and processes tasks to be forwarded to a plurality of remote nodes. The apparatus also includes a first port task scheduler (PTS) to manage the tasks to be forwarded to one or more of the plurality of protocol engines. The first PTS includes a register to indicate which of the plurality of protocol engines the first PTS is to support. | 05-14-2009 |
20090133021 | METHODS AND SYSTEMS FOR EFFICIENT USE AND MAPPING OF DISTRIBUTED SHARED RESOURCES - Methods and systems for coordinating sharing of resources among a plurality of tasks operating in parallel in a document presentation environment while host communications and task processing may be performed asynchronously with respect to one another. A mapped resource manager manages activation (addition) and deactivation (deletion) of resources shared by a plurality of tasks operating in parallel to assure that each task may continue processing with a consistent set of files as resources despite changes made by other tasks or by operator intervention. | 05-21-2009 |
20090133022 | Multiprocessing apparatus, system and method - An apparatus to isolate a main memory in a multiprocessor computer is provided. The apparatus includes a master processor and a management device communicating with the master processor. One or more slave processors communicate with the master processor and the management device. A volatile memory also communicates with the management device, and the main memory communicates with the volatile memory. | 05-21-2009 |
20090133023 | High Performance Queue Implementations in Multiprocessor Systems - Systems and methods provide a single reader single writer (SRSW) queue structure having entries that can be concurrently accessed in an atomic manner with a single memory access. The SRSW queues may be combined to create more complicated queues, including multiple reader single writer (MRSW), single reader multiple writer (SRMW), and multiple reader multiple writer (MRMW) queues. | 05-21-2009 |
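A structural sketch of the SRSW primitive in 20090133023: with one writer advancing the tail and one reader advancing the head, each index is stored to by exactly one side, which is what allows the hardware setting to access entries atomically with a single memory access. Python offers no such atomicity guarantees, so this only illustrates the layout; all names are invented:

```python
class SRSWQueue:
    def __init__(self, capacity: int):
        self.buf = [None] * (capacity + 1)  # one empty slot tells full from empty
        self.head = 0                       # advanced only by the single reader
        self.tail = 0                       # advanced only by the single writer

    def put(self, item) -> bool:
        nxt = (self.tail + 1) % len(self.buf)
        if nxt == self.head:                # full: reader has not caught up
            return False
        self.buf[self.tail] = item          # write the slot before publishing it
        self.tail = nxt                     # single store publishes the entry
        return True

    def get(self):
        if self.head == self.tail:          # empty
            return None
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)  # single store frees the slot
        return item

q = SRSWQueue(2)
q.put("a"); q.put("b")
print(q.put("c"), q.get(), q.get())  # False a b
```

The MRSW, SRMW, and MRMW variants mentioned in the abstract can then be composed by fanning several such queues in or out around this primitive.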
20090133024 | Scheduling a Workload Based on Workload-Related Variables and Triggering Values - A mechanism is provided for scheduling a workload on a computer. The mechanism receives, in the computer, one or more workload-related variables. The mechanism further receives, in the computer, one or more trigger values for at least one of the one or more workload-related variables. Moreover, the mechanism determines, from the workload-related variables and their triggering values, one or more conditions under which one or more tasks are to be performed on the computer. In addition, the mechanism acquires a status value of at least one of the one or more workload-related variables at regular intervals and performs a task when a status value of a workload-related variable attains the triggering value for the task. | 05-21-2009 |
20090133025 | METHODS AND APPARATUS FOR BANDWIDTH EFFICIENT TRANSMISSION OF USAGE INFORMATION FROM A POOL OF TERMINALS IN A DATA NETWORK - Methods and apparatus for bandwidth efficient transmission of usage information from a pool of terminals in a data network. A device includes transceiver logic to receive usage tracking and reporting parameters, wherein the usage tracking parameters identify events to be tracked and the reporting parameters identify reporting criteria for each event, scheduling logic to track the events based on the usage tracking parameters to produce a tracking log, reporting logic to process the tracking log based on the reporting parameters to produce a reporting log, and the transceiver logic to transmit the reporting log. A server includes processing logic to generate usage tracking parameters that identify events to be tracked and reporting parameters that identify reporting criteria for each event and a transceiver to transmit the usage tracking parameters and the reporting parameters to one or more terminals. | 05-21-2009 |
20090138878 | ENERGY-AWARE PRINT JOB MANAGEMENT - A printing system and method for processing print jobs in a network of printers are disclosed. The printers each have high and low operational states. A job ticket is associated with each print job. The job ticket designates one of the network printers as a target printer for printing the job and includes print job parameters related to redirection and delay for the print job. Where the target printer for the print job is in the low operational state, the print job related redirection and delay parameters for the job are identified. Based on the identified parameters, the print job may be scheduled for at least one of redirection and delay, where the parameters for redirection/delay permit, whereby the likelihood that the print job is printed sequentially with another print job on one of the network printers, without that one printer entering an intervening low operational state, is increased. | 05-28-2009 |
20090138879 | Clock Control - The present invention provides a processor comprising: an execution unit arranged to execute a plurality of program threads, clock generating means for generating first and second clock signals, and storage means for storing at least one thread-specific clock-control bit. The execution unit is configured to execute a first one of the threads in dependence on the first clock signal and to execute a second one of the threads in dependence on the second clock signal. The clock generating means is operable to generate the second clock signal with the second frequency selectively differing from the first frequency in dependence on the at least one clock-control bit. A corresponding method and computer program product are also provided. | 05-28-2009 |
20090144738 | Performance Evaluation of Algorithmic Tasks and Dynamic Parameterization on Multi-Core Processing Systems - Apparatus for evaluating the performance of DMA-based algorithmic tasks on a target multi-core processing system includes a memory and at least one processor coupled to the memory. The processor is operative: to input a template for a specified task, the template including DMA-related parameters specifying DMA operations and computational operations to be performed; to evaluate performance for the specified task by running a benchmark on the target multi-core processing system, the benchmark being operative to generate data access patterns using DMA operations and invoking prescribed computation routines as specified by the input template; and to provide results of the benchmark indicative of a measure of performance of the specified task corresponding to the target multi-core processing system. | 06-04-2009 |
20090150887 | Process Aware Change Management - A change order to be executed at a scheduled time as part of a change plan is created, the change order defining a change to an Information Technology (IT) environment. The change order is validated against validation rules to simulate execution of the change order at the scheduled time, wherein other change orders scheduled to execute before the execution of the change order are included in the simulation. Breaks in change orders scheduled to execute after the change order are detected. Side effects caused by execution of the change order are determined. The results of validating the change order are output. | 06-11-2009 |
20090150888 | EMBEDDED OPERATING SYSTEM OF SMART CARD AND THE METHOD FOR PROCESSING THE TASK - An embedded operating system of a smart card and a method for processing tasks are disclosed. The method includes: A, initializing the system; B, creating at least one task according to the function set by the system; C, scheduling the pre-execution task according to the priority of the system; D, executing the task and returning the executing result through a data transmission channel. The invention enhances support for the data channels of the hardware platform: it supports not only the single data channel, ISO7816, of conventional smart cards, but also two or more data channels coexisting, so that the smart card can exchange information with device terminals more flexibly and at higher speed. The invention also enhances support for smart card applications: it supports not only the single application of conventional smart cards, but also several applications running simultaneously on one card, so that the smart card is utilized with higher efficiency. | 06-11-2009 |
20090150889 | INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND DEVICE AND PROGRAM USED FOR THE INFORMATION PROCESSING SYSTEM AND THE INFORMATION PROCESSING METHOD - An information processing terminal is provided with a data acquiring means for reading data from an external recording medium; a program storing means for storing a plurality of application programs; a program executing means for executing the stored application programs; and a program selecting means for selecting the application program to be executed by the program executing means. The program selecting means selects the application program to be executed from the programs stored in the program storing means, corresponding to the data acquired through the data acquiring means, and processes the data acquired through the data acquiring means by the application program selected by the program selecting means. | 06-11-2009 |
20090150890 | STRAND-BASED COMPUTING HARDWARE AND DYNAMICALLY OPTIMIZING STRANDWARE FOR A HIGH PERFORMANCE MICROPROCESSOR SYSTEM - Strand-based computing hardware and dynamically optimizing strandware are included in a high performance microprocessor system. The system operates in real time automatically and unobservably to parallelize single-threaded software into a plurality of parallel strands for execution by cores implemented in a multi-core and/or multi-threaded microprocessor of the system. The microprocessor executes a native instruction set tailored for speculative multithreading. The strandware directs hardware of the microprocessor to collect dynamic profiling information while executing the single-threaded software. The strandware analyzes the profiling information for the parallelization, and uses binary translation and dynamic optimization to produce native instructions to store in a translation cache later accessed to execute the produced native instructions instead of some of the single-threaded software. The system is capable of parallelizing a plurality of single-threaded software applications (e.g. application software, device drivers, operating system routines or kernels, and hypervisors). | 06-11-2009 |
20090158282 | Hardware acceleration for large volumes of channels - A method, apparatus, and system for hardware acceleration for large volumes of channels are described. In an embodiment, the invention is a method. The method includes monitoring an inbound queue for hardware jobs. The method further includes detecting an interrupt from a hardware component. The method also includes transferring a job from the inbound queue to the hardware component. The method may further include transferring a completed job from the hardware component to an outbound queue. The method may also include providing an indication of completion of a job in an outbound queue. | 06-18-2009 |
20090158283 | DECOUPLING STATIC PROGRAM DATA AND EXECUTION DATA - Persisting execution state of a continuation based runtime program. The continuation based runtime program includes static program data defining activities executed by the program. One or more of the activities are parent activities including sequences of child activities. The continuation based runtime program is loaded. A child activity to be executed is identified based on scheduling defined in a parent of the child activity in the continuation based runtime program. The child activity is sent to a continuation based runtime separate from one or more other activities in the continuation based runtime program. The child activity is executed at the continuation based runtime, creating an activity instance. Continuation state information is stored separate from the static program data by storing information about the activity instance separate from one or more other activities defined in the continuation based runtime program. | 06-18-2009 |
20090158284 | SYSTEM AND METHOD OF PROCESSING SENDER REQUESTS FOR REMOTE REPLICATION - A system and a method of processing sender requests for remote replication are applied in a local system having a plurality of network block devices (NBDs). A fixed number of sender threads are created in the local system to form a sender thread pool. All NBDs receiving write requests for their corresponding remote mirror volumes are serially connected to form a circular linked list. A pointer is set to record the most recently processed NBD in the circular linked list; sender threads from the pool are allocated to search, in sequence around the circular linked list, for the NBD to be processed next as indicated by the pointer, and processing of the NBD pointed to by the pointer is locked by the sender thread handling it, which then processes the sender request of that NBD. Each time a sender request is finished, the pointer is moved to the next NBD in sequence and the sender request of the corresponding NBD is performed. | 06-18-2009 |
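A rough sketch of the dispatch loop in 20090158284 under stated assumptions: `itertools.cycle` stands in for the circular linked list, a mutex-guarded cursor for the shared pointer, and each NBD object is assumed to carry its own `lock`; all class and attribute names are invented:

```python
import itertools
import threading

class SenderPool:
    def __init__(self, devices, workers, service_one):
        self._ring = itertools.cycle(devices)   # circular list of NBDs
        self._cursor = threading.Lock()         # guards the shared pointer
        self._stop = threading.Event()
        self._service_one = service_one
        self._threads = [threading.Thread(target=self._loop)
                         for _ in range(workers)]

    def _loop(self):
        while not self._stop.is_set():
            with self._cursor:
                device = next(self._ring)       # advance pointer to next NBD
            with device.lock:                   # lock this NBD's processing
                self._service_one(device)       # perform its sender request

    def start(self):
        for t in self._threads:
            t.start()

    def stop(self):
        self._stop.set()
        for t in self._threads:
            t.join()
```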
20090158285 | APPARATUS AND METHOD FOR CONTROLLING RESOURCE SHARING SCHEDULE IN MULTI-DECODING SYSTEM - An apparatus for controlling a resource sharing schedule in a multi-decoding system including a multi-decoder formed of a plurality of resources, the apparatus including: a storage unit storing status information of the resources and information required in controlling the resource sharing schedule; and a controller which, when a source resource requests assignment of a target resource, assigns the target resource, outputs information of the target resource to the source resource, and updates the statuses of the resources, wherein the apparatus controls the resource sharing schedule while bidirectionally connected to the resources so that the resources are shared between the multi-decoders. Accordingly, it is possible to reduce the overall decoding time and to control the resource usage schedule. | 06-18-2009 |
20090158286 | FACILITY FOR SCHEDULING THE EXECUTION OF JOBS BASED ON LOGIC PREDICATES - A solution for scheduling execution of jobs in a data processing system is disclosed. One method for implementing such a solution may start by providing a scheduling structure for scheduling the execution of jobs. Such a scheduling structure may include a workflow plan defining a flow of execution for planned jobs and/or a workflow model defining static policies for execution of modeled jobs. A set of rules for updating the scheduling structure is provided. The method may continue by updating the scheduling structure according to the rules, such as by adding or removing jobs for rules evaluated to be true. The execution of the jobs may then be scheduled according to the updated scheduling structure. A corresponding system and computer program product are also disclosed. | 06-18-2009 |
20090158287 | DYNAMIC CRITICAL PATH UPDATE FACILITY - A method is presented for dynamically selecting and updating a critical execution path. The method may include receiving a network of jobs for execution. One or more critical jobs may be included in the network of jobs. A job causing a delay in the execution of the network of jobs may be detected, where the job precedes the critical job. A critical path in the network of jobs may then be determined as a function of the job causing a delay. Determination of the critical path may be further based on a slack time associated with jobs in the network that have planned execution times preceding a planned execution time for the critical job. | 06-18-2009 |
20090165000 | Multiple Participant, Time-Shifted Dialogue Management - A virtual environment server. The server manages time-shifted presentation data between multiple participants in a shared virtual environment system. The server includes a routing module configurable for coupling to multiple participants, a real-time data management module coupled to the routing module, a time-shifted data management module coupled to the routing module, and a data store module coupled to the real-time data management module and to the time-shifted data management module. Participant output presentation data is received from the participants, stored as real-time presentation data, and transferred to appropriate participants. In response to requests from a requesting participant to obtain time-shifted presentation data from a time-shifted participant and any influence participants, time-shifted presentation data is retrieved from the data store module and transferred to the requesting participant. Influence participants are participants whose input presentation data are influenced by the time-shifted participant and whose output presentation data influence the presentation environment of the requesting participant. | 06-25-2009 |
20090165001 | Timer Patterns For Process Models - The subject matter disclosed herein provides methods and apparatus, including computer program products, for providing timers for tasks of process models. In one aspect, an input representative of a temporal constraint for a task of a graph-process model may be received. The temporal constraint defines at least one of a delay or a deadline. The task may be associated with the temporal constraint created based on the received input. The temporal constraint is defined to have a placement in the graph-process model based on the type of temporal constraint. The task and the temporal constraint may be provided to configure the process model. Related systems, apparatus, methods, and/or articles are described. | 06-25-2009 |
20090165002 | METHOD AND SYSTEM FOR MODULE INITIALIZATION - A method for initializing a module that includes identifying a module for initialization and performing a plurality of processing phases on the module and all modules in a dependency graph of the module. Performing the processing phases includes, for each module, executing a processing phase of the plurality of processing phases on the module, determining whether the processing phase has been executed on all modules in a dependency graph of the module, and when the processing phase has been executed for all modules in the dependency graph of the module, executing a subsequent processing phase of the plurality of processing phases on the module, wherein at least one processing phase of the plurality of processing phases includes executing custom initialization code. | 06-25-2009 |
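As an illustration of the phased-initialization idea above (a minimal sketch; the module names, dependency map, and phase list are hypothetical, and the real method also supports custom initialization code within a phase):

```python
# Illustrative sketch of phased module initialization: a phase runs on a
# module only after running on everything in that module's dependency graph.
# Module names, the dependency map, and the phase list are hypothetical.

def run_phases(modules, deps, phases):
    done = {m: set() for m in modules}          # phases completed per module

    def run(phase, mod, seen=()):
        if phase in done[mod] or mod in seen:   # already done, or a cycle
            return
        for d in deps.get(mod, []):             # dependencies first
            run(phase, d, seen + (mod,))
        print(f"{phase} -> {mod}")              # e.g. run custom init code here
        done[mod].add(phase)

    for phase in phases:                        # next phase starts only after
        for m in modules:                       # the previous one covered the graph
            run(phase, m)

run_phases(["app", "net", "core"],
           {"app": ["net"], "net": ["core"]},
           ["verify", "initialize"])
```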
20090165003 | SYSTEM AND METHOD FOR ALLOCATING COMMUNICATIONS TO PROCESSORS AND RESCHEDULING PROCESSES IN A MULTIPROCESSOR SYSTEM - In a multiprocessor system, a system and method assigns communications to processors, processes, or subsets of types of communications to be processed by a specific processor without using a locking mechanism specific to the resources required for assignment. The system and method can reschedule processes to run on the processor on which the assignment is made. | 06-25-2009 |
20090165004 | Resource-aware application scheduling - In one embodiment, a method provides capturing resource monitoring information for a plurality of applications; accessing the resource monitoring information; and scheduling at least one of the plurality of applications on a selected processing core of a plurality of processing cores based, at least in part, on the resource monitoring information. | 06-25-2009 |
20090165005 | TASK EXECUTION APPARATUS, TASK EXECUTION METHOD, AND STORAGE MEDIUM - A task execution apparatus includes an execution unit configured to execute a task on a plurality of devices, an acquisition unit configured to acquire a cause of failure in execution by the execution unit, a confirmation unit configured to confirm that each device of the plurality of devices on which the execution unit failed to execute the task does not support the task based on the cause, and a re-execution unit configured to re-execute the task on each of the plurality of devices on which the execution unit failed to execute the task, wherein the re-execution unit excludes each of the plurality of devices from a re-execution target of the task, in a case where the confirmation unit confirms that each of the plurality of devices does not support the task. | 06-25-2009 |
20090165006 | DETERMINISTIC MULTIPROCESSING - A hardware and/or software facility for controlling the order of operations performed by threads of a multithreaded application on a multiprocessing system is provided. The facility may serialize or selectively-serialize execution of the multithreaded application such that, given the same input to the multithreaded application, the multiprocessing system deterministically interleaves operations, thereby producing the same output each time the multithreaded application is executed. The facility divides the execution of the multithreaded application code into two or more quanta, each specifying a deterministic number of operations, and the facility specifies a deterministic order in which the threads execute the two or more quanta. The facility may operate together with a transactional memory system. When the facility operates together with a transactional memory system, each quantum is encapsulated in a transaction that may be executed concurrently with other transactions and is committed according to the specified deterministic order. | 06-25-2009 |
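To make the quantum-based determinism concrete, here is a toy model, not the patented facility: each thread's operations are cut into fixed-size quanta and the quanta run in a fixed round-robin order, so every run interleaves identically. `QUANTUM`, the worker generators, and the scheduler loop are all invented for illustration.

```python
# Toy model of deterministic interleaving, not the patented facility: each
# thread's work is cut into quanta of QUANTUM operations and the quanta run
# in a fixed round-robin order, so every run produces the same interleaving.

QUANTUM = 2  # deterministic number of operations per turn (assumed value)

def deterministic_run(threads):
    """threads: list of generators, each yield being one 'operation'."""
    pending = list(threads)
    while pending:
        for t in list(pending):                 # fixed, deterministic order
            for _ in range(QUANTUM):
                try:
                    print(next(t))
                except StopIteration:
                    pending.remove(t)
                    break

def worker(name, ops):
    for i in range(ops):
        yield f"{name}: op {i}"

deterministic_run([worker("T1", 3), worker("T2", 3)])  # same output every run
```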
20090172679 | CONTROL APPARATUS, STORAGE SYSTEM, AND MEMORY CONTROLLING METHOD - In order to more efficiently use a cache memory to realize improved response ability in a storage system, there are provided a cache memory which stores the data read from the storage apparatus, an access monitoring unit which monitors a state of access from the upper apparatus to the data stored in the storage apparatus, a schedule information creating unit which creates schedule information that determines contents to be stored in the cache memory based on the access state, and a memory controlling unit which controls record-processing of the data from the storage apparatus to the cache memory and removal-processing of the data from the cache memory based on the schedule information. | 07-02-2009 |
20090172680 | Discovery Directives - A mechanism for configuring and scheduling logical discovery processes in a data processing system is provided. A discovery engine communicates with information providers to collect discovery data. An information provider is a software component whose responsibility is to discover resources and relationships between the resources and write their representations in a persistent store. Discovery directives are used to coordinate the execution of information providers. | 07-02-2009 |
20090178043 | SWITCH-BASED PARALLEL DISTRIBUTED CACHE ARCHITECTURE FOR MEMORY ACCESS ON RECONFIGURABLE COMPUTING PLATFORMS - A computing architecture comprises a plurality of processing elements to perform data processing calculations, a plurality of memory elements to store the data processing results, and a reconfigurable interconnect network to couple the processing elements to the memory elements. The reconfigurable interconnect network includes a switching element, a control element, a plurality of processor interface units, a plurality of memory interface units, and a plurality of application control units. In various embodiments, the processing elements and the interconnect network may be implemented in a field-programmable gate array. | 07-09-2009 |
20090178044 | FAIR STATELESS MODEL CHECKING - Techniques for providing a fair stateless model checker are disclosed. In some aspects, a schedule is generated to allocate resources for threads of a multi-thread program in lieu of having an operating system allocate resources for the threads. The generated schedule is both fair and exhaustive. In an embodiment, a priority graph may be implemented to reschedule a thread when a different thread is determined not to be making progress. A model checker may then implement one of the generated schedules in the program in order to determine if a bug or a livelock occurs during the particular execution of the program. An output by the model checker may facilitate identifying bugs and/or livelocks, or authenticate a program as operating correctly. | 07-09-2009 |
20090187908 | OPTIMIZED METHODOLOGY FOR DISPOSITIONING MISSED SCHEDULED TASKS - The present invention provides for a method and system for the disposition of tasks which failed to run during their originally scheduled time. The determination of whether to run missed or delayed tasks is based on calculated ratios rather than on fixed window sizes. A Lateness Ratio is calculated to determine if the time elapsed between the missed task and the scheduled run time is small enough to still allow a late task to run. A Closeness Ratio is calculated to determine if the next available run time for the missed task is close enough to the next scheduled execution of the task that the missed task will be run in place of the upcoming scheduled task. Each ratio is compared to a user-defined ratio limit; if the calculated ratio does not exceed the limit, the missed task is executed at the first available opportunity. | 07-23-2009 |
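The abstract names a Lateness Ratio and a Closeness Ratio but does not publish their formulas, so the definitions below are assumptions: each ratio is normalized by the scheduling interval and compared against a user-defined limit, with the missed task run only while a ratio stays within its limit.

```python
# Assumed formulas (the abstract names the ratios but not their definitions):
# lateness = time since the miss / scheduling interval;
# closeness = time until the next scheduled run / scheduling interval.

def disposition_missed_task(now, missed_at, next_run, interval,
                            lateness_limit=0.5, closeness_limit=0.25):
    lateness = (now - missed_at) / interval
    closeness = (next_run - now) / interval
    if lateness <= lateness_limit:
        return "run late task now"         # still close enough to its slot
    if closeness <= closeness_limit:
        return "fold into the upcoming scheduled run"
    return "skip this occurrence"

# Hourly task (3600 s interval) missed 600 s ago; next run is 3000 s away:
print(disposition_missed_task(now=10600, missed_at=10000,
                              next_run=13600, interval=3600))
```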
20090187909 | SHARED RESOURCE BASED THREAD SCHEDULING WITH AFFINITY AND/OR SELECTABLE CRITERIA - Threads may be scheduled to be executed by one or more cores depending upon whether it is more desirable to minimize power or to maximize performance. If minimum power is desired, threads may be scheduled so that the active devices are most shared; this will minimize the number of active devices at the expense of performance. On the other hand, if maximum performance is desired, threads may be scheduled so that active devices are least shared. As a result, threads will have more active devices to themselves, resulting in greater performance at the expense of additional power consumption. Thread affinity with a core may also be taken into consideration when scheduling threads in order to improve the power consumption and/or performance of an apparatus. | 07-23-2009 |
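A minimal sketch of the pack-versus-spread policy described above, assuming only a per-core thread count is available (the core IDs, load map, and tie-breaking are invented):

```python
# Pack-versus-spread core selection; the core IDs and load map are invented.

def pick_core(core_loads, goal):
    """core_loads: {core_id: thread count}; goal: 'power' or 'performance'."""
    if goal == "power":
        # Prefer the busiest already-active core: devices are most shared.
        active = {c: n for c, n in core_loads.items() if n > 0}
        if active:
            return max(active, key=active.get)
    # Performance (or nothing active yet): least-shared core wins.
    return min(core_loads, key=core_loads.get)

loads = {0: 2, 1: 1, 2: 0, 3: 0}
print(pick_core(loads, "power"))        # 0: pack onto active devices
print(pick_core(loads, "performance"))  # 2: spread onto an idle device
```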
20090187910 | METHOD AND SYSTEM FOR AUTOMATED SCHEDULE CONTROL - A method for automated schedule control is disclosed. When a schedule appointment process is performed, an open services gateway initiative framework of an electronic device performs an automatic schedule control operation, detecting whether an execution for a schedule is required. If required, it is determined whether the schedule is an update operation. If so, a start or stop operation for a bundle corresponding to the schedule is performed. If not, the electronic device connects to a remote database at a preset time to determine whether a new manifest for the bundle corresponding to the schedule is detected. If detected, the new manifest is retrieved from the remote database and the bundle is updated accordingly. | 07-23-2009 |
20090187911 | Computer device with reserved memory for priority applications - A computer device comprises a processor, a memory, and an operating system kernel. The kernel comprises instructions for managing the execution of processes and for allocating memory to such processes. The device is able to execute stored applications that can be broken down into processes. The device comprises a special instruction sequence able to create an inactive process with reservation of a certain quantity of memory, and an application launcher arranged to remove the inactive process, thus freeing up the reserved memory, and then to command the launch of at least one particular application. The memory reserved beforehand is thus made quickly available for execution of the particular application. | 07-23-2009 |
20090193423 | WAKEUP PATTERN-BASED COLOCATION OF THREADS - A method of co-locating threads and corresponding system are described. The method comprises a first thread executing on a first processor awakening a second thread for execution on a second processor and assigning the second thread to execute on the first processor based on a determination that the first thread awakened the second thread at a prior awakening of the second thread. | 07-30-2009 |
20090199189 | Parallel Lock Spinning Using Wake-and-Go Mechanism - A wake-and-go mechanism is provided for a data processing system. The wake-and-go mechanism recognizes a programming idiom that indicates that a thread is spinning on a lock. The wake-and-go mechanism updates a wake-and-go array with a target address associated with the lock and sets a lock bit in the wake-and-go array. The thread then goes to sleep until the lock frees. The wake-and-go array may be a content addressable memory (CAM). When a transaction appears on the symmetric multiprocessing (SMP) fabric that modifies the value at a target address in the CAM, the CAM returns a list of storage addresses at which the target address is stored. The wake-and-go mechanism associates these storage addresses with the threads waiting for an event at the target addresses, and may wake the thread that is spinning on the lock. | 08-06-2009 |
20090199190 | System and Method for Priority-Based Prefetch Requests Scheduling and Throttling - A method, processor, and data processing system for implementing a framework for priority-based scheduling and throttling of prefetching operations. A prefetch engine (PE) assigns a priority to a first prefetch stream, indicating a relative priority for scheduling prefetch operations of the first prefetch stream. The PE monitors activity within the data processing system and dynamically updates the priority of the first prefetch stream based on the activity (or lack thereof). Low priority streams may be discarded. The PE also schedules prefetching in a priority-based scheduling sequence that corresponds to the priority currently assigned to the scheduled active streams. When there are no prefetches within a prefetch queue, the PE triggers the active streams to provide prefetches for issuing. The PE determines when to throttle prefetching, based on the current usage level of resources relevant to completing the prefetch. | 08-06-2009 |
20090204970 | DISTRIBUTED DOCUMENT HANDLING SYSTEM - Disclosed is a networked reproduction system comprising connected scanners, printers and servers. A reproduction job to be carried out includes a number of subtasks. For the execution of these subtasks, services distributed over the network are available. A service management system selects appropriate services and links them to form paths that are able to fulfill the reproduction job. The user may define additional constraints that apply to the job. A path, optimal with respect to constraints, is selected. | 08-13-2009 |
20090210878 | SYSTEM AND METHOD FOR DATA MANAGEMENT JOB PLANNING AND SCHEDULING WITH FINISH TIME GUARANTEE - A method is disclosed for scheduling data management jobs on a computer system that uses a dual level scheduling method. Macro level scheduling using a chained timer schedules the data management job for execution in the future. Micro level scheduling using an algorithm controls the actual dispatch of the component requests of a data management job to minimize impact on foreground programs. | 08-20-2009 |
20090217275 | PIPELINING HARDWARE ACCELERATORS TO COMPUTER SYSTEMS - A method of pipelining hardware accelerators of a computing system includes associating hardware addresses to at least one processing unit (PU) or at least one logical partition (LPAR) of the computing system, receiving a work request for an associated hardware accelerator address, and queuing the work request for a hardware accelerator using the associated hardware accelerator address. | 08-27-2009 |
20090217276 | METHOD AND APPARATUS FOR MOVING THREADS IN A SHARED PROCESSOR PARTITIONING ENVIRONMENT - The present invention provides a computer implemented method and apparatus to assign software threads to a common virtual processor of a data processing system having multiple virtual processors. A data processing system detects cooperation between a first thread and a second thread with respect to a lock associated with a resource of the data processing system. Responsive to detecting cooperation, the data processing system assigns the first thread to the common virtual processor. The data processing system moves the second thread to the common virtual processor, whereby a sleep time associated with the lock experienced by the first thread and the second thread is reduced below a sleep time experienced prior to the detecting cooperation step. | 08-27-2009 |
20090217277 | USE OF CPI POWER MANAGEMENT IN COMPUTER SYSTEMS - A device, system, and method are directed towards managing power consumption in a computer system with one or more processing units, each processing unit executing one or more threads. Threads are characterized based on a cycles per instruction (CPI) characteristic of the thread. A clock frequency of each processing unit may be configured based on the CPI of each thread assigned to the processing unit. In a system wherein higher clock frequencies consume greater amounts of power, the CPI may be used to determine a desirable clock frequency. The CPI of each thread may also be used to assign threads to each processing unit, so that threads having similar characteristics are grouped together. Techniques for assigning threads and configuring processor frequency may be combined to affect performance and power consumption. Various specifications or factors may also be considered when scheduling threads or determining processor frequencies. | 08-27-2009 |
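As a hedged illustration of CPI-driven grouping and frequency selection (the CPI samples, the 2.0 threshold, and the two-level frequency table are all assumptions; the abstract only says that CPI informs both decisions):

```python
# CPI-driven grouping and clock selection; CPI samples, the 2.0 threshold,
# and the two-level frequency table are all assumed for illustration.

THREADS = {"t1": 0.8, "t2": 1.1, "t3": 4.0, "t4": 3.6}  # name -> measured CPI

def plan_frequencies(threads, n_units=2):
    ranked = sorted(threads, key=threads.get)     # low CPI first
    size = (len(ranked) + n_units - 1) // n_units
    plans = []
    for u in range(n_units):
        group = ranked[u * size:(u + 1) * size]   # contiguous slice: similar CPIs
        if not group:
            continue
        avg_cpi = sum(threads[t] for t in group) / len(group)
        freq_ghz = 3.0 if avg_cpi < 2.0 else 1.5  # slow clock for memory-bound work
        plans.append((u, group, freq_ghz))
    return plans

for unit, group, freq in plan_frequencies(THREADS):
    print(f"unit {unit}: {group} @ {freq} GHz")
```

The intuition: low-CPI threads are compute-bound and benefit from a fast clock, while high-CPI threads mostly wait on memory, so grouping them together lets their processing unit run at a slower, cheaper frequency.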
20090222825 | DATA RACE DETECTION IN A CONCURRENT PROCESSING ENVIRONMENT - A method for detecting race conditions in a concurrent processing environment is provided. The method comprises implementing a data structure configured for storing data related to at least one task executed in a concurrent processing computing environment, each task represented by a node in the data structure; and assigning to a node in the data structure at least one of a task number, a wait number, and a wait list; wherein the task number uniquely identifies the respective task, wherein the wait number is calculated based on a segment number of the respective task's parent node, and wherein the wait list comprises at least an ancestor's wait number. The method may further comprise monitoring a plurality of memory locations to determine if a first task accesses a first memory location, wherein said first memory location was previously accessed by a second task. | 09-03-2009 |
20090222826 | System and Method for Managing the Deployment of an Information Handling System - A system and method for automated deployment of an information handling system are disclosed. A method for managing the deployment of an information handling system may include executing a deployment application on an information handling system, the deployment application including one or more tasks associated with the deployment of the information handling system. The method may further include automatically determining for a particular task whether an execution time for the particular task is within a predetermined range of execution times. The method may further include automatically performing an error-handling task in response to determining that the execution time for the particular task is not within the predetermined range of execution times. | 09-03-2009 |
20090222827 | CONTINUATION BASED DECLARATIVE DEFINITION AND COMPOSITION - Declarative definition and composition of activities of a continuation based runtime. When formulating such a declarative activity of a continuation-based runtime, the activity may be formulated in accordance with a declarative activity schema and include a properties portion that declaratively defines one or more interface parameters of the declarative activity, and a body portion that declaratively defines an execution behavior of the declarative activity. The declarative activities may be hierarchically structured such that a parent declarative activity may use one or more child activities to define its behavior, where one or more of the child activities may also be defined declaratively. | 09-03-2009 |
20090222828 | MANAGEMENT PLATFORM AND ASSOCIATED METHOD FOR MANAGING SMART METERS - The present invention relates to a management platform for monitoring and managing one or more smart meters. The management platform comprises means for communicating with smart meters and a workflow handler for executing a workflow. A workflow specifies a process for management of the smart meters. | 09-03-2009 |
20090222829 | METHOD AND APPARATUS FOR DECOMPOSING I/O TASKS IN A RAID SYSTEM - A data access request to a file system is decomposed into a plurality of lower-level I/O tasks. A logical combination of physical storage components is represented as a hierarchical set of objects. A parent I/O task is generated from a first object in response to the data access request. A child I/O task is generated from a second object to implement a portion of the parent I/O task. The parent I/O task is suspended until the child I/O task completes. The child I/O task is executed in response to an occurrence of an event that a resource required by the child I/O task is available. The parent I/O task is resumed upon an event indicating completion of the child I/O task. Scheduling of any child I/O task is not conditional on execution of the parent I/O task, and a state diagram regulates the child I/O tasks. | 09-03-2009 |
20090235259 | Synchronous Adaption of Asynchronous Modules - A program disposed on a computer readable medium, having a main program with a first routine for issuing commands in an asynchronous manner and a second routine for determining whether the commands have been completed in an asynchronous manner. An auxiliary program adapts the main program to behave in a synchronous manner, by receiving control from the first routine, waiting a specified period of time with a wait routine, passing control to the second routine to determine whether any of the commands have been completed during the specified period of time, receiving control back from the second routine, and determining whether all of the commands have been completed. When all of the commands have not been completed, then the auxiliary program passes control back to the wait routine. When all of the commands have been completed, then the auxiliary program ends. | 09-17-2009 |
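A minimal sketch of the adapter loop described above, with hypothetical `issue_all` and `poll_completed` callables standing in for the main program's two asynchronous routines:

```python
# Adapter that makes fire-and-forget commands behave synchronously by
# polling for completion. issue_all/poll_completed are stand-ins for the
# main program's two asynchronous routines.

import time

def synchronous_adapter(issue_all, poll_completed, interval=0.05, timeout=5.0):
    """issue_all() -> iterable of command ids; poll_completed() -> done ids."""
    outstanding = set(issue_all())            # first routine: issue commands
    deadline = time.monotonic() + timeout
    while outstanding:
        time.sleep(interval)                  # wait a specified period of time
        outstanding -= set(poll_completed())  # second routine: what finished?
        if time.monotonic() > deadline:
            raise TimeoutError(f"still pending: {outstanding}")
    # Control returns to the caller only once every command has completed.

pending = ["cmd1", "cmd2", "cmd3"]            # demo stubs: one completes per poll
synchronous_adapter(lambda: list(pending),
                    lambda: [pending.pop()] if pending else [])
print("all commands completed")
```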
20090235260 | Enhanced Control of CPU Parking and Thread Rescheduling for Maximizing the Benefits of Low-Power State - A system may comprise a plurality of processing units and a scheduler configured to maintain a record for each respective processing unit. Each respective record may comprise entries which may indicate 1) how long the respective processing unit has been residing in an idle state, 2) a present power-state in which the respective processing unit resides, and 3) whether the respective processing unit is a designated default (bootstrap) processing unit. The scheduler may select one or more of the plurality of processing units according to their respective records, and assign impending instructions to be executed on the selected one or more processing units. Where additional processing units are required, the scheduler may also insert an instruction to trigger an inter-processor interrupt to transition one or more processing units out of idle-state. The scheduler may then assign some impending instructions to these one or more processing units. | 09-17-2009 |
20090235261 | IMAGE PROCESSING SYSTEM, IMAGE PROCESSING APPARATUS, AND CONTROL METHOD OF IMAGE PROCESSING APPARATUS - An image processing system capable of enhancing the reliability of secret leakage prevention, which includes an image processing apparatus, an access control apparatus that issues authority information on each user, and a job history management apparatus that manages job histories. Authority information on a user logging in to the image processing apparatus is acquired. With reference to the authority information, whether or not a job for which an execution instruction is given by the user is executable is determined. If executable, the job is executed. If the job is not executable, whether or not the job is executable on condition that a job history is transmitted to the job history management apparatus is further determined. If conditionally executable, the job is executed, and a history of the executed job is acquired and transmitted to the job history management apparatus. | 09-17-2009 |
20090235262 | EFFICIENT DETERMINISTIC MULTIPROCESSING - A hardware and/or software facility for controlling the order of operations performed by threads of a multithreaded application on a multiprocessing system is provided. The facility may serialize or selectively-serialize execution of the multithreaded application such that, given the same input to the multithreaded application, the multiprocessing system deterministically interleaves operations, thereby producing the same output each time the multithreaded application is executed. The facility divides the execution of the multithreaded application code into two or more quanta, each specifying a deterministic number of operations, and the facility specifies a deterministic order in which the threads execute the two or more quanta. The deterministic number of operations may be adapted to follow the critical path of the multithreaded application. Specified memory operations may be executed regardless of the deterministic order, such as those accessing provably local data. The facility may provide dynamic bug avoidance and sharing of identified bug information. | 09-17-2009 |
20090235263 | JOB ASSIGNMENT APPARATUS, JOB ASSIGNMENT METHOD, AND COMPUTER-READABLE MEDIUM - A management node first extracts free computation nodes that are executing none of the jobs, in order to assign a new job to any one of the computation nodes, and specifies a communication target computation node to be used when executing an execution target job. Subsequently, the management node calculates, with respect to all of the computation nodes executing none of the jobs at that point of time, a determination value V | 09-17-2009 |
20090249343 | SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR RECEIVING TIMER OBJECTS FROM LOCAL LISTS IN A GLOBAL LIST FOR BEING USED TO EXECUTE EVENTS ASSOCIATED THEREWITH - A system, method, and computer program product are provided for receiving timer objects from local lists in a global list for being used to execute events associated therewith. A plurality of execution contexts are provided for receiving timer objects. Additionally, a plurality of local lists are provided, each corresponding with one of the execution contexts, for receiving the timer objects therefrom. Furthermore, a global list is provided for receiving the timer objects from the local lists for being used to execute events associated therewith. | 10-01-2009 |
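As a rough illustration of the local-to-global timer flow (the context names, the heap-backed global list, and the drain step are assumptions; the abstract specifies only the two-level structure):

```python
# Two-level timer bookkeeping: per-context local lists drain into one global
# list from which due events fire. Context names and the heap are assumptions.

import heapq, itertools

local_lists = {"ctx0": [], "ctx1": []}   # per-execution-context local lists
global_list = []                         # global list, kept as a deadline min-heap
counter = itertools.count()              # tie-breaker so events never compare

def add_timer(ctx, deadline, event):
    local_lists[ctx].append((deadline, next(counter), event))

def collect():
    for timers in local_lists.values():  # receive timer objects from local lists
        for t in timers:
            heapq.heappush(global_list, t)
        timers.clear()

def fire_due(now):
    while global_list and global_list[0][0] <= now:
        _, _, event = heapq.heappop(global_list)
        event()                          # execute the associated event

add_timer("ctx0", 10, lambda: print("event A"))
add_timer("ctx1", 5, lambda: print("event B"))
collect()
fire_due(now=7)                          # fires event B only
```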
20090249344 | METHOD AND APPARATUS FOR THREADED BACKGROUND FUNCTION SUPPORT - The present invention provides a computer implemented method and apparatus for a built-in function of a shell to execute in a thread of an interactive shell process. The data processing system receives a request to execute the built-in function. The data processing system determines that the request includes a thread creating indicator. The data processing system schedules a thread to execute the built-in function, in response to a determination that the request includes the thread creating indicator, wherein the thread is controlled by the interactive shell process and shares an environment of the interactive shell process. The data processing system declares a variable based on at least one instruction of the built-in function. Finally, the data processing system may access the variable. | 10-01-2009 |
20090249345 | Operating System Fast Run Command - A fast sub-process is provided in an operating system for a digital signal processor (DSP). The fast sub-process executes a sub-process without a kernel first determining whether the sub-process resides in an internal memory, as long as certain conditions have been satisfied. One of the conditions is that a programmer determines that the sub-process has been previously loaded into internal memory and executed. Another condition is that the programmer has ensured that a process calling the sub-process has not called any other sub-process between the last execution and the current execution request. Yet another condition is that the programmer ensures that the system has not called another overlapping sub-process between the last execution and the current execution request. | 10-01-2009 |
20090249346 | IMAGE FORMING APPARATUS, INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - An image forming apparatus is provided. The image forming apparatus includes: a job reception unit configured, in response to an execution request for a job, to receive a second program in which an insertion process for a first program that executes the job is described or to receive identification information of the second program; an application unit configured to apply the second program to the first program loaded in a memory; and a job execution unit configured to execute the job based on the first program to which the second program is applied. | 10-01-2009 |
20090249347 | VIRTUAL MULTIPROCESSOR, SYSTEM LSI, MOBILE PHONE, AND CONTROL METHOD FOR VIRTUAL MULTIPROCESSOR - A virtual multiprocessor according to the present invention includes: one or more processors that execute programs while switching between the programs at each of assigned times; a scheduling unit that performs scheduling that determines an execution sequence of the programs and the one or more processors that are to execute one or more of the programs, wherein the scheduling unit performs the scheduling at a timing dependent on an assigned time associated with a corresponding one of the programs being executed by the one or more processors, in the case where a first mode is set, and performs the scheduling at a timing not dependent on the assigned time so that at least one of the one or more processors does not execute the programs, in the case where a second mode is set. | 10-01-2009 |
20090249348 | METHOD AND APPARATUS FOR OPERATING A THREAD - A method and apparatus for operating a thread are disclosed. The method includes: receiving a thread operation request that carries a thread operation ID and thread information related operation; and operating a thread according to the operation request. The thread in the embodiments of the present disclosure is independent of the actual content. Therefore, the thread file can be operated according to the requirements of the user, and thus the user experience is improved. | 10-01-2009 |
20090254907 | METHOD FOR MULTITHREADING AN APPLICATION USING PARTITIONING TO ALLOCATE WORK TO THREADS - A method for assigning work to a plurality of threads using a primitive data element to partition a work load into a plurality of partitions. A first partition is assigned to a first thread and a second partition is assigned to a second thread of the plurality of threads. A method for improving the concurrency of a multithreaded program by replacing a queue structure storing a plurality of tasks to be performed by a plurality of threads with a partition function. A computer system including a processor unit configured to run a plurality of threads and a system memory coupled to the processor unit that stores a multithreaded program. The multithreaded program workload is partitioned into a plurality of partitions using a primitive data element and a first partition of the plurality of partitions is assigned to a first thread of the plurality of threads for execution. | 10-08-2009 |
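A sketch of the queue-replacing partition function described above, under the assumption that the primitive data element is an integer index and the partition rule is a simple stride (both invented for illustration):

```python
# Partition function replacing a shared task queue: a primitive integer index
# maps each work item straight to a thread. The stride rule is an assumed example.

from threading import Thread

def partition(items, n_threads):
    """Primitive-index rule: item i goes to thread i % n_threads."""
    return [items[i::n_threads] for i in range(n_threads)]

def worker(tid, chunk, results):
    results[tid] = sum(x * x for x in chunk)   # stand-in for real work

items = list(range(100))
results = {}
threads = [Thread(target=worker, args=(i, part, results))
           for i, part in enumerate(partition(items, 4))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sum(results.values()) == sum(x * x for x in items))  # True
```

Because every item maps to exactly one thread up front, the threads never contend for a shared queue, which is the concurrency improvement the abstract claims.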
20090254908 | CUSTOM SCHEDULING AND CONTROL OF A MULTIFUNCTION PRINTER - A method and system for implementing custom scheduling policies including making alterations to internal task scheduling policies or firmware operating within the MFP throughout the lifetime of the MFP. Internal task scheduling policy alterations can be made either remotely or on-site at a customer location. Custom scheduling policies can be implemented for different periods of time. The MFP includes a task run-time controller to receive and process the internal task scheduling policy alterations. The task run-time controller includes a task tuner, which may implement the internal task scheduling policy alterations responsive to usage characteristics of the MFP. | 10-08-2009 |
20090254909 | Methods and Apparatus for Power-aware Workload Allocation in Performance-managed Computing Environments - An exemplary method of allocating a workload among a set of computing devices includes obtaining at least one efficiency model for each device. The method also includes, for each of a set of allocations of the workload among the devices, determining, for each device, the power consumption for the device to perform the workload allocated to the device by the allocation, the power consumption being determined based on the at least one efficiency model for each device; and determining a total power consumption of the devices. The method also includes selecting an allocation of the workload among the devices based at least in part on the total power consumption of the devices for each allocation. The method also includes implementing the selected allocation of the workload among the devices. | 10-08-2009 |
20090254910 | PRINTING SYSTEM SCHEDULER METHODS AND SYSTEMS - Provided are printing system scheduler methods and systems. Specifically, a shadow scheduler is disclosed which provides alternative modular printing system configurations, relative to a base modular printing system configuration. | 10-08-2009 |
20090254911 | INFORMATION PROCESSING APPARATUS - An information processing apparatus having a storage that stores identification information for identifying an event occurring in a forefront module and completion information for identifying a module having completed the corresponding process, an identifier that identifies, based on the completion information, an event for which a module has not completed the process, and an instructor that provides the identification information related to the event identified by the identifier to the forefront module and instructs the forefront module to execute the process related to the identified event. Each of the modules operates as a determiner that reads the completion information corresponding to the received identification information and determines whether to skip the process of its own module, and as a deliverer that delivers the identification information to the immediately succeeding module in a case where the determiner determines to skip the process of its own module. | 10-08-2009 |
20090254912 | SYSTEM AND METHOD FOR BUILDING APPLICATIONS, SUCH AS CUSTOMIZED APPLICATIONS FOR MOBILE DEVICES - A system and method for building applications, such as applications that cause a mobile device to perform a task, is described. In some examples, the system provides one or more plugins, a framework for the plugins, and configures the plugins to build a customized application for a mobile device. The plugins may include code configured to perform a task, display one or more pages associated with performance of the task, perform a transaction during performance of the task, and so on. | 10-08-2009 |
20090260012 | Workload Scheduling - Computer-implemented methods, computer program products and systems for a scalable workload scheduling system to accommodate increasing workloads within a heterogeneous distributed computing environment. In one embodiment, a modified average consensus method is used to evenly distribute network traffic and jobs among a plurality of computers. The user establishes a virtual network comprising a logical topology of the computers. State information from each computer is propagated to the rest of the computers by the modified average consensus method, thereby enabling the embodiment to dispense with the need for a master server, by allowing the individual computers to themselves select jobs which optimally match a desired usage of their own resources to the resources required by the jobs. | 10-15-2009 |
20090265711 | PROCESSING OF ELECTRONIC DOCUMENTS TO ACHIEVE MANUFACTURING EFFICIENCY - A method can be used for processing electronic documents, each of which is assigned a plurality of attributes. The documents are sorted into one or more groups based on the attributes, such that the electronic documents of each group share at least one of the attributes. The attributes of the documents in each group are analyzed to determine an appropriate processing site for each group, and then the groups are each routed to their respective processing sites determined to be appropriate therefor. | 10-22-2009 |
20090271791 | SYSTEM AND METHOD FOR PERFORMING TIME-FLEXIBLE CALENDRIC STORAGE OPERATIONS - A system and method are provided for creating a non-standard calendar that may have customized attributes, such as number of days in a month, first day of a month, number of months in a year, first month of a year, number of years, or other customized attributes. Such non-standard calendars may be similar to non-standard calendars used by companies, enterprises or other organizations, such as a fiscal calendar, academic calendar, or other calendar. A storage management system manager may have a database of storage policies that include preferences and frequencies for performing storage operations, and associations with a non-standard calendar. The storage manager can initiate storage operations based on the storage policy using data that may be identified according to selection criteria, and determine a time to perform the storage operation according to a non-standard calendar. | 10-29-2009 |
20090276778 | CONTEXT SWITCHING IN A SCHEDULER - A scheduler in a process of a computer system detects a task with an associated execution context that has not been previously invoked by the scheduler. The scheduler executes the task on a processing resource without performing a context switch if the processing resource executed a previous task to completion. The scheduler stores the execution context originally associated with the task for later use. | 11-05-2009 |
20090276779 | JOB MANAGEMENT APPARATUS - When there is a job activation request accompanied by variable information in which an execution attribute and an identifier of a job are associated, a job definition in which an execution attribute is described with an arbitrary identifier is referred to, and based on the variable information, an identifier within the job definition is replaced with the execution attribute to create a job. Then, the job created in this manner is activated. | 11-05-2009 |
20090276780 | Method and apparatus for dynamically processing events based on automatic detection of time conflicts - A scheduling apparatus, system, and article including a machine-accessible medium, along with a method of dynamically processing events, are disclosed. The apparatus may include a receiving module capable of receiving information associated with an event. The information may include an event name and event time. The apparatus may also include a memory capable of storing the information associated with the event, and being communicatively coupled with the receiving module. The memory may be used to store a plurality of schedule items, at least one of which may be associated with an item time. The method may include selecting an event associated with a transaction and event time, determining whether a conflict exists, and adjusting the set of events stored in the memory to include the information associated with the event if no conflict is found. | 11-05-2009 |
20090282411 | SCHEDULING METHOD AND SYSTEM - A scheduling method and system. The method includes receiving, by a computing system, job related data associated with a plurality of jobs to be executed by said computing system, time constraint data, and maximum time shift values associated with the time constraint data. The computing system determines that a start time for execution of a first job of the plurality of jobs should be rescheduled. The computing system receives workload statistics. The computing system determines based on the workload statistics, a first start time for the first job. The computing system compares the time constraint data with the first start time to determine if the first start time is in conflict with the time constraint data. The computing system stores the first start time. | 11-12-2009 |
20090282412 | MULTI-LAYER WORKFLOW ARCHITECTURE - A multi-layer workflow architecture for a print shop is disclosed. The workflow architecture includes a workflow front end, service bus, and service providers. The workflow front end provides an interface to print shop operators. The service providers are each associated with a device in the print shop. The service bus represents the layer between the workflow front end and the service providers. In operation, the service providers report device capabilities for devices to the service bus. The workflow front end receives the device capabilities from the service bus, and provides the device capabilities to a user to allow the user to define a job ticket based on the device capabilities. The service bus identifies the processes defined in the job ticket, and identifies the service providers operable to provide the processes. The service bus then routes process messages to the identified service providers to execute the processes on the devices. | 11-12-2009 |
20090282413 | Scalable Scheduling of Tasks in Heterogeneous Systems - Illustrative embodiments provide a computer implemented method, a data processing system and a computer program product for scalable scheduling of tasks in heterogeneous systems. According to one embodiment, the computer implemented method comprises fetching a set of tasks to form a received input, estimating run times of the tasks, calculating average estimated completion times of the tasks, producing a set of ordered tasks from the received input to form a task list, identifying a machine to be assigned, and assigning an identified task from the task list to an identified machine. | 11-12-2009 |
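The abstract's steps map naturally onto a min-completion-time heuristic; the sketch below is an assumed reading, not the patented algorithm (the task and machine names, and the ordering by average estimated completion time, are illustrative):

```python
# Assumed reading of the steps above as a min-completion-time heuristic:
# order tasks by average estimated completion time, then give each to the
# machine where it would finish soonest. Task/machine names are invented.

def schedule(task_runtimes, machines):
    """task_runtimes: {task: {machine: estimated run time}}."""
    ready_at = {m: 0.0 for m in machines}          # when each machine frees up

    def avg(task):                                 # average estimated completion
        return sum(task_runtimes[task].values()) / len(machines)

    assignment = {}
    for task in sorted(task_runtimes, key=avg):    # the ordered task list
        best = min(machines,
                   key=lambda m: ready_at[m] + task_runtimes[task][m])
        assignment[task] = best                    # assign task to machine
        ready_at[best] += task_runtimes[task][best]
    return assignment

tasks = {"t1": {"fast": 2, "slow": 6},
         "t2": {"fast": 4, "slow": 5},
         "t3": {"fast": 1, "slow": 9}}
print(schedule(tasks, ["fast", "slow"]))  # {'t1': 'fast', 't2': 'slow', 't3': 'fast'}
```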
20090288086 | LOCAL COLLECTIONS OF TASKS IN A SCHEDULER - A scheduler in a process of a computer system includes a local collection of tasks for each processing resource allocated to the scheduler and at least one general collection of tasks. The scheduler assigns each task that becomes unblocked to the local collection corresponding to the processing resource that caused the task to become unblocked. When a processing resource becomes available, the processing resource attempts to execute the most recently added task in the corresponding local collection. If there are no tasks in the corresponding local collection, the available processing resource attempts to execute a task from the general collection. | 11-19-2009 |
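A compact sketch of the lookup order described above. Treating each local collection as a LIFO list and the general collection as a FIFO deque is an assumption; the abstract specifies only where unblocked tasks go and what an available resource tries first.

```python
# Local collections first, general collection as fallback. LIFO lists and a
# FIFO deque are assumed data-structure choices, not specified by the abstract.

from collections import deque

local = {0: [], 1: []}          # one local collection per processing resource
general = deque()               # the general collection

def task_unblocked(resource, task):
    local[resource].append(task)    # goes to the resource that unblocked it

def next_task(resource):
    if local[resource]:
        return local[resource].pop()   # most recently added local task
    if general:
        return general.popleft()       # otherwise take a general task
    return None

general.extend(["g1", "g2"])
task_unblocked(0, "a")
task_unblocked(0, "b")
print(next_task(0))  # 'b': newest task in resource 0's local collection
print(next_task(1))  # 'g1': resource 1 falls back to the general collection
```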
20090288087 | SCHEDULING COLLECTIONS IN A SCHEDULER - A scheduler in a process of a computer system includes a respective scheduling collection for each scheduling node in the scheduler. The scheduling collections are mapped into at least a partial search order based on one or more execution metrics. When a processing resource in a scheduling node becomes available, the processing resource first attempts to locate a task to execute in a scheduling collection corresponding to the scheduling node before searching other scheduling collections in an order specified by the search order. | 11-19-2009 |
20090288088 | PARALLEL EFFICIENCY CALCULATION METHOD AND APPARATUS - This invention provides a parallel efficiency calculation method that can be applied, even in a case where a load balance is not maintained, to many parallel processings, including heterogeneous computer system environments, and that quantitatively correlates a parallel efficiency with a load balance contribution ratio and a virtual parallelization ratio, as parallel performance evaluation indexes, and with parallel performance impediment factor contribution ratios. A parallel efficiency E | 11-19-2009 |
20090293060 | METHOD FOR JOB SCHEDULING WITH PREDICTION OF UPCOMING JOB COMBINATIONS - A method for scheduling different combinations of jobs simultaneously running on a shared hardware platform is disclosed. Schedules may be created while executing the current set of jobs, for one or more possible sets of jobs that may occur after a change in the current set of jobs. In at least one embodiment, the present invention may be implemented in a SDR system where the jobs may correspond to radios in the SDR system. The possible combinations of radios that may occur after a change in the set of currently running radios may be determined at run time by adding or removing one radio at a time from the set of currently running radios. | 11-26-2009 |
20090300623 | METHODS AND SYSTEMS FOR ASSIGNING NON-CONTINUAL JOBS TO CANDIDATE PROCESSING NODES IN A STREAM-ORIENTED COMPUTER SYSTEM - A system and method for choosing non-continual jobs to run in a stream-based distributed computer system includes determining a total amount of resources to be consumed by non-continual jobs. A priority threshold is determined, above which jobs will be accepted and below which jobs will be rejected. Overall penalties are minimized relative to the priority threshold based on estimated completion times of the jobs. System constraints are applied to ensure that jobs meet set criteria such that a plurality of non-continual jobs are scheduled which consider the system constraints and minimize overall penalties using available resources. | 12-03-2009 |
20090300624 | Tracking data processing in an application carried out on a distributed computing system - Methods, systems, and products are disclosed for tracking data processing in an application carried out on a distributed computing system, the distributed computing system including a plurality of computing nodes connected through a data communications network, the application carried out by a plurality of pluggable processing components installed on the plurality of computing nodes, the pluggable processing components including a pluggable processing provider component and a pluggable processing consumer component, that include: identifying, by the provider component, data satisfying predetermined processing criteria, the criteria specifying that the data is relevant to processing provided by the consumer component; passing, by the provider component, the data to the next pluggable processing component in the application for processing, including maintaining access to the data; receiving, by the consumer component, the data during execution of the application; and sending, by the consumer component, a receipt indicating that the consumer component received the data. | 12-03-2009 |
20090300625 | Managing The Performance Of An Application Carried Out Using A Plurality Of Pluggable Processing Components - Methods, apparatus, and products are disclosed for managing the performance of an application carried out using a plurality of pluggable processing components, the pluggable processing components executed on a plurality of compute nodes, that include: identifying a current configuration of the pluggable processing components for carrying out the application; receiving a plurality of performance indicators produced during execution of the pluggable processing components; and altering the current configuration of the pluggable processing components in dependence upon the performance indicators and one or more additional pluggable processing components. | 12-03-2009 |
20090300626 | Scheduling for Computing Systems With Multiple Levels of Determinism - In a computing system, a method and system for scheduling software process execution and inter-process communication is introduced. Processes or groups of processes are assigned to execute within timeslots of a schedule according to associated execution frequencies, execution durations and inter-process communication requirements. The schedules allow development and test of the processes to be substantially decoupled from one another so that software engineering cycle time can be reduced. | 12-03-2009 |
20090300627 | SCHEDULER FINALIZATION - A runtime environment allows a scheduler in a process of a computer system to be finalized prior to the process completing. The runtime environment causes execution contexts that are inducted into the scheduler and execution contexts created by the scheduler to be tracked. The runtime environment finalizes the scheduler subsequent to each inducted execution context exiting the scheduler and each created execution context being retired by the scheduler. | 12-03-2009 |
20090300628 | LOG QUEUES IN A PROCESS - A logger in a process of a computer system creates a log queue for each execution context and/or processing resource in the process. A log is created in the log queue for each log request and log information associated with the log request is stored into the log. All logs in each log queue except for the most recently added log in each log queue are flushed prior to the process completing. | 12-03-2009 |
20090300629 | Scheduling of Multiple Tasks in a System Including Multiple Computing Elements - A method for controlling parallel process flow in a system including a central processing unit (CPU) attached to and accessing system memory, and multiple computing elements. The computing elements (CEs) each include a computational core, local memory and a local direct memory access (DMA) unit. The CPU stores in the system memory multiple task queues in a one-to-one correspondence with the computing elements. Each task queue, which includes multiple task descriptors, specifies a sequence of tasks for execution by the corresponding computing element. Upon programming the computing element with task queue information of the task queue, the task descriptors of the task queue in system memory are accessed. The task descriptors of the task queue are stored in the local memory of the computing element. The accessing and the storing of the data by the CEs is performed using the local DMA unit. When the tasks of the task queue are executed by the computing element, the execution is typically performed in parallel by at least two of the computing elements. The CPU is interrupted by the computing elements only upon their fully executing the tasks of their respective task queues. | 12-03-2009 |
20090300630 | WAITING BASED ON A TASK GROUP - A method includes creating a first task group. A plurality of task object representations are added to the first task group. Each representation corresponds to one task object in a first plurality of task objects. A wait operation is performed on the first task group that waits for at least one of the task objects in the first plurality of task objects to complete. | 12-03-2009 |
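The wait-on-a-task-group behavior resembles a wait-any primitive; as a loose analogy only (the patent is not about Python), the standard-library futures API expresses it directly:

```python
# Wait-any over a group of tasks, expressed with standard-library futures.

from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED
import time

def job(seconds):
    time.sleep(seconds)
    return seconds

with ThreadPoolExecutor() as pool:
    group = [pool.submit(job, s) for s in (0.3, 0.1, 0.5)]   # add task objects
    done, pending = wait(group, return_when=FIRST_COMPLETED) # wait on the group
    print("first finished:", [f.result() for f in done])     # [0.1]
```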
20090307696 | THREAD MANAGEMENT BASED ON DEVICE POWER STATE - Managing threads for executing on a computing device based on a power state of the computing device. A power priority value corresponding to each of the threads is compared to a threshold value associated with the power state. The threads having an assigned power priority value that violates the threshold value are suspended from executing, while the remaining threads are scheduled for execution. When the power state of the computing device changes, the threads are re-evaluated for suspension or execution. In an embodiment, the threads on a mobile computing device are managed to maintain the processor in a low power state to reduce power consumption. | 12-10-2009 |
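A minimal sketch of the threshold test described above, with an assumed numeric scale in which a higher power priority marks a more essential thread and each power state carries its own threshold:

```python
# Threshold gate: threads whose power priority violates the current power
# state's threshold are suspended. The numeric scale is an assumption.

POWER_STATE_THRESHOLD = {"full": 0, "balanced": 5, "low": 8}

def schedule_for_state(threads, state):
    """threads: {name: power priority}, higher meaning more essential."""
    threshold = POWER_STATE_THRESHOLD[state]
    run = [t for t, p in threads.items() if p >= threshold]
    suspend = [t for t, p in threads.items() if p < threshold]
    return run, suspend

threads = {"ui": 9, "sync": 6, "indexer": 2}
print(schedule_for_state(threads, "low"))       # (['ui'], ['sync', 'indexer'])
print(schedule_for_state(threads, "balanced"))  # (['ui', 'sync'], ['indexer'])
# On a power-state change, simply re-evaluate: schedule_for_state(threads, "full")
```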
20090307697 | Method and Apparatus for Efficient Gathering of Information in a Multicore System - Methods and apparatus for gathering information from processors by using compressive sampling are presented. The invention can monitor multicore processor performance and schedule processor tasks to optimize processor performance. Using compressive sampling minimizes processor-memory bus usage by the performance monitoring function. An embodiment of the invention is a method of gathering information from a processor, the method comprising compressive sampling of information from at least one processor core. The compressive sampling produces compressed information. The processor comprises the at least one processor core, and the at least one processor core is operative to process data. | 12-10-2009 |
20090307698 | INFORMATION HANDLING SYSTEM POWER MANAGEMENT DEVICE AND METHODS THEREOF - An information handling system includes a set of power and performance profiles. Based on which of the profiles has been selected, the information handling system selects a thread scheduling table for provision to an operating system. The thread scheduling table determines the sequence of processor cores at which program threads are scheduled for execution. In a power-savings mode, the corresponding thread scheduling table provides for threads to be concentrated at a subset of the available processor cores, increasing the frequency with which the information handling system can place unused processors in a reduced power state. | 12-10-2009 |
20090307699 | APPLICATION PROGRAMMING INTERFACES FOR DATA PARALLEL COMPUTING ON MULTIPLE PROCESSORS - A method and an apparatus for a parallel computing program calling APIs (application programming interfaces) in a host processor to perform a data processing task in parallel among compute units are described. The compute units are coupled to the host processor including central processing units (CPUs) and graphic processing units (GPUs). A program object corresponding to a source code for the data processing task is generated in a memory coupled to the host processor according to the API calls. Executable codes for the compute units are generated from the program object according to the API calls to be loaded for concurrent execution among the compute units to perform the data processing task. | 12-10-2009 |
20090313629 | Task processing system and task processing method - Provided are a task processing system and a task processing method that can reduce power consumption and prevent overhead or processing load from increasing, even in a system which performs frequency switching frequently. A main processor determines at least one of the tasks to be executed by a sub processor in each of a plurality of time segments, each having a predetermined length, and determines, by the end of an nth (n is an integer that satisfies n≧1) time segment, a clock frequency necessary for executing the task within an (n+1)th time segment based on information of a required number of cycles for the task to be executed by the sub processor in the (n+1)th time segment. The clock generation/control circuit supplies, in the (n+1)th time segment, to the sub processor a clock signal according to the clock frequency determined by the main processor in the nth time segment. | 12-17-2009 |
20090313630 | COMPUTER PROGRAM, APPARATUS, AND METHOD FOR SOFTWARE MODIFICATION MANAGEMENT - A software modification management program is executed by a computer, whereby, when modification data is input, a modification application scheduled node decision unit generates a modification application scheduled node list. A modification applicable node selection unit successively extracts the node IDs of nodes which are not executing a job, from the modification application scheduled node list to set the extracted node IDs as modification applicable node IDs until the value of a modification-in-progress node counter indicating the number of nodes to which software modification is being applied reaches a predetermined upper limit value. A service management unit stops the service of nodes corresponding to the modification applicable node IDs. In accordance with the modification data, a modification unit modifies target software installed on the nodes whose service has been stopped. | 12-17-2009 |
20090320027 | FENCE ELISION FOR WORK STEALING - Methods and systems for statistically eliding fences in a work stealing algorithm are disclosed. A data structure comprising a head pointer, tail pointer, barrier pointer and an advertising flag allows for dynamic load-balancing across processing resources in computer applications. | 12-24-2009 |
20090320028 | SYSTEM AND METHOD FOR LOAD-ADAPTIVE MUTUAL EXCLUSION WITH WAITING PROCESS COUNTS - A system and associated method for mutually exclusively executing a critical section by a process in a computer system. The critical section accessing a shared resource is controlled by a lock. The method measures a detection time when a lock contention is detected, a wait time representing a duration of wait for the lock at each failed attempt to acquire the lock, and a delay representing a total lapse of time from the detection time till the lock is acquired. The delay is logged and used to calculate an average delay, which is compared with a suspension overhead time of the computer system to determine whether to spin or to suspend the process while waiting for the lock to be released. The number of processes waiting for the lock and the number of processes suspended are respectively counted to optimize the method. | 12-24-2009 |
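As an illustration of the spin-or-suspend decision (the 50 µs suspension overhead and the logged delays are invented; the abstract specifies only that the average logged delay is compared with the system's suspension overhead):

```python
# Spin only while the average logged lock-acquisition delay stays below the
# system's suspension overhead. Both timing constants are invented.

SUSPEND_OVERHEAD_S = 50e-6        # assumed cost of suspending plus resuming

class AdaptiveLockPolicy:
    def __init__(self):
        self.delays = []          # logged delays: detection -> acquisition

    def log_delay(self, detection_time, acquired_time):
        self.delays.append(acquired_time - detection_time)

    def should_spin(self):
        if not self.delays:
            return True           # no history yet: optimistic spin
        avg = sum(self.delays) / len(self.delays)
        return avg < SUSPEND_OVERHEAD_S   # short waits: spinning is cheaper

policy = AdaptiveLockPolicy()
policy.log_delay(0.0, 20e-6)      # 20 us wait observed
policy.log_delay(0.0, 30e-6)      # 30 us wait observed
print(policy.should_spin())       # True: average 25 us < 50 us overhead
```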
20090320029 | DATA PROTECTION SCHEDULING, SUCH AS PROVIDING A FLEXIBLE BACKUP WINDOW IN A DATA PROTECTION SYSTEM - A data protection scheduling system provides a flexible or rolling data protection window that analyzes various criteria to determine an optimal or near optimal time for performing data protection or secondary copy operations. While prior systems may have scheduled backups at an exact time (e.g., 2:00 a.m.), the system described herein dynamically determines when to perform the backups and other data protection storage operations, such as based on network load, CPU load, expected duration of the storage operation, rate of change of user activities, frequency of use of affected computer systems, trends, and so on. | 12-24-2009 |
20090320030 | METHOD FOR MANAGEMENT OF TIMEOUTS - A method of managing a multithreaded computer system comprises instantiating, in response to each transaction initiated by a first thread of a plurality of threads, a timer object including a scheduled expiration time and a set of timeout handling information for the transaction in storage local to the first thread; registering, in response to each passing of a fixed time interval, each timer object in the storage local to the first thread for which the scheduled expiration time is earlier than the fixed time interval added to a current time in a timer processing component by adding a pointer referencing the timer object to a data structure managed by the timer processing component; and managing each timer object corresponding to a transaction initiated by the first thread that is not registered in the timer processing component in the storage local to the first thread. The timer processing component regularly processes each timer object referenced by the data structure for which the scheduled expiration time value is not earlier than the current time in accordance with the set of timeout handling information of the timer object. | 12-24-2009 |
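A rough Python sketch of the registration step this abstract describes follows; TimerObject, INTERVAL, and the module-level registry are hypothetical names, and the real method's signal-handling and pointer details are omitted.

    import threading
    import time

    INTERVAL = 1.0                  # the fixed time interval, assumed to be one second

    class TimerObject:
        def __init__(self, expires_at, on_timeout):
            self.expires_at = expires_at    # scheduled expiration time
            self.on_timeout = on_timeout    # timeout handling information

    _local = threading.local()      # storage local to the initiating thread
    _registered = []                # data structure of the timer processing component
    _registered_lock = threading.Lock()

    def start_transaction_timer(timeout, handler):
        """Instantiate a timer in the initiating thread's local storage."""
        timers = getattr(_local, "timers", None)
        if timers is None:
            timers = _local.timers = []
        timers.append(TimerObject(time.monotonic() + timeout, handler))

    def on_interval_tick():
        """Register only timers that could expire before the next tick;
        all others remain managed in thread-local storage."""
        now = time.monotonic()
        timers = getattr(_local, "timers", [])
        due_soon = [t for t in timers if t.expires_at < now + INTERVAL]
        with _registered_lock:
            _registered.extend(due_soon)    # references, akin to the abstract's pointers
        _local.timers = [t for t in timers if t not in due_soon]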
20090320031 | Power state-aware thread scheduling mechanism - A system filter is maintained to track which single-thread cores [or which multi-threaded logical CPUs] are in a low-latency power state. For at least one embodiment, low-latency power states include an active C | 12-24-2009 |
20090328045 | TECHNIQUE FOR FINDING RELAXED MEMORY MODEL VULNERABILITIES - A system and method capable of finding relaxed memory-model vulnerabilities in a computer program caused by running on a machine having a relaxed memory model. A relaxed memory model vulnerability in a computer program includes the presence of program executions that are not sequentially consistent. In one embodiment, non-sequentially consistent executions are detected by exploring sequentially consistent executions. | 12-31-2009 |
20090328046 | METHOD FOR STAGE-BASED COST ANALYSIS FOR TASK SCHEDULING - One embodiment may estimate the processing time of tasks requested by an application by maintaining a state-model for the application. The state model may include states that represent the tasks requested by the application, with each state including the average run-time of each task. In another embodiment, a state model may estimate which task is likely to be requested for processing after the current task is completed by providing edges in the state model connecting the states. Each edge in the state model may track the number of times the application transitions from one task to the next. Over time, data may be gathered representing the percentage of time that each edge is from a state node. Given this information, the scheduler may estimate the CPU cost of the next task based on the current state, the most likely transition, and the cost of the predicted next task. The state model may also track multiple users of the application and modify or create the state model as the users traverse through the state model. | 12-31-2009 |
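The state model the abstract describes (per-state average run times plus transition-counting edges) can be sketched briefly. The Python below is illustrative only; StateModel and its method names are invented for the example.

    from collections import defaultdict

    class StateModel:
        """States hold average run times; edges count task-to-task transitions."""

        def __init__(self):
            self.avg_runtime = {}                   # task -> mean observed run time
            self.run_count = defaultdict(int)
            self.edges = defaultdict(lambda: defaultdict(int))  # task -> next task -> count

        def record(self, task, runtime, next_task=None):
            n = self.run_count[task] + 1
            old = self.avg_runtime.get(task, 0.0)
            self.avg_runtime[task] = old + (runtime - old) / n  # incremental mean
            self.run_count[task] = n
            if next_task is not None:
                self.edges[task][next_task] += 1

        def predict_next_cost(self, current_task):
            """Return (most likely next task, its estimated cost), or None."""
            outgoing = self.edges.get(current_task)
            if not outgoing:
                return None
            likely = max(outgoing, key=outgoing.get)    # most frequent transition
            return likely, self.avg_runtime.get(likely, 0.0)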
20090328047 | DEVICE, SYSTEM, AND METHOD OF EXECUTING MULTITHREADED APPLICATIONS - Device, system, and method of executing multithreaded applications. Some embodiments include a task scheduler to receive application information related to one or more parameters of at least one multithreaded application to be executed by a multi-core processor including a plurality of cores and, based on the application information and based on architecture information related to an arrangement of the plurality of cores, to assign one or more tasks of the multithreaded application to one or more cores of the plurality of cores. Other embodiments are described and claimed. | 12-31-2009 |
20090328048 | Distributed Processing Architecture With Scalable Processing Layers - The present invention is a system on chip architecture having scalable, distributed processing and memory capabilities through a plurality of processing layers. In a preferred embodiment, a distributed processing layer processor comprises a plurality of processing layers, a processing layer controller, and a central direct memory access controller. The processing layer controller manages the scheduling of tasks and distribution of processing tasks to each processing layer. Within each processing layer, a plurality of pipelined processing units (PUs), specially designed for conducting a defined set of processing tasks, are in communication with a plurality of program memories and data memories. One application of the present invention is in a media gateway that is designed to enable the communication of media across circuit switched and packet switched networks. The hardware system architecture of this novel gateway comprises a plurality of DPLPs, referred to as Media Engines, that are interconnected with a Host Processor or Packet Engine, which, in turn, is in communication with interfaces to networks. Each of the PUs within the processing layers of the Media Engines is specially designed to perform a class of media processing specific tasks, such as line echo cancellation, encoding or decoding data, or tone signaling. | 12-31-2009 |
20090328049 | INFORMATION PROCESSING APPARATUS, GRANULARITY ADJUSTMENT METHOD AND PROGRAM - According to one embodiment, an information processing apparatus includes a plurality of execution modules and a scheduler which controls assignment of a plurality of basic modules to the plurality of execution modules. The scheduler includes assigning, when an available execution module which is not assigned any basic modules exists, a basic module which stands by for completion of execution of another basic module to the available execution module, measuring an execution time of processing of the basic module itself, measuring the execution time of processing for assigning the basic module to the execution module, and performing granularity adjustment by linking two or more basic modules to be successively executed according to the restriction of an execution sequence so as to be assigned as one set to the execution module, and redividing the linked two or more basic modules, based on the two measured execution times. | 12-31-2009 |
20100005468 | BLACK-BOX PERFORMANCE CONTROL FOR HIGH-VOLUME THROUGHPUT-CENTRIC SYSTEMS - Throughput of a high-volume throughput-centric computer system is controlled by dynamically adjusting a concurrency level of a plurality of events being processed in the computer system to meet a predetermined target for utilization of one or more resources of the computer system. The predetermined target is less than 100% utilization of said one or more resources. The adjusted concurrency level is validated using one or more queuing models to check that said predetermined target is being met. Parameters are configured for adjusting the concurrency level. The parameters are configured so that said one or more resources are shared with one or more external programs. A statistical algorithm is established that minimizes the total number of samples collected. The samples may be used to measure performance used to further dynamically adjust the concurrency level. A dynamic thread sleeping method is designed to handle systems that need only a very small number of threads to saturate bottleneck resources and hence are sensitive to concurrency level changes. | 01-07-2010 |
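The core feedback step, adjusting concurrency toward a sub-100% utilization target, might look like the following sketch. The function name, bounds, and the simple proportional rule are assumptions; the patent's queuing-model validation and sampling algorithm are not reproduced.

    def adjust_concurrency(current_level, measured_utilization,
                           target_utilization=0.85, min_level=1, max_level=256):
        """One feedback step: scale the number of in-flight events so that
        resource utilization approaches a target below 100%."""
        if measured_utilization <= 0:
            return current_level                    # no signal; keep the level
        scaled = current_level * (target_utilization / measured_utilization)
        return max(min_level, min(max_level, int(round(scaled))))

    # e.g. at 16 concurrent events and 95% utilization with an 85% target,
    # adjust_concurrency(16, 0.95) -> 14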
20100005469 | Method and System for Defining One Flow Models with Varied Abstractions for Scalable lean Implementations - A method and system for representing one or more families of existing processes in a composite abstraction such that process improvement techniques can be implemented in a more scalable manner. The invention enables abstracting a set of pre-defined process models into a composite model that represents sufficient operational details while being compliant with process improvement techniques such as, but not limited to, Lean Six Sigma, Kaizen, and others (collectively “lean” techniques). The invention provides the ability to flexibly represent the operational and lean-related information in varied abstraction levels at different stages of the process as and when necessary. The invention provides the ability to dynamically generate and represent process models based on user-selected defining characteristics (or attributes) used for process “family” formation. This allows users to define process models based on a set of customized attributes deemed critical by that particular user, including the ability to prioritize the selected attributes. | 01-07-2010 |
20100017804 | Thread-to-Processor Assignment Based on Affinity Identifiers - For each thread of a computer program to be executed on a multiple-processor computer system, an affinity identifier is associated with the thread by the computer program. The affinity identifiers of the threads denote how closely related the threads are. For each thread, a processor of the multiple-processor computer system on which the thread is to be executed is selected, based on the affinity identifiers of the threads, by an operating system executing on the multiple-processor computer system, in relation to which the computer program is to be executed. Each thread is then executed by the processor selected for the thread. | 01-21-2010 |
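One plausible reading of affinity-identifier placement is sketched below in Python: threads sharing an affinity identifier land on the same processor. The dictionary-based thread records and round-robin spill are assumptions for the example.

    def assign_threads(threads, num_processors):
        """Place threads that share an affinity identifier on one processor;
        distribute the groups across processors round-robin."""
        groups = {}
        for t in threads:
            groups.setdefault(t["affinity_id"], []).append(t)
        placement = {}
        for i, (_, members) in enumerate(sorted(groups.items())):
            for t in members:
                placement[t["name"]] = i % num_processors
        return placement

    # assign_threads([{"name": "t1", "affinity_id": 7},
    #                 {"name": "t2", "affinity_id": 7},
    #                 {"name": "t3", "affinity_id": 9}], 2) -> {"t1": 0, "t2": 0, "t3": 1}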
20100017805 | DATA PROCESSING APPARATUS, METHOD FOR CONTROLLING DATA PROCESSING APPARATUS, AND COMPUTER-READABLE STORAGE MEDIUM - When a plurality of jobs are processed using a plurality of data processing units, data formats of the jobs to be processed can be determined to distribute the data processing load of the data processing units. A method for controlling a data processing apparatus for causing a plurality of data processing units to process data of a job includes storing data of a first job in a storing unit in first and second data formats, determining whether to process the stored data of the first job in the first or second data format, and causing the plurality of data processing units to process the data in the determined data format. The determination is made based on whether processing of data of a second job by the first or second processing unit requires a longer time. | 01-21-2010 |
20100023946 | USER-LEVEL READ-COPY UPDATE THAT DOES NOT REQUIRE DISABLING PREEMPTION OR SIGNAL HANDLING - A user-level read-copy update (RCU) technique. A user-level RCU subsystem executes within threads of a user-level multithreaded application. The multithreaded application may include reader threads that read RCU-protected data elements in a shared memory and updater threads that update such data elements. The reader and updater threads may be preemptible and comprise signal handlers that process signals. Reader registration and unregistration components in the RCU subsystem respectively register and unregister the reader threads for RCU critical section processing. These operations are performed while the reader threads remain preemptible and with their signal handlers being operational. A grace period detection component in the RCU subsystem considers a registration status of the reader threads and determines when it is safe to perform RCU second-phase update processing to remove stale versions of updated data elements that are being referenced by the reader threads, or take other RCU second-phase update processing actions. | 01-28-2010 |
20100031262 | Program Schedule Sub-Project Network and Calendar Merge - A master project file and one or more sub-project files are merged to form a merged master project file while avoiding date shifting and pointers to external files, accommodating equally named resources and calendars, and accommodating split tasks which may otherwise be caused by differing settings or defaults for files created or modified on different processors, or by other incompatibility between the master project file and sub-project files. This is achieved by copying data reconstructed from the original settings, defaults and the like of the original sub-project file to descriptive fields in the merged master project file to resolve settings which must match between the master project file and the sub-project file, while altering names of tasks or files as necessary and validating merged task data against the copied data. | 02-04-2010 |
20100031263 | PROCESS MODEL LEAN NOTATION - A process model lean notation provides an easy to understand way to categorize the process elements of a process using a process definition grammar. Process model lean notation allows an organization to rapidly identify the process elements of a process and the interactions between the process elements, and produces a process categorization that includes an ordered sequence of the process elements. A process categorization provides a structured presentation of the process elements and clearly indicates for each process element the task accomplished, the actor responsible for and/or performing the task, the tool that may be used to perform the task, and the work product that may result by performing the task. | 02-04-2010 |
20100031264 | MANAGEMENT APPARATUS AND METHOD FOR CONTROLLING THE SAME - A management apparatus for managing a production apparatus that executes a plurality of processes in accordance with a production plan detects an amount of a release-forgotten memory area that is kept allocated on a memory of the production apparatus by each process even after completion of the process. The management apparatus determines an amount of remaining memory based on the detected amount of the release-forgotten memory area and retrieves a process executable with the amount of remaining memory. The management apparatus determines a process to be executed next in accordance with either a result of the retrieval executed based on the detected amount of the release-forgotten memory area or the production plan and controls the production apparatus to execute the determined process. | 02-04-2010 |
20100037225 | WORKLOAD ROUTING BASED ON GREENNESS CONDITIONS - Workload requests are routed in response to server greenness conditions. A workload request is received for a remotely invocable computing service executing separately in different remotely and geographically dispersed host computing servers. Greenness conditions pertaining to production or conservation of energy based upon external factors for each of the different remotely and geographically dispersed host computing servers are determined. The workload request is routed to one of the different remotely and geographically dispersed host computing servers based upon the determined greenness conditions. | 02-11-2010 |
20100037226 | GROUPING AND DISPATCHING SCANS IN CACHE - A method, system, and computer program product for grouping and dispatching scans in a cache directory of a processing environment are provided. A plurality of scan tasks is aggregated from a scan wait queue into a scan task queue. The plurality of scan tasks is determined by selecting one of (1) each of the plurality of scan tasks on the scan wait queue, (2) a predetermined number of the plurality of scan tasks on the scan wait queue, and (3) a set of scan tasks of a similar type on the scan wait queue. A first scan task from the plurality of scan tasks is selected from the scan task queue. The selected scan task is performed. | 02-11-2010 |
20100037227 | METHOD FOR DIGITAL PHOTO FRAME TASK SCHEDULE - A method for executing a task schedule on a digital photo frame (DPF) is disclosed. The method includes loading a task configuration file comprising at least one task capable of being executed at any given time, reading a current time from a clock within the DPF, checking whether a task is waiting to be executed, executing that task if one is waiting, and repeating the current-time reading step, after a wait time, if no tasks have been scheduled for current execution. | 02-11-2010 |
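The described loop is simple enough to sketch directly. The following Python is a hypothetical rendering; the tasks.json format, the minute-granularity clock comparison, and the done flag are all assumptions.

    import json
    import time

    def run_schedule(config_path="tasks.json", wait_time=30):
        """Load the task configuration file, then repeatedly compare the
        device clock against the scheduled times."""
        with open(config_path) as f:
            tasks = json.load(f)        # e.g. [{"at": "07:00", "action": "slideshow"}]
        while True:
            now = time.strftime("%H:%M")            # current time from the DPF clock
            due = [t for t in tasks if t["at"] == now and not t.get("done")]
            for task in due:
                print("executing", task["action"])  # stand-in for the real task
                task["done"] = True
            if not due:
                time.sleep(wait_time)               # re-read the clock after a wait time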
20100043000 | High Accuracy Timer - Technologies for a high-accuracy timer in a task-based, multi-processor computing system without using dedicated hardware timer resources. | 02-18-2010 |
20100043001 | METHOD FOR CREATING AN OPTIMIZED FLOWCHART FOR A TIME-CONTROLLED DISTRIBUTION COMPUTER SYSTEM - A method is described and presented for creation of an optimized schedule (P) for execution of a functionality by means of a time-controlled distributed computer system, in which the distributed computer system and the functionality have a set of (especially structural and functional) elements (e | 02-18-2010 |
20100043002 | WORKFLOW HANDLING APPARATUS, WORKFLOW HANDLING METHOD AND IMAGE FORMING APPARATUS - A workflow handling apparatus includes an activity storage unit which stores various activities forming a workflow, a workflow configuration storage unit which stores information about an existing workflow including each of the activities, an information type storage unit which stores data on an information type used in the existing workflow, a request storage unit which stores a new workflow created on the basis of a processing request to the workflow handling apparatus, the new workflow being connected with a processing corresponding to the various activities, an information type determination unit which determines an information type used in the new workflow, a determination unit which determines a degree of similarity between the information type used in the new workflow and the information type used in the existing workflow, and a workflow extraction unit which extracts an existing workflow having the degree of similarity equal to or greater than a predetermined value. | 02-18-2010 |
20100050177 | Method and apparatus for content based searching - The scheduling of multiple requests to be processed by a number of deterministic finite automata-based graph thread engine (DTE) workstations is handled by a novel scheduler. The scheduler may select an entry from an instruction in a content search apparatus. Using attribute information from the selected entry, the scheduler may thereafter analyze a dynamic scheduling table to obtain placement information. The scheduler may determine an assignment of the entry, using the placement information, that may limit cache thrashing and head-of-line blocking occurrences. Each DTE workstation may include normalization capabilities. Additionally, the content searching apparatus may employ an address memory scheme that may prevent memory bottleneck issues. | 02-25-2010 |
20100058346 | Assigning Threads and Data of Computer Program within Processor Having Hardware Locality Groups - A computer program having threads and data is assigned to a processor having processor cores and memory organized over hardware locality groups. The computer program is profiled to generate a data thread interaction graph (DTIG) representing the computer program. The threads and the data of the computer program are organized over clusters using the DTIG and based on one or more constraints. The DTIG is displayed to a user, and the user is permitted to modify the constraints such that the threads and the data of the computer program are reorganized over the clusters. Each cluster is mapped onto one of the hardware locality groups. The computer program is regenerated based on the mappings of clusters to hardware locality groups. At run-time, optimizations are performed to improve execution performance while the computer program is executed. | 03-04-2010 |
20100064286 | DATA AFFINITY BASED SCHEME FOR MAPPING CONNECTIONS TO CPUS IN I/O ADAPTER - A method, system and computer program product are disclosed for scheduling data packets in a multi-processor system comprising a plurality of processor units and a multitude of multicast groups. The method comprises associating one of the processor units with each of the multicast groups, receiving a multitude of data packets from the multicast groups, and scheduling all of the data packets received from each of the multicast groups for processing by the one of the processor units associated with said each of the multicast groups. In one embodiment, scheduling is based on affinity of both transmit and receive processing for multiple connections to a processor unit. In another embodiment, a system call is provided for transmitting the same data over multiple sockets. Additional system calls may be used for building multicast group socket lists. | 03-11-2010 |
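A minimal sketch of the group-to-processor association might look like this; the hash-based assignment and the packet/group field names are assumptions (any stable mapping would do), and transmit-side affinity is not shown.

    group_to_cpu = {}   # multicast group -> associated processor unit

    def cpu_for_group(group_id, num_cpus):
        """Associate each multicast group with one processor unit, so every
        packet of the group is processed with cache affinity."""
        if group_id not in group_to_cpu:
            group_to_cpu[group_id] = hash(group_id) % num_cpus  # stable within a run
        return group_to_cpu[group_id]

    def schedule(packets, num_cpus=4):
        """Return per-CPU batches; all packets of a group land on its CPU."""
        batches = [[] for _ in range(num_cpus)]
        for pkt in packets:
            batches[cpu_for_group(pkt["group"], num_cpus)].append(pkt)
        return batches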
20100064287 | Scheduling control within a data processing system - A processor | 03-11-2010 |
20100064288 | IMAGE PROCESSING APPARATUS, APPLICATION STARTUP MANAGEMENT METHOD, AND STORAGE MEDIUM STORING CONTROL PROGRAM THEREFOR - An image processing apparatus that, when receiving a startup request, enables an application that is required to start by stopping an application that is not used by a user, thereby reserving available memory capacity. A first determination unit determines a memory usage of the application that receives the startup instruction. A second determination unit determines an available memory capacity. A third determination unit determines an application that is not used by a user. A stopping unit stops the application determined by the third determination unit among the executing applications when the memory usage determined by the first determination unit is more than the available memory capacity determined by the second determination unit. A starting unit starts the application that received the startup instruction using the available memory capacity freed when the stopping unit stops the determined application. | 03-11-2010 |
20100064289 | INFORMATION PROCESSING METHOD, APPARATUS, AND SYSTEM FOR CONTROLLING COMPUTER RESOURCES, CONTROL METHOD THEREFOR, STORAGE MEDIUM, AND PROGRAM - An operation request from a process or OS for computer resource(s) managed by the OS, such as a file, network, storage device, display screen, or external device, is trapped before access to the computer resource. It is determined whether an access right for the computer resource designated by the trapped operation request is present. If the access right is present, the operation request is transferred to the operating system, and a result from the OS is returned to the request source process. If no access right is present, the operation request is denied, or the request is granted by charging in accordance with the contents of the computer resource. | 03-11-2010 |
20100070975 | DETERMINING THE PROCESSING ORDER OF A PLURALITY OF EVENTS - A method for operating a multi-threading computational system includes: identifying related events; allocating the related events to a first thread; allocating unrelated events to one or more second threads; wherein the events allocated to the first thread are executed in sequence and the events allocated to the one or more second threads are executed in parallel to execution of the first thread. A system for allocating incoming events among operational groups to create a multi-threaded computation process includes: incoming events; an event processing system configured to receive the incoming events; an event key generator within the event processing system, the event key generator being configured to generate event keys at run time, the event keys being associated with the incoming events; and a thread dispatcher, the thread dispatcher allocating the incoming events among the operational groups according to the associated incoming event keys. | 03-18-2010 |
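The key-based dispatch can be sketched as follows: events that share a generated key drain in order on one thread, while distinct keys proceed in parallel. Treating a shared order_id as the relatedness criterion is purely an assumed example.

    import threading
    from queue import Queue

    def handle(event):
        print("processed", event)       # stand-in for real event processing

    def make_event_key(event):
        """Hypothetical key generator: events sharing an order_id are related."""
        return event["order_id"]

    def dispatch(events):
        """Related events drain in sequence on one thread; unrelated event
        groups run in parallel on separate threads."""
        queues = {}
        for e in events:
            queues.setdefault(make_event_key(e), Queue()).put(e)

        def drain(q):
            while not q.empty():
                handle(q.get())         # in-sequence within one key

        threads = [threading.Thread(target=drain, args=(q,)) for q in queues.values()]
        for t in threads:
            t.start()
        for t in threads:
            t.join()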
20100070976 | METHOD FOR CONTROLLING IMAGE PROCESSING APPARATUS WITH WHICH JOB EXECUTION HISTORY IS READILY CHECKED, AND RECORDING MEDIUM - Whether a job execution instruction has been issued or not is determined. When it is determined that the job execution instruction has been issued, a job ID is issued. Contents of the job in accordance with the job execution instruction are checked. Then, whether the job is a Scan to USB memory job in an emulation mode or not is determined. When it is determined that the job is the Scan to USB memory job in the emulation mode, a sub job ID corresponding to the issued job ID is issued. | 03-18-2010 |
20100083258 | SCHEDULING EXECUTION CONTEXTS WITH CRITICAL REGIONS - A scheduler in a process of a computer system detects an execution context that blocked from outside of the scheduler while in a critical region. The scheduler ensures that the execution context resumes execution on the processing resource of the scheduler on which the execution context blocked when the execution context becomes unblocked. The scheduler also prevents another execution context from entering a critical region on the processing resource prior to the blocked execution context becoming unblocked and exiting the critical region. | 04-01-2010 |
20100083259 | DIRECTING DATA UNITS TO A CORE SUPPORTING TASKS - A computer system may comprise a plurality of cores that may process the tasks determined by the operating system. A network device may direct a first set of packets to a first core using a flow-spreading technique such as receive side scaling (RSS). However, the operating system may re-provision a task from the first core to a second core to balance the load, for example, on the computer system. The operating system may determine an identifier of the second core using a new data field in the socket calls to track the identifier of the second core. The operating system may provide the identifier of the second core to a network device. The network device may then direct a second set of packets to the second core using the identifier of the second core. | 04-01-2010 |
20100083260 | METHODS AND SYSTEMS TO PERFORM A COMPUTER TASK IN A REDUCED POWER CONSUMPTION STATE - Methods and systems to perform a computer task in a reduced power consumption state, including to virtualize physical resources with respect to an operating environment and service environment, to exit the operating environment and enter the service environment, to place a first set of one or more of the physical resources in a reduced power consumption state, and to perform a task in the service environment utilizing a processor and a second set of one or more of the physical resources. A physical resource may be assigned to an operating environment upon an initialization of the operating environment, and re-assigned to the service environment to be utilized by the service environment while other physical resources are placed in a reduced power consumption state. | 04-01-2010 |
20100083261 | INTELLIGENT CONTEXT MIGRATION FOR USER MODE SCHEDULING - Embodiments for performing directed switches between user mode schedulable (UMS) thread and primary threads are disclosed. In accordance with one embodiment, a primary thread user portion is switched to a UMS thread user portion so that the UMS thread user portion is executed in user mode via the primary thread user portion. The primary thread is then transferred into kernel mode via an implicit switch. A kernel portion of the UMS thread is then executed in kernel mode using the context information of a primary thread kernel portion. | 04-01-2010 |
20100083262 | Scheduling Requesters Of A Shared Storage Resource - To schedule workloads of requesters of a shared storage resource, a scheduler specifies relative fairness for the requesters of the shared storage resource. In response to the workloads of the requesters, the scheduler modifies performance of the scheduler to deviate from the specified relative fairness to improve input/output (I/O) efficiency in processing the workloads at the shared storage resource. | 04-01-2010 |
20100083263 | RESOURCE INFORMATION COLLECTING DEVICE, RESOURCE INFORMATION COLLECTING METHOD, PROGRAM, AND COLLECTION SCHEDULE GENERATING DEVICE - A condition storage unit ( | 04-01-2010 |
20100088704 | META-SCHEDULER WITH META-CONTEXTS - A process in a computer system creates and uses a meta-scheduler with meta-contexts that execute on meta-virtual processors. The meta-scheduler includes a set of schedulers with scheduler-contexts that execute on virtual processors. The meta-scheduler schedules the scheduler-contexts on the meta-contexts and schedules the meta-contexts on the meta-virtual processors which execute on execution contexts associated with hardware threads. | 04-08-2010 |
20100088705 | Call Stack Protection - Call stack protection, including executing at least one application program on the one or more computer processors, including initializing threads of execution, each thread having a call stack, each call stack characterized by a separate guard area defining a maximum extent of the call stack, dispatching one of the threads of the process, including loading a guard area specification for the dispatched thread's call stack guard area from thread context storage into address comparison registers of a processor; determining by use of address comparison logic in dependence upon a guard area specification for the dispatched thread whether each access of memory by the dispatched thread is a precluded access of memory in the dispatched thread's call stack's guard area; and effecting by the address comparison logic an address comparison interrupt for each access of memory that is a precluded access of memory in the dispatched thread's guard area. | 04-08-2010 |
20100107166 | SCHEDULER FOR PROCESSOR CORES AND METHODS THEREOF - A data processing device assigns tasks to processor cores in a more distributed fashion. In one embodiment, the data processing device can schedule tasks for execution amongst the processor cores in a pseudo-random fashion. In another embodiment, the processor core can schedule tasks for execution amongst the processor cores based on the relative amount of historical utilization of each processor core. In either case, the effects of bias temperature instability (BTI) resulting from task execution are distributed among the processor cores in a more equal fashion than if tasks are scheduled according to a fixed order. Accordingly, the useful lifetime of the processor unit can be extended. | 04-29-2010 |
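Both scheduling variants the abstract mentions, pseudo-random selection and least-historical-utilization selection, fit in a few lines. The sketch below is illustrative; tracking busy time as a proxy for utilization (and thus BTI stress) is an assumption.

    import random

    class WearAwareScheduler:
        """Pick a core pseudo-randomly, or pick the historically
        least-utilized core, so BTI stress spreads across cores."""

        def __init__(self, num_cores, pseudo_random=False):
            self.busy_time = [0.0] * num_cores      # utilization history per core
            self.pseudo_random = pseudo_random

        def pick_core(self):
            if self.pseudo_random:
                return random.randrange(len(self.busy_time))
            return min(range(len(self.busy_time)), key=self.busy_time.__getitem__)

        def run_task(self, duration):
            core = self.pick_core()
            self.busy_time[core] += duration        # record the new utilization
            return core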
20100107167 | MULTI-CORE SOC SYNCHRONIZATION COMPONENT - The present invention discloses a multi-core SOC synchronization component, which comprises a key administration module, a thread schedule unit supporting data synchronization and thread administration, and an expansion unit serving to expand the memory capacity of the key administration module. The present invention can improve interconnect traffic and prevent interconnect blocking. The present invention can function as a standard interface between different components. Thus, the present invention can solve the synchronization problem and effectively accelerate product design. | 04-29-2010 |
20100115521 | MEDIATION SERVER, TERMINALS AND DISTRIBUTED PROCESSING METHOD - A highly convenient data processing technique is provided. | 05-06-2010 |
20100122255 | ESTABLISHING FUTURE START TIMES FOR JOBS TO BE EXECUTED IN A MULTI-CLUSTER ENVIRONMENT - Start times are determined for jobs to be executed in the future in a multi-cluster environment. The start times are, for instance, the earliest start times in which the jobs may be executed. The start times are computed in logarithmic time, providing processing efficiencies for the multi-cluster environment. Processing efficiencies are further realized by employing parallel processing in determining the start times. | 05-13-2010 |
20100122256 | Scheduling Work in a Multi-Node Computer System Based on Checkpoint Characteristics - Efficient application checkpointing uses checkpointing characteristics of a job to determine how to schedule jobs for execution on a multi-node computer system. A checkpoint profile in the job description includes information on the expected frequency and duration of a checkpoint cycle for the application. The checkpoint profile may be based on user/administrator input as well as historical information. The job scheduler will attempt to group applications (jobs) that have the same checkpoint profile on the same nodes or group of nodes. Additionally, the job scheduler may control when new jobs start based on when the next checkpoint cycle(s) are expected. The checkpoint monitor will monitor the checkpoint cycles, updating the checkpoint profiles of running jobs. The checkpoint monitor will also keep track of an overall system checkpoint profile to determine the available checkpointing capacity before scheduling jobs on the cluster. | 05-13-2010 |
20100122257 | ELECTRONIC DEVICE AND ELECTRONIC DEVICE CONTROL METHOD - When a request is made for execution of a new application while another application is being executed or interrupted, and a judgment unit ( | 05-13-2010 |
20100122258 | VERSIONING AND EFFECTIVITY DATES FOR ORCHESTRATION BUSINESS PROCESS DESIGN - Particular embodiments generally relate to the orchestration of an order fulfillment business process using effectivity dates and versioning. In one embodiment, a plurality of services in the order fulfillment business process are provided. A definition of a business process including one or more services is received from an interface. The one or more services may be defined in steps to be performed in the order fulfillment business process. An effectivity date associated with the definition is also received from the interface. For example, the effectivity date may be associated with the business process or individual steps in the business process and may specify a period of time during which the process or step can be used. The effectivity dates and versioning may then be enforced at run-time. | 05-13-2010 |
20100122259 | Multithreaded kernel for graphics processing unit - Systems and methods are provided for scheduling the processing of a coprocessor whereby applications can submit tasks to a scheduler, and the scheduler can determine how much processing each application is entitled to as well as an order for processing. In connection with this process, tasks that require processing can be stored in physical memory or in virtual memory that is managed by a memory manager. The invention also provides various techniques of determining whether a particular task is ready for processing. A “run list” may be employed to ensure that the coprocessor does not waste time between tasks or after an interruption. The invention also provides techniques for ensuring the security of a computer system, by not allowing applications to modify portions of memory that are integral to maintaining the proper functioning of system operations. | 05-13-2010 |
20100125847 | JOB MANAGING DEVICE, JOB MANAGING METHOD AND JOB MANAGING PROGRAM - A job managing device distributes jobs to be processed to a plurality of calculation devices. The job managing device includes an information obtaining unit that obtains at least one of characteristic information or load information of the plurality of calculation devices, a job size determining unit that determines a job size to be allocated to each of the plurality of calculation devices based on the information obtained by the information obtaining unit, a job dividing unit that divides a job to be processed into divided jobs based on the job sizes determined by the job size determining unit, and a job distributing unit that distributes the divided jobs to the plurality of calculation devices. | 05-20-2010 |
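One way the job-size determination could combine characteristic and load information is a weighted split, sketched below; the speed and load fields and the proportional rule are assumptions, not the patent's formula.

    def determine_job_sizes(total_size, devices):
        """Split a job in proportion to each device's capability, discounted
        by its current load; 'speed' and 'load' are assumed fields."""
        weights = [d["speed"] * (1.0 - d["load"]) for d in devices]
        total_w = sum(weights) or 1.0
        sizes = [int(total_size * w / total_w) for w in weights]
        sizes[-1] += total_size - sum(sizes)    # hand any rounding remainder to the last device
        return sizes

    # determine_job_sizes(1000, [{"speed": 2, "load": 0.5},
    #                            {"speed": 1, "load": 0.0}]) -> [500, 500]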
20100131954 | ELECTRONIC DEVICE AND CONTROL METHOD THEREOF, DEVICE AND CONTROL METHOD THEREOF, INFORMATION PROCESSING APPARATUS AND DISPLAY CONTROL METHOD THEREOF, IMAGE FORMING APPARATUS AND OPERATION METHOD THEREOF, AND PROGRAM AND STORAGE MEDIUM - In a device having a capability of using time data acquired from an external time information generator, a notification unit notifies a user of time information. The notification unit also notifies the user whether the notified time information is based on time data acquired from the external time information generator. Processing performed by the device is restricted depending on a status associated with time information. Although some types of processing are allowed when the device is in a status in which the time information is based on the time data acquired from the external time information generator, the same types of processing are disabled when the device is in any other status associated with time information. | 05-27-2010 |
20100138837 | ENERGY BASED TIME SCHEDULER FOR PARALLEL COMPUTING SYSTEM - A system, computer readable medium and method for reducing an energy consumption in a parallel computing system that includes plural resources. The method includes receiving a computing job to be performed by the parallel computing system, determining a number of resources of the plural resources to be used for performing the computing job by searching a preset table stored in the parallel computing system, wherein the preset table is populated prior to determining the number of resources, and distributing the computing job to the determined number of resources. | 06-03-2010 |
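The preset-table lookup reduces, in the simplest reading, to a dictionary populated ahead of time. The table contents, bucketing rule, and fallback below are invented for illustration.

    # Preset table, populated before scheduling: (job type, size bucket) -> resource count.
    PRESET_TABLE = {
        ("fft", "small"): 4,        # entries invented for illustration
        ("fft", "large"): 32,
        ("sort", "small"): 2,
    }

    def resources_for(job):
        """Look up how many resources the job should receive; unknown jobs
        fall back to a single resource."""
        bucket = "large" if job["size"] > 1_000_000 else "small"
        return PRESET_TABLE.get((job["type"], bucket), 1)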
20100138838 | METHOD FOR EXECUTING SCHEDULED TASK - A scheduled task executing method is used in a computer system and a peripheral device. The computer system has a time generator for generating time information and a memory. When the computer system is in a working state, a user input interface is provided, a scheduled time is set via the user input interface, and the scheduled time is automatically stored in the memory. When the computer system is in a power off state, electricity is continuously supplied to the time generator and the memory. If the time information and the scheduled time comply with a specified relation, a power control signal is generated. In response to the power control signal, the computer system is switched from the power off state to the working state. When the computer system is in the working state, the peripheral device is activated to execute a scheduled task item corresponding to the scheduled time. | 06-03-2010 |
20100138839 | MULTIPROCESSING SYSTEM AND METHOD - A multiprocessing system executes a plurality of processes concurrently. A process execution circuit ( | 06-03-2010 |
20100153954 | Apparatus and Methods for Adaptive Thread Scheduling on Asymmetric Multiprocessor - Techniques for adaptive thread scheduling on a plurality of cores for reducing system energy are described. In one embodiment, a thread scheduler receives leakage current information associated with the plurality of cores. The leakage current information is employed to schedule a thread on one of the plurality of cores to reduce system energy usage. On-chip calibration of the sensors is also described. | 06-17-2010 |
20100153955 | SAVING PROGRAM EXECUTION STATE - Techniques are described for managing distributed execution of programs. In at least some situations, the techniques include decomposing or otherwise separating the execution of a program into multiple distinct execution jobs that may each be executed on a distinct computing node, such as in a parallel manner with each execution job using a distinct subset of input data for the program. In addition, the techniques may include temporarily terminating and later resuming execution of at least some execution jobs, such as by persistently storing an intermediate state of the partial execution of an execution job, and later retrieving and using the stored intermediate state to resume execution of the execution job from the intermediate state. Furthermore, the techniques may be used in conjunction with a distributed program execution service that executes multiple programs on behalf of multiple customers or other users of the service. | 06-17-2010 |
20100153956 | Multicore Processor And Method Of Use That Configures Core Functions Based On Executing Instructions - A multiprocessor system having plural heterogeneous processing units schedules instruction sets for execution on a selected of the processing units by matching workload processing characteristics of processing units and the instruction sets. To establish an instruction set's processing characteristics, the homogeneous instruction set is executed on each of the plural processing units with one or more performance metrics tracked at each of the processing units to determine which processing unit most efficiently executes the instruction set. Instruction set workload processing characteristics are stored for reference in scheduling subsequent execution of the instruction set. | 06-17-2010 |
20100153957 | SYSTEM AND METHOD FOR MANAGING THREAD USE IN A THREAD POOL - A method and system for managing a thread pool of a plurality of first type threads and a plurality of second type threads in a computer system using a thread manager, specifically, a method for prioritizing, cancelling, balancing the work load between first type threads and second type threads, and avoiding deadlocks in the thread pool. A queue stores a first type task and a second type task, the second type task being executable by at least one of the plurality of second type threads. The availability of at least one of the plurality of first type threads is determined, and if none are available, the availability of at least one of the plurality of second type threads is determined. An available second type thread is selected to execute the first type task. | 06-17-2010 |
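The fallback rule, prefer a first-type thread and borrow a second-type thread when none is free, can be sketched as below. TwoTierPool is hypothetical, and the semaphore only approximates idle-thread tracking; the patent's prioritization, cancellation, and deadlock-avoidance logic are not shown.

    import threading
    from queue import Queue

    class TwoTierPool:
        """Prefer a first-type thread for a first-type task; borrow a
        second-type thread when none of the first type is available."""

        def __init__(self, n_first=2, n_second=2):
            self.first_q, self.second_q = Queue(), Queue()
            self.first_idle = threading.Semaphore(n_first)   # approximate idle count
            for q, n in ((self.first_q, n_first), (self.second_q, n_second)):
                for _ in range(n):
                    threading.Thread(target=self._worker, args=(q,), daemon=True).start()

        def _worker(self, q):
            while True:
                task = q.get()
                task()
                q.task_done()

        def submit_first_type(self, task):
            if self.first_idle.acquire(blocking=False):
                def wrapped():
                    try:
                        task()
                    finally:
                        self.first_idle.release()
                self.first_q.put(wrapped)
            else:
                self.second_q.put(task)     # balance work onto second-type threads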
20100162251 | SYSTEM, METHOD, AND COMPUTER-READABLE MEDIUM FOR CLASSIFYING PROBLEM QUERIES TO REDUCE EXCEPTION PROCESSING - A system, method, and computer-readable medium that facilitate classification of database requests as problematic based on estimated processing characteristics of the request are provided. Estimated processing characteristics may include estimated skew including central processing unit skew and input/output operation skew, central processing unit duration per input/output operation, and estimated memory usage. The estimated processing characteristics are made on a request step basis. The request is classified as problematic responsive to determining one or more of the estimated characteristics of a request step exceed a corresponding threshold. In this manner, mechanisms for predicting bad query behavior are provided. Workload management of those requests may then be more successfully provided through workload throttles, filters, or even a more confident exception detection that correlates with the estimated bad behavior. | 06-24-2010 |
20100162252 | SYSTEM AND METHOD FOR SHIFTING WORKLOADS ACROSS PLATFORM IN A HYBRID SYSTEM - A system and associated method for shifting workloads across platform in a hybrid system. A first kernel governing a first platform of the hybrid system starts a process that is executable in a second platform of the hybrid system. The first kernel requests a second kernel governing the second platform to create a duplicate process of the process such that the process is executed in the second platform. The process represents the duplicate process in the first platform without consuming clock cycles of the first platform. During an execution of the duplicate process in the second platform, the first kernel services an I/O request of the duplicate process that is transferred from the second kernel to the first kernel. When the duplicate process is terminated, the process in the first platform is removed first before the duplicate process releases resources. | 06-24-2010 |
20100162253 | Real-time scheduling method and central processing unit based on the same - A central processing unit (CPU) and a real-time scheduling method applicable in the CPU are disclosed. The CPU may determine a first task set and a second task set from among assigned tasks, schedule the determined first task set in a single core to enable the task to be processed, and schedule the determined second task set in a multi-core to enable the task to be processed. | 06-24-2010 |
20100162254 | Apparatus and Method for Persistent Report Serving - A computer-readable medium is configured to receive a report processing request at a hierarchical report processor. The hierarchical report processor includes a parent process and at least one child process executing on a single processing unit, and is configured to process the report processing request as a task on the single processing unit. | 06-24-2010 |
20100169887 | Apparatus and Method for Parallel Processing of A Query - A computer readable storage medium comprises executable instructions to receive a query. A graph is built to represent jobs associated with the query. The jobs are assigned to parallel threads according to the graph. | 07-01-2010 |
20100169888 | VIRTUAL PROCESS COLLABORATION - Methods, apparatuses, and systems are presented for automating organization of multiple processes involving maintaining a uniform record of process threads using at least one server, each process thread comprising a representation of a collaborative process capable of involving a plurality of users, enabling at least one of the plurality of users to carry out a user action while interacting with one of a plurality of different types of application programs, and modifying at least one process thread in the uniform record of process threads in response to the user action carried out by the user. Modifying the at least one process thread may comprise generating the at least one process thread as a new process thread. Alternatively or in addition, modifying the at least one process thread may comprise modifying the at least one process thread as an existing process thread. At least one of the process threads may reflect user actions carried out by more than one of the plurality of users. | 07-01-2010 |
20100169889 | MULTI-CORE SYSTEM - A multi-core system includes: a first core that writes first data by execution of a first program, wherein the first core gives write completion notice after completion of the writing; a second core that refers to the written first data by execution of a second program; and a scheduler that instructs the second core to start the execution of the second program before the execution of the first program is completed when the scheduler is given the write completion notice from the first core by the execution of the first program. | 07-01-2010 |
20100175065 | WORKFLOW MANAGEMENT DEVICE, WORKFLOW MANAGEMENT METHOD, AND PROGRAM - This invention is directed to a workflow execution method capable of allocating a necessary license in accordance with the workflow contents and the license states of all task processing devices capable of executing a task, and preferentially utilizing the license in task execution in a cooperative task processing system capable of executing a plurality of tasks for document data as a workflow by a plurality of task processing devices. | 07-08-2010 |
20100175066 | SYSTEMS ON CHIP WITH WORKLOAD ESTIMATOR AND METHODS OF OPERATING SAME - A system on chip (SOC) includes a processor circuit configured to receive instruction information from an external source and to execute an instruction according to the received instruction information and a workload estimator circuit configured to monitor instruction codes executed in the processor circuit, to generate an estimate of a workload of the processor circuit based on the monitored instruction codes and to generate a power supply voltage control signal based on the estimate of the workload. The SOC may further include a power management integrated circuit (PMIC) configured to receive the control signal and to adjust a power supply voltage provided to the SOC in response to the control signal. | 07-08-2010 |
20100192151 | Method for arranging schedules and computer using the same - A method for arranging schedules and a computer using the method are disclosed. The method comprises the steps of: recording a user behavior record in a predetermined time interval; filtering the user behavior record to generate an effective user behavior record; and generating a schedule according to the effective user behavior record. | 07-29-2010 |
20100192152 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM - An information processing device which has a plurality of process units for performing various kinds of processes includes a detecting unit that detects the processing loads of the process units; a determining unit that determines whether a total amount of the processing loads detected by the detecting unit is equal to or larger than a specific value; a designating unit that designates a process unit having a process state to be controlled, based on the processing loads of the process units detected by the detecting unit, when the determining unit determines that the total amount is equal to or larger than the specific value; a process identifying unit that identifies a process having an execution state to be controlled among processes being performed by the process unit designated by the designating unit; and a control unit that controls the execution state of the process identified by the process identifying unit. | 07-29-2010 |
20100199280 | SAFE PARTITION SCHEDULING ON MULTI-CORE PROCESSORS - One embodiment is directed to a method of generating a set of schedules for use by a partitioning kernel to execute a plurality of partitions on a plurality of processor cores included in a multi-core processor unit. The method includes determining a duration to execute each of the plurality of partitions without interference and generating a candidate set of schedules using the respective duration for each of the plurality of partitions. The method further includes estimating how much interference occurs for each partition when the partitions are executed on the multi-core processor unit using the candidate set of schedules and generating a final set of schedules by, for at least one of the partitions, scaling the respective duration in order to account for the interference for that partition. The method further includes configuring the multi-core processor unit to use the final set of schedules to control the execution of the partitions using at least two of the cores. | 08-05-2010 |
20100199281 | Managing the Processing of Processing Requests in a Data Processing System Comprising a Plurality of Processing Environments - Processing requests may be routed between a plurality of runtime environments, based on whether or not program(s) required for completion of the processing requests is/are loaded in a given runtime environment. Cost measures may be used to compare costs of processing a request in a local runtime environment and of processing the request at a non-local runtime environment. | 08-05-2010 |
20100205604 | SYSTEMS AND METHODS FOR EFFICIENTLY RUNNING MULTIPLE INSTANCES OF MULTIPLE APPLICATIONS - A system and method for managing multiple instances of a software application running on a single operating system is described. The system may be a server which hosts multiple copies of the same software application running in real time within a framework. The framework prevents the multiple copies of the application from interfering with one another. | 08-12-2010 |
20100205605 | SCHEDULING METHOD AND SYSTEM - A scheduling method and system. The method includes receiving by a computing system first data and second data associated with a user. The first data comprises user identification, an activity selection for an activity, and first scheduling information. The second data comprises geographical preference data. The computing system determines facilities associated with the activity. The facilities are located within boundaries specified by the geographical preference data. The computing system generates tentative reservations for the user at each facility. The computing system presents the tentative reservations data to the user. The computing system receives verification data from the user. The computing system posts the tentative reservations data in a social networking environment. The computing system stores the tentative reservations data. | 08-12-2010 |
20100205606 | SYSTEM AND METHOD FOR EXECUTING A COMPLEX TASK BY SUB-TASKS - A system, device and method for performing a task by sub-tasks are provided. A number of sub-tasks may be selected for execution and an execution order may be determined. A prologue for a preceding sub-task and an epilogue for a subsequent task may be executed. The same prologue and epilogue may be used for a number of sub-tasks pairs. Executing the prologue and epilogue may enable consecutive execution of sub-tasks. Other embodiments are described and claimed. | 08-12-2010 |
20100211953 | MANAGING TASK EXECUTION - Managing task execution includes: receiving a specification of a plurality of tasks to be performed by respective functional modules; processing a flow of input data using a dataflow graph that includes nodes representing data processing components connected by links representing flows of data between data processing components; in response to at least one flow of data provided by at least one data processing component, generating a flow of messages; and in response to each of the messages in the flow of messages, performing an iteration of a set of one or more tasks using one or more corresponding functional modules. | 08-19-2010 |
20100218190 | PROCESS MAPPING IN PARALLEL COMPUTING - A method of mapping processes to processors in a parallel computing environment where a parallel application is to be run on a cluster of nodes wherein at least one of the nodes has multiple processors sharing a common memory, the method comprising using compiler based communication analysis to map Message Passing Interface processes to processors on the nodes, whereby at least some more heavily communicating processes are mapped to processors within nodes. Other methods, apparatus, and computer readable media are also provided. | 08-26-2010 |
20100223618 | SCHEDULING JOBS IN A CLUSTER - There is provided a method and system for scheduling a job in a cluster, the cluster comprises multiple computing nodes, and the method comprises: defining rules for constructing virtual sub-clusters of the multiple computing nodes; constructing the multiple nodes in the cluster into multiple virtual sub-clusters based on the rules, wherein one computing node can only be included in one virtual sub-cluster; dispatching a received job to a selected virtual sub-cluster; and scheduling at least one computing node for the dispatched job in the selected virtual sub-cluster. Further, the job is dispatched to the selected virtual sub-cluster based on characteristics of the job and/or characteristics of virtual sub-clusters. The present invention can increase the throughput of scheduling effectively. | 09-02-2010 |
20100235840 | POWER MANAGEMENT USING DYNAMIC APPLICATION SCHEDULING - One embodiment provides a method of managing power in a datacenter having a plurality of servers. A number of policy settings are specified for the datacenter, including a power limit for the datacenter. The power consumption attributable to each of a plurality of applications executable as a job on one or more of the servers is determined. The power consumption attributable to each application may be further qualified according to the type of server on which the application is executed. Having determined the power consumption attributable to various applications executable as jobs, the applications may be executed on the servers as jobs such that the total power consumption attributable to the currently executed jobs remains within the selected datacenter power limit. | 09-16-2010 |
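A greedy admission check against the datacenter power limit might look like the following; the power_table keyed by (application, server type) mirrors the abstract's per-application, per-server-type qualification, but its shape and the greedy policy are assumptions.

    def admit_jobs(pending, running, power_limit, power_table):
        """Greedily start pending jobs while the power attributed to all
        running jobs stays within the datacenter limit."""
        used = sum(power_table[(j["app"], j["server_type"])] for j in running)
        started = []
        for job in pending:
            draw = power_table[(job["app"], job["server_type"])]
            if used + draw <= power_limit:
                running.append(job)
                started.append(job)
                used += draw
        return started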
20100235841 | INFORMATION PROCESSING APPARATUS AND METHOD OF CONTROLLING SAME - There is disclosed an information processing apparatus, and a corresponding method, for executing a workflow having a plurality of steps. The information processing apparatus registers the workflow having a plurality of steps and manages a start parameter for indicating a condition for starting each step included in the workflow and an end parameter that is generated at the end of each step. The apparatus determines a second step to follow a first step based on the end parameter of the first step and the managed start parameters. | 09-16-2010 |
20100242040 | SYSTEM AND METHOD OF TASK ASSIGNMENT IN A DISTRIBUTED PROCESSING SYSTEM - A method of task assignment in a distributed processing system including a plurality of processors is proposed. The method of task assignment includes calculating utilities of tasks to be processed in execution units included in each processor and arranging the calculated results in descending order; calculating utility difference values between the execution units included in each processor and outputting a highest difference value; comparing a utility of the task with the output highest difference value; designating the task to be assigned to the execution unit having the lowest utility in a processor in which the highest difference value is generated when the utility of the task is less than or equal to the output highest difference value; repeating the calculating, comparing, and designating in the order of the arranged tasks; and assigning the tasks to the designated targets. | 09-23-2010 |
20100251246 | System and Method for Generating Job Schedules - A system and method for generating a test environment schedule containing an order of executing job control language (JCL) jobs in a test computing environment is provided. The system comprises a memory which stores a seed schedule containing a plurality of members having common JCL jobs appropriate for different test environments, with each member containing a plurality of JCL jobs in a predetermined order of execution. The memory also stores a parameter file containing parameters for modifying the seed schedule according to a specific test environment. The system also includes an environment schedule module executable by a processor and is adapted to convert the seed schedule to the test environment schedule to be executed in the specific test environment as specified in the stored parameter file. | 09-30-2010 |
20100251247 | CHANGE MANAGEMENT AUTOMATION TOOL - A change management system for an IT environment or other enterprise level environment may comprise a server comprising memory and a controller. A change management application comprising machine readable instructions may be stored in the memory. The change management application may be arranged to perform the following steps: receive a plurality of work orders via a network to be performed during a maintenance period, concatenate the plurality of work orders to generate a master plan for performing the work orders during the maintenance period, and receive status updates for the work orders during the maintenance period down to the individual step level. A display may display a view of the master plan during the maintenance period. The view may include information related to the work orders and a status of the work orders. The status may be updated automatically based on status updates received by the change management application. | 09-30-2010 |
20100251248 | JOB PROCESSING METHOD, COMPUTER-READABLE RECORDING MEDIUM HAVING STORED JOB PROCESSING PROGRAM AND JOB PROCESSING SYSTEM - For data obtaining target at execution of a new task, if a data set as a processing target is beforehand allocated to a data allocation area in an allocation-target execution server as a target of allocation, a schedule server of a job processing system sets the data set as the data obtaining target; if the data set as the processing target is not beforehand allocated to the data allocation area in any one of the execution servers, the schedule server sets the data in the external storage area as the data obtaining target; and if the data set as the processing target is beforehand allocated to the data allocation area in a second execution server other than the allocation-target execution server, the schedule server sets the data set allocated to the second execution server as the data obtaining target. | 09-30-2010 |
20100251249 | DEVICE MANAGEMENT SYSTEM AND DEVICE MANAGEMENT COMMAND SCHEDULING METHOD THEREOF - A device management system and device management scheduling method thereof, in which a server transmits to a client a scheduling context including a device management command and a schedule for the performing of the device management command, and the client generates a device management tree using the device management scheduling context, performs the command when a specific scheduling condition is satisfied, and, if necessary, reports the command performance result to the server, whereby the server performs a device management such as requesting a command to be performed under a specific condition, dynamically varying the scheduling condition, and the like. | 09-30-2010 |
20100262966 | MULTIPROCESSOR COMPUTING DEVICE - A computing device includes a first processor configured to operate at a first speed and consume a first amount of power and a second processor configured to operate at a second speed and consume a second amount of power. The first speed is greater than the second speed and the first amount of power is greater than the second amount of power. The computing device also includes a scheduler configured to assign processes to the first processor only if the processes utilize their entire timeslice. | 10-14-2010 |
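The stated policy reduces to a one-line test per scheduling decision. A toy sketch, with invented field names, might look like this:

```python
# Toy sketch of the stated policy: a process earns a slot on the fast,
# power-hungry core only if it has been using its entire timeslice;
# everything else runs on the slower, lower-power core. Names are invented.

def pick_core(process, timeslice_ns):
    if process["last_slice_used_ns"] >= timeslice_ns:
        return "fast_core"   # fully utilizes its quantum
    return "slow_core"       # bursty/interactive work stays on the small core
```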
20100262967 | Completion Arbitration for More than Two Threads Based on Resource Limitations - A mechanism is provided for thread completion arbitration. The mechanism comprises executing more than two threads of instructions simultaneously in the processor, selecting a first thread from a first subset of threads, in the more than two threads, for completion of execution within the processor, and selecting a second thread from a second subset of threads, in the more than two threads, for completion of execution within the processor. The mechanism further comprises completing execution of the first and second threads by committing results of the execution of the first and second threads to a storage device associated with the processor. At least one of the first subset of threads or the second subset of threads comprises two or more threads from the more than two threads. The first subset of threads and the second subset of threads have different threads from one another. | 10-14-2010 |
20100262968 | Execution Environment for Data Transformation Applications - The execution environment provides for scalability where components will execute in parallel and exploit various patterns of parallelism. Dataflow applications are represented by reusable dataflow graphs called map components, while the executable version is called a prepared map. Using runtime properties, the prepared map is executed in parallel with a thread allocated to each map process. The execution environment monitors threads, detects and corrects deadlocks, and logs and controls program exceptions; in addition, the data input and output ports of the map components are processed in parallel to take advantage of data partitioning schemes. Port implementation supports multi-state null value tokens to more accurately report exceptions. Data tokens are batched to minimize synchronization and transportation overhead and thread contention. | 10-14-2010 |
20100275208 | Reduction Of Memory Latencies Using Fine Grained Parallelism And Fifo Data Structures - Software rendering and fine grained parallelism are utilized to reduce/avoid memory latency in a multi-processor (MP) system. According to one embodiment, the management of the transfer of data from one processor to another in the MP environment is moved into a low overhead hardware system. The low overhead hardware system may be a FIFO (“First In First Out”) hardware control. Each FIFO may be real or virtual. | 10-28-2010 |
20100275209 | READER/WRITER LOCK WITH REDUCED CACHE CONTENTION - A scalable locking system is described herein that allows processors to access shared data with reduced cache contention to increase parallelism and scalability. The system provides a reader/writer lock implementation that uses randomization and spends extra space to spread possible contention over multiple cache lines. The system avoids updates to a single shared location in acquiring/releasing a read lock by spreading the lock count over multiple sub-counts in multiple cache lines, and hashing thread identifiers to those cache lines. Carefully crafted invariants allow the use of partially lock-free code in the common path of acquisition and release of a read lock. A careful protocol allows the system to reuse space allocated for a read lock for subsequent locking to avoid frequent reallocating of read lock data structures. The system also provides fairness for write-locking threads and uses object pooling techniques to reduce costs associated with the lock data structures. | 10-28-2010 |
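The striping idea at the heart of this abstract can be shown in a few lines. The sketch below covers only the striped read count with thread-id hashing; the lock-free fast path, writer fairness, and object pooling mentioned in the abstract are omitted, and the stripe count is an invented constant.

```python
# Sketch of count striping, assuming details not in the abstract: the read
# count is split across several sub-counts, and each reader picks its slot
# by hashing its thread id, spreading contention over cache lines.

import threading

STRIPES = 16  # in a real implementation each slot would be cache-line padded

class StripedReadLock:
    def __init__(self):
        self._counts = [0] * STRIPES
        self._slot_locks = [threading.Lock() for _ in range(STRIPES)]

    def _slot(self):
        return hash(threading.get_ident()) % STRIPES  # thread-id hashing

    def acquire_read(self):
        s = self._slot()
        with self._slot_locks[s]:   # touches only one of STRIPES locations
            self._counts[s] += 1

    def release_read(self):
        s = self._slot()
        with self._slot_locks[s]:
            self._counts[s] -= 1

    def readers_active(self):
        return any(self._counts)    # a writer would wait for this to drain
```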
20100275210 | Execution engine for business processes - An execution engine is disclosed for executing business processes. An executable object model is generated for a business process document. Executable object models of business processes are assigned to virtual processors. | 10-28-2010 |
20100281482 | APPLICATION EFFICIENCY ENGINE - A system and a method are provided. Performance and capacity statistics, with respect to an application executing on one or more VMs, may be accessed and collected. The collected performance and capacity statistics may be analyzed to determine an improved hardware profile for efficiently executing the application on a VM. VMs with a virtual hardware configuration matching the improved hardware profile may be scheduled and deployed to execute the application. Performance and capacity statistics, with respect to the VMs, may be periodically analyzed to determine whether a threshold condition has occurred. When the threshold condition has been determined to have occurred, performance and capacity statistics, with respect to VMs having different configurations corresponding to different hardware profiles, may be automatically analyzed to determine an updated improved hardware profile. VMs for executing the application may be redeployed with virtual hardware configurations matching the updated improved profile. | 11-04-2010 |
20100281483 | PROGRAMMABLE SCHEDULING CO-PROCESSOR - A scheduling co-processor for scheduling the execution of threads on a processor is disclosed. In certain embodiments, the scheduling co-processor includes one or more engines (such as lookup tables) that are programmable with a Petri-net representation of a thread scheduling algorithm. The scheduling co-processor may further include a token list to store tokens associated with the Petri-net; an enabled-thread list to indicate which threads are enabled for execution in response to particular tokens being present in the token list; and a ready-thread list to indicate which threads from the enabled-thread list are ready for execution when data and/or space availability conditions associated with the threads are satisfied. | 11-04-2010 |
20100281484 | SHARED JOB SCHEDULING IN ELECTRONIC NOTEBOOK - Architecture that synchronizes a job to a shared notebook, eliminating the need for user intervention and guaranteeing that only one instance of the notebook client performs the task. A job tracking component creates and maintains tracking information of jobs processed against shared notebook information. A scheduling component synchronizes a new job against the shared notebook information based on the tracking information. The tracking information can be a file or cells stored at a root level of a hierarchical data collection that represents the electronic notebook. The file includes properties related to a job that has been processed. The properties are updated as new jobs are processed. Job scheduling includes whole file updates and/or incremental updates to the shared notebook information. | 11-04-2010 |
20100287555 | USING COMPOSITE SYSTEMS TO IMPROVE FUNCTIONALITY - Systems and methods are provided for enabling communication between a composite system providing additional functionality not contained in existing legacy systems and other existing systems using different commands, variables, protocols, methods, or instructions, when data may be located on more than one system. In an embodiment, multiple software layers are used to independently manage different aspects of an application. A business logic layer may be used in an embodiment to facilitate reading/writing operations on data that may be stored locally and/or on external systems using different commands, variables, protocols, methods, or instructions. A backend abstraction layer may be used in an embodiment in conjunction with the business logic layer to facilitate communication with the external systems. A user interface layer may be used in an embodiment to manage a user interface, a portal layer to manage a user context, and a process logic layer to manage a workflow. | 11-11-2010 |
20100287556 | Computer System, Control Apparatus For A Machine, In Particular For An Industrial Robot, And Industrial Robot - The invention relates to a computer system ( | 11-11-2010 |
20100287557 | METHOD FOR THE MANAGEMENT OF TASKS IN A DECENTRALIZED DATA NETWORK - In a method for the management of tasks in a decentralized data network with a plurality of nodes for carrying out the tasks, resources are distributed based on a mapping rule, in particular a hash function. A task that is to be suspended is distributed by dividing the process image of the task into segments and by distributing the segments over the nodes using the mapping rule. Thus, a distributed swap space is created so that tasks can also be carried out on nodes whose swap space is not sufficient on its own. The method can be used in embedded systems with a limited storage capacity and/or in a voltage distribution system, wherein the nodes are, for example, switching units in the voltage distribution system. The method can also be used in any other technical systems such as, for example, a power generation system, an automation system and the like. | 11-11-2010 |
20100293547 | INFORMATION PROCESSING APPARATUS, METHOD FOR CONTROLLING INFORMATION PROCESSING APPARATUS, AND PROGRAM - A loss of convenience that may occur when a process flow usable by a specific user is registered as a process flow commonly usable by multiple users is reduced. To accomplish this, an information processing apparatus includes a registration unit that registers a process flow for executing predetermined processing according to a predefined set value, the process flow being registered as a process flow that is usable by a specific user or a process flow that is commonly usable by a plurality of users, a changing unit that changes the process flow that is usable by the specific user to the process flow that is usable by the plurality of users, and a control unit that, when the changing unit changes the process flow, allows a user to change the set value to another set value. | 11-18-2010 |
20100293548 | METHOD AND COMPUTER SYSTEM FOR ADMINISTRATION OF MEDICAL APPLICATIONS EXECUTING IN PARALLEL - A method and a computer system are disclosed for administration of medical applications running in parallel. At least one embodiment of the method includes creation of a number of application components as a result of the beginning of a number of user actions; provision of a module for parallel execution and/or for coordination of the previously created application components; provision of at least one communication interface for exchanging messages and/or data between an application component and a command which is of interest to the application component and which has been initiated by one of the user actions; and removal of the application component created by a user action after the user action has ended. | 11-18-2010 |
20100299668 | Associating Data for Events Occurring in Software Threads with Synchronized Clock Cycle Counters - Methods, apparatuses, and computer-readable storage media are disclosed for reducing power by reducing hardware-thread toggling in a multi-processor. In a particular embodiment, a method is disclosed that includes collecting data for events occurring in a plurality of software threads being processed by a processor, where the data for each of the events includes a value of an associated clock cycle counter upon occurrence of the event. Data is correlated for the events occurring for each of the plurality of threads by starting each of a plurality of clock cycle counters associated with the software threads at a common time. Alternatively, data is correlated for the events by logging a synchronizing event within each of the plurality of software threads. | 11-25-2010 |
20100299669 | Generation of a Comparison Task List of Task Items - A computing system generates and displays a comparison task list that reports differences between a source task list for a project and a modified task list for the project. The comparison task list may enable a user to determine the implications of changes to the project by providing a comparison of the source task list and the modified task list. The computing system generates the comparison task list by generating the comparison task list as a copy of the source task list. The computing system automatically adds each task item in the modified task list that does not have an equivalent task item in the comparison task list to the comparison task list at positions that depend on whether the task items have previously-processed sibling task items in the modified task list. When the computing system has processed each task item in the modified task list, the computing system displays the comparison task list. | 11-25-2010 |
20100306777 | WORKFLOW MESSAGE AND ACTIVITY CORRELATION - Embodiments are directed to generating trace events that are configured to report an association between a workflow activity and a message. A computer system receives a message over a communication medium, where the workflow activity includes a unique workflow activity identifier (ID) that uniquely identifies the workflow activity. The message also includes a unique message ID that uniquely identifies the message. The computer system generates a trace event that includes a combination of the unique workflow activity ID and the unique message ID. The trace event is configured to report the association between the workflow activity and the message. The computer system also stores the generated trace event in a data store. | 12-02-2010 |
20100318995 | THREAD SAFE CANCELLABLE TASK GROUPS - A scheduler in a process of a computer system schedules tasks of a task group for concurrent execution by multiple execution contexts. The scheduler provides a mechanism that allows the task group to be cancelled by an arbitrary execution context or an asynchronous error state. When a task group is cancelled, the scheduler sets a cancel indicator in each execution context that is executing tasks corresponding to the cancelled task group and performs a cancellation process on each of the execution contexts where a cancel indicator is set. The scheduler also creates local aliases to allow task groups to be used without synchronization by execution contexts that are not directly bound to the task groups. | 12-16-2010 |
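The cancellation flow described in this entry maps naturally onto a flag checked by each execution context. The following is an illustrative sketch only, not the patented scheduler; the queue-based task distribution is an assumption.

```python
# Illustrative sketch: cancelling a task group sets a cancel indicator that
# every execution context running the group's tasks checks between, and
# ideally during, tasks. Structure is assumed, not taken from the patent.

import queue
import threading

class TaskGroup:
    def __init__(self, tasks):
        self.tasks = queue.Queue()
        for t in tasks:
            self.tasks.put(t)
        self.cancelled = threading.Event()  # the cancel indicator

    def cancel(self):
        self.cancelled.set()  # callable from an arbitrary execution context

def execution_context(group):
    while not group.cancelled.is_set():     # cancellation check per task
        try:
            task = group.tasks.get_nowait()
        except queue.Empty:
            return                          # group finished normally
        task()  # a long-running task would also poll group.cancelled
```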
20100318996 | METHODS AND SYSTEMS FOR SHARING COMMON JOB INFORMATION - Apparatus and methods are provided for utilizing a plurality of processing units. A method comprises selecting a pending job from a plurality of unassigned jobs based on a plurality of assigned jobs for the plurality of processing units and assigning the pending job to a first processing unit. Each assigned job is associated with a respective processing unit, wherein the pending job is associated with a first segment of information that corresponds to a second segment of information for a first assigned job. The method further comprises obtaining the second segment of information that corresponds to the first segment of information from the respective processing unit associated with the first assigned job, resulting in an obtained segment of information and performing, by the first processing unit, the pending job based at least in part on the obtained segment of information. | 12-16-2010 |
20100325631 | METHOD AND APPARATUS FOR INCREASING LOAD BANDWIDTH - A method and apparatus for dual-target register allocation is described, intended to enable the efficient mapping/renaming of registers associated with instructions within a pipelined microprocessor architecture. | 12-23-2010 |
20100325632 | Workload scheduling method and system with improved planned job duration updating scheme - A method for scheduling execution of a work unit in a data processing system comprises assigning to the work unit an expected execution duration; executing the work unit; determining an actual execution duration of the work unit; determining a difference between the actual execution duration and the expected duration; and conditionally adjusting the expected execution duration assigned to the work unit based on the measured actual execution duration, wherein the conditional adjusting includes preventing the adjustment of the expected execution duration in case said difference exceeds a predetermined threshold. The method further includes associating with the work unit a parameter having a prescribed value adapted to provide an indication of unconditional adjustment of the expected execution duration: in case said parameter takes the prescribed value, the expected duration assigned to the work unit is adjusted based on the measured actual execution duration even if the difference in durations exceeds the predetermined threshold. | 12-23-2010 |
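The update rule reduces to a small pure function. This is one reading of the scheme, with invented names; the actual patented logic may differ.

```python
# One reading of the update rule: the planned duration is refreshed from the
# measurement unless the deviation exceeds a threshold, and a per-work-unit
# flag can force the refresh unconditionally. All names are invented.

def update_expected(expected, measured, threshold, always_update=False):
    if always_update or abs(measured - expected) <= threshold:
        return measured   # adopt the observed duration for the next plan
    return expected       # deviation too large: treat the run as an outlier

# e.g. update_expected(600, 1900, 300) keeps 600 (outlier rejected), while
# update_expected(600, 1900, 300, always_update=True) returns 1900.
```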
20100333094 | JOB-PROCESSING NODES SYNCHRONIZING JOB DATABASES - A first node of a network updates a first job database to indicate that a first job is executing or is about to be executed on the first node. Network nodes are synchronized so that other nodes update their respective job databases to indicate that the first job is executing on said first node. | 12-30-2010 |
20100333095 | Bulk Synchronization in Transactional Memory Systems - A method and system for acquiring multiple software locks in bulk is disclosed. When multiple locks need to be acquired, such as for atomic transactions in transactional memory systems, the disclosed techniques may be applied to consolidate computationally expensive memory barrier operations across the lock acquisitions. A system may acquire multiple locks in bulk, at least in part, by modifying values in one or more fields of multiple locks and by then performing a memory barrier operation to ensure that the modified values in the multiple locks are visible to other application threads. The technique may be repeated for locks that the system fails to acquire during earlier iterations until all required locks are acquired. The described technique may be applied to various scenarios including static and/or dynamic transactional locking protocols. | 12-30-2010 |
20100333096 | Transactional Locking with Read-Write Locks in Transactional Memory Systems - A system and method for transactional memory using read-write locks is disclosed. Each of a plurality of shared memory areas is associated with a respective read-write lock, which includes a read-lock portion indicating whether any thread has a read-lock for read-only access to the memory area and a write-lock portion indicating whether any thread has a write-lock for write access to the memory area. A thread executing a group of memory access operations as an atomic transaction acquires the proper read or write permissions before performing a memory operation. To perform a read access, the thread attempts to obtain the corresponding read-lock and succeeds if no other thread holds a write-lock for the memory area. To perform a write-access, the thread attempts to obtain the corresponding write-lock and succeeds if no other thread holds a write-lock or read-lock for the memory area. | 12-30-2010 |
20100333097 | METHOD AND SYSTEM FOR MANAGING A TASK - A computer readable storage medium including executable instructions for managing a task. Instructions include receiving a request. Instructions further include determining a task corresponding with the request using a request-to-task mapping. Instructions include obtaining a task entry corresponding with the task from a task store, where the task entry associates the task with an action and a predicate for performing the action. Instructions further include creating a task object in a task pool using the task entry. Instructions further include receiving an event notification at the task engine, where the event notification is associated with an event. Instructions further include determining whether the predicate for performing the action is satisfied by the event. Instructions further include placing the task object in a task queue when the predicate for performing the action is satisfied by the event. | 12-30-2010 |
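The request/task/predicate pipeline described here is easy to condense into code. Every name in the sketch below is invented; it shows the shape of the flow, not the patented system.

```python
# Condensed sketch of the described flow: a request is mapped to a task
# entry, a task object enters the pool, and an event moves it to the run
# queue once its predicate is satisfied. All identifiers are hypothetical.

from collections import deque

request_to_task = {"thumbnail": "thumbnail_task"}   # request-to-task mapping
task_store = {
    "thumbnail_task": {
        "action": lambda: print("generating thumbnail"),
        "predicate": lambda event: event == "image_uploaded",
    },
}
task_pool, task_queue = [], deque()

def handle_request(request):
    task_pool.append(task_store[request_to_task[request]])  # create task object

def handle_event(event):
    for task in list(task_pool):
        if task["predicate"](event):   # predicate satisfied by this event?
            task_pool.remove(task)
            task_queue.append(task)    # queued; its action can now be run
```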
20110004879 | METHOD AND APPARATUS FOR ELIMINATING WAIT FOR BOOT-UP - A method and apparatus for eliminating the wait for boot-up of an apparatus while preventing increased power usage. The method includes predicting a boot-up schedule according to a determined usage schedule, and scheduling boot-up time according to the predicted boot-up schedule, wherein said boot-up schedule eliminates the wait for boot-up while preventing increased power usage. | 01-06-2011 |
20110004880 | System and Method for Data Transformation using Dataflow Graphs - A system and method for managing data, such as in a data warehousing, analysis, or similar applications, where dataflow graphs are expressed as reusable map components, at least some of which are selected from a library of components, and map components are assembled to create an integrated dataflow application. Composite map components encapsulate a dataflow pattern using other maps as subcomponents. Ports are used as link points to assemble map components and are hierarchical and composite allowing ports to contain other ports. The dataflow application may be executed in a parallel processing environment by recognizing the linked data processes within the map components and assigning threads to the linked data processes. | 01-06-2011 |
20110004881 | LOOK-AHEAD TASK MANAGEMENT - A method comprising receiving tasks for execution on at least one processor, and processing at least one task within one processor. To decrease the turn-around time of task processing, a method comprises, parallel to processing the at least one task, verifying readiness of at least one next task assuming the currently processed task is finished, preparing a ready-structure for the at least one task verified as ready, and starting the at least one task verified as ready using the ready-structure after the currently processed task is finished. | 01-06-2011 |
20110010716 | Domain Bounding for Symmetric Multiprocessing Systems - Methods and apparatuses for developing symmetric and asymmetric software applications on a single monolithic symmetric multiprocessing operating system are disclosed. An enabling framework may be provided for one or all of the following software design patterns: application work load sharing between all processors present in a multi-processor system in a symmetric fashion; application work load sharing between all processors present in a multi-processor system in an asymmetric fashion using task to processor soft affinity declarations; and application work load sharing between all processors present in a multi-processor system using bound computational domains. Further, a particular computational task or a set of computational tasks may be bound to a particular processing unit. Subsequently, when one such task is to be scheduled, the symmetric multiprocessing operating system ensures that the bound processing unit processes the instruction. When the bound processing unit is not processing the particular computational instruction, the bound processing unit may enter a low power or idle state. | 01-13-2011 |
20110010717 | JOB ASSIGNING APPARATUS AND JOB ASSIGNMENT METHOD - A job assigning apparatus connected to a plurality of arithmetic units for assigning a job to each of the arithmetic units, the job assigning apparatus includes a power consumption acquiring processor for acquiring power consumptions with respect to each of the arithmetic units, a selector for selecting one of the arithmetic units as a submission destination in increasing order of the power consumptions acquired by the power consumption acquiring processor, and a job submitting processor for submitting a job to the submission destination. | 01-13-2011 |
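The selection rule in this entry is a simple argmin over per-unit power readings. A minimal sketch follows; `unit_power` and `submit` stand in for the acquiring processor and job submitting processor the abstract names, and are not the patent's interfaces.

```python
# Minimal sketch of the selection rule: submit the job to the arithmetic
# unit currently consuming the least power. Interfaces are hypothetical.

def assign_job(job, unit_power, submit):
    """unit_power: {unit_id: current watts}; submit: callable(unit_id, job)."""
    target = min(unit_power, key=unit_power.get)  # lowest power draw first
    submit(target, job)
    return target

# assign_job("job-42", {"u0": 95.0, "u1": 60.5, "u2": 88.2},
#            lambda u, j: print(f"{j} -> {u}")) submits to "u1".
```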
20110010718 | ELECTRONIC DEVICE, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT HAVING COMPUTER-READABLE INFORMATION PROCESSING PROGRAM - An electronic device includes a status management unit detecting a change in a status of the electronic device and recognizing the status; an added program control unit applying an added program to a program of the electronic device in response to a validation request for the added program, the added program being capable of dynamically interrupting the program of the electronic device with a process; and an application determination information storage unit storing application determination information indicating whether the added program can be applied to the program of the electronic device depending on the status of the electronic device recognized by the status management unit. The added program control unit determines whether the added program can be applied based on (1) the status of the electronic device recognized by the status management unit upon reception of the validation request, and (2) the application determination information. | 01-13-2011 |
20110010719 | ELECTRONIC DEVICE, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - An electronic device includes a control information storing unit; a setting unit configured to request a user to specify, for each first program in the electronic device, a reception setting indicating whether to allow reception of a second program to be applied to the first program and to store the reception setting as control information for the first program in the control information storing unit, the second program being configured to insert a process in a process of the first program; a reception determining unit configured to determine whether to allow reception of the second program based on the control information for the first program; and a receiving unit configured to receive or refuse to receive the second program according to the determination result of the reception determining unit. | 01-13-2011 |
20110010720 | SYSTEM AND METHOD FOR MANAGING ELECTRONIC ASSETS - An asset management system is provided which comprises one or more controllers, which operate as main servers and can be located at the headquarters of an electronic device manufacturer to remotely control their operations at any global location. The controller can communicate remotely over the Internet or other network to control one or more secondary or remote servers, herein referred to as appliances. The appliances can be situated at different manufacturing, testing or distribution sites. The controller and appliances comprise hardware security modules (HSMs) to perform sensitive and high trust computations, store sensitive information such as private keys, perform other cryptographic operations, and establish secure connections between components. The HSMs are used to create secure end-points between the controller and the appliance and between the appliance and the secure point of trust in an asset control core embedded in a device. | 01-13-2011 |
20110023039 | THREAD THROTTLING - Techniques for scheduling a thread running in a computer system are disclosed. Example computer systems may include but are not limited to a multiprocessor having first and second cores, an operating system, and a memory bank for storing data. The example methods may include but are not limited to measuring a temperature of the memory bank and determining whether the thread includes a request for data stored in the memory bank, if the temperature of the memory bank exceeds a predetermined temperature. The methods may further include but are not limited to slowing down the execution of the thread upon determining that the thread includes a request for data. | 01-27-2011 |
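The throttling decision can be sketched in a few lines. The temperature limit and the idea of shrinking the timeslice to "slow down" the thread are assumptions layered on the abstract, not the patented mechanism.

```python
# Sketch of the throttling decision; the limit and the quantum-halving
# strategy are invented for illustration.

TEMP_LIMIT_C = 85.0  # stand-in for the predetermined temperature

def next_quantum_ms(bank_temp_c, thread_requests_bank, base_quantum_ms=10):
    if bank_temp_c > TEMP_LIMIT_C and thread_requests_bank:
        return base_quantum_ms // 2   # slow the thread; let the bank cool
    return base_quantum_ms
```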
20110023040 | POWER-EFFICIENT INTERACTION BETWEEN MULTIPLE PROCESSORS - A technique for processing instructions in an electronic system is provided. In one embodiment, a processor of the electronic system may submit a unit of work to a queue accessible by a coprocessor, such as a graphics processing unit. The coprocessor may process work from the queue, and write a completion record into a memory accessible by the processor. The electronic system may be configured to switch between a polling mode and an interrupt mode based on progress made by the coprocessor in processing the work. In one embodiment, the processor may switch from an interrupt mode to a polling mode upon completion of a threshold amount of work by the coprocessor. Various additional methods, systems, and computer program products are also provided. | 01-27-2011 |
20110023041 | PROCESS MANAGEMENT SYSTEM AND METHOD FOR MONITORING PROCESS IN AN EMBEDDED ELECTRONIC DEVICE - A checking method for a process in an embedded electronic device includes the following steps. A name of an application is recorded to an application recorder. The application is executed by a system processor. An active application list is acquired from the system processor. An execution control may determine whether the name of the recorded application in the application recorder exists in the active application list. If the name of the recorded application does not exist, the system processor may shut down at least one child process related to the application. | 01-27-2011 |
20110023042 | SCALABLE SOCKETS - A data processing system supporting a network interface device and comprising: a plurality of sets of one or more data processing cores; and an operating system arranged to support at least one socket operable to accept data received from the network, the data belonging to one of a plurality of data flows; wherein the socket is configured to provide an instance of at least some of the state associated with the data flows per said set of data processing cores. | 01-27-2011 |
20110023043 | EXECUTING MULTIPLE THREADS IN A PROCESSOR - Provided are a method, system, and program for executing multiple threads in a processor. Credits are set for a plurality of threads executed by the processor. The processor alternates among executing the threads having available credit. The processor decrements the credit for one of the threads in response to executing the thread and initiates an operation to reassign credits to the threads in response to depleting all the thread credits. | 01-27-2011 |
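The credit rotation in this entry is straightforward to model. The sketch below is one reading of the abstract, with invented credit values; threads are modeled as callables run one quantum at a time.

```python
# Sketch of the credit rotation: the processor alternates among threads that
# still have credit, spends one credit per quantum, and reassigns credits
# once all are depleted. Numbers and structure are assumptions.

from itertools import cycle

def run_with_credits(thread_funcs, initial_credit=4, quanta=12):
    credits = {t: initial_credit for t in thread_funcs}
    order = cycle(thread_funcs)
    for _ in range(quanta):
        if not any(credits.values()):                  # all credit depleted
            credits = {t: initial_credit for t in thread_funcs}  # reassign
        t = next(order)
        while credits[t] == 0:                         # skip spent threads
            t = next(order)
        t()                                            # run one quantum
        credits[t] -= 1
```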
20110023044 | SCHEDULING HIGHLY PARALLEL JOBS HAVING GLOBAL INTERDEPENDENCIES - A method of scheduling highly parallel jobs with global interdependencies is provided herein. The method includes the following steps: grouping input elements, each group being associated with an interdependency tag reflecting a level of interdependency between data associated with different input elements within a group; clustering the groups into collections of groups, wherein the clustered groups are associated with an interdependency tag reflecting a level of interdependency between groups, above a specified value; applying a conflict check to the collections of groups and to active jobs of a working set, to yield a conflict level between each collection of groups and each active job, by analyzing the interdependency tags of the collections of groups vis-à-vis interdependency tags associated with the active jobs; and adding collections of groups into the working set, wherein added collections of groups are associated with a conflict level below an acceptable conflict level. | 01-27-2011 |
20110029976 | PROCESSING SINGLETON TASK(S) ACROSS ARBITRARY AUTONOMOUS SERVER INSTANCES - Large scale internet services may be implemented using multiple discrete server instances. Some tasks of the large scale internet services may be singleton tasks, which may be advantageously processed by a sub-set of the server instances (e.g., merely one instance). Accordingly, as provided herein, a singleton task may be processed in a reliable manner based upon one or more instances of a protocol executed across a set of arbitrary autonomous server instances. In one example, the protocol may determine whether a lease for a singleton task is valid or expired. If the lease is expired, then an attempt to claim the lease may be performed by updating a current lease expiration with a new lease expiration. If the attempt is successful, then the singleton task may be processed until the new lease expires. | 02-03-2011 |
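The lease protocol paraphrases well into code. In the sketch below, a lock models the atomic conditional update that a shared store would provide in practice; all names are invented.

```python
# Paraphrase of the lease protocol; the lock stands in for an atomic
# compare-and-update against shared storage. Names are hypothetical.

import threading
import time

_lease = {"expires": 0.0, "owner": None}
_guard = threading.Lock()

def try_claim(instance_id, duration_s=30.0):
    """Claim the singleton task iff the current lease has expired."""
    now = time.time()
    with _guard:
        if _lease["expires"] > now:
            return False          # a valid lease is held by another instance
        _lease["expires"] = now + duration_s   # attempt the lease update
        _lease["owner"] = instance_id
        return True               # winner processes the task until expiry
```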
20110029977 | POLICY BASED INVOCATION OF WEB SERVICES - Techniques for orchestrating workflows are disclosed herein. In an embodiment, a method of orchestrating a workflow is disclosed. In an embodiment, data is stored in a policy file which associates attributes with processes. User input is received. A process associated with an attribute is selected, where the attribute is based on the user input. The selected process is performed as part of the workflow. Also, processes may be added dynamically as part of any category inside the policy file without having to recompile or redesign the logic of the BPEL project. | 02-03-2011 |
20110035749 | Credit Scheduler for Ordering the Execution of Tasks - A method for scheduling the execution of tasks on a processor is disclosed. The purpose of the method is in part to serve the special needs of soft real-time tasks, which are time-sensitive. A parameter Δ is an estimate of the amount of time required to execute the task. Another parameter Γ is the maximum amount of time that the task is to spend in a queue before being executed. In the illustrative embodiment, the preferred wait time Γ | 02-10-2011 |
20110035750 | PROCESSING RESOURCE APPARATUS AND METHOD OF SYNCHRONISING A PROCESSING RESOURCE - A processing resource apparatus comprises a reference processing module comprising a set of reference stateful elements and a target processing module comprising a set of target stateful elements. A scan chain having a first mode for supporting manufacture testing is also provided, the scan chain being arranged to couple the reference processing module to the target processing module. The scan chain also has a second mode capable of synchronising the set of target stateful elements with the set of reference stateful elements in response to a synchronisation signal. | 02-10-2011 |
20110041131 | MIGRATING TASKS ACROSS PROCESSORS - The present disclosure is directed to a method for managing tasks in a computer system having a plurality of CPUs. Each task in the computer system may be configured to provide a migration ready indicator, which may be set when the task's set of live data shrinks or its working set of memory changes. The method may comprise associating a migration readiness queue with each of the plurality of CPUs, the migration readiness queue having a front-end and a back-end; analyzing a task currently executing on a particular CPU, wherein the particular CPU is one of the plurality of CPUs; placing the task in the migration readiness queue of the particular CPU based on the status of the task and/or the migration ready indicator of the task; and selecting at least one queued task from the front-end of the migration readiness queue of the particular CPU for migration when the particular CPU receives a task migration command. | 02-17-2011 |
20110041132 | ELASTIC AND DATA PARALLEL OPERATORS FOR STREAM PROCESSING - A method to optimize performance of an operator on a computer system includes determining whether the system is busy, decreasing a software thread level within the operator if the system is busy, and increasing the software thread level within the operator if the system is not busy and a performance measure of the system at a current software thread level of the operator is greater than a performance measure of the system when the operator has a lower software thread level. | 02-17-2011 |
20110041133 | PROCESSING OF STREAMING DATA WITH A KEYED DELAY - A keyed delay is used in the processing of streaming data to decrease the processing performed and the output provided. A first event, within a particular window, having a particular key starts a delay condition. Arriving events with the same key replace the previous arrival for that key until the delay condition is satisfied. In response thereto, the latest event with that key is output. | 02-17-2011 |
20110047552 | ENERGY-AWARE PROCESS ENVIRONMENT SCHEDULER - A device receives a request associated with a process, and determines one or more current states of one or more process resources used to execute the process request. The device also calculates a power consumption associated with execution of the process request by the one or more process resources, and assigns an urgency for the process request, where the urgency corresponds to a time-variant parameter that indicates a measure of necessity for the execution of the process request. The device further determines whether the execution of the process request can be delayed to a future time based on the one or more current states, the power consumption, and the urgency, and causes, based on the determination, the process request to be executed or delayed to the future time. | 02-24-2011 |
20110055838 | OPTIMIZED THREAD SCHEDULING VIA HARDWARE PERFORMANCE MONITORING - A system and method for efficient dynamic scheduling of tasks. A scheduler within an operating system assigns software threads of program code to computation units. A computation unit may be a microprocessor, a processor core, or a hardware thread in a multi-threaded core. The scheduler receives measured data values from performance monitoring hardware within a processor as the one or more processors execute the software threads. The scheduler may be configured to reassign a first thread assigned to a first computation unit coupled to a first shared resource to a second computation unit coupled to a second shared resource. The scheduler may perform this dynamic reassignment in response to determining from the measured data values a first measured value corresponding to the utilization of the first shared resource exceeds a predetermined threshold and a second measured value corresponding to the utilization of the second shared resource does not exceed the predetermined threshold. | 03-03-2011 |
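The reassignment trigger in this entry can be shown compactly. The threshold value and data layout below are assumptions layered on the abstract's description, not the patented scheduler.

```python
# Sketch of the reassignment trigger: migrate a thread away from a
# computation unit whose shared resource is saturated, toward one whose
# shared resource has headroom. Threshold and layout are invented.

UTIL_THRESHOLD = 0.90

def pick_unit(current_unit, unit_resource, resource_util):
    """unit_resource: {unit: shared resource}; resource_util: {resource: 0..1}."""
    if resource_util[unit_resource[current_unit]] <= UTIL_THRESHOLD:
        return current_unit                  # first resource is not saturated
    for unit, resource in unit_resource.items():
        if resource_util[resource] <= UTIL_THRESHOLD:
            return unit                      # migrate toward spare capacity
    return current_unit                      # everything saturated: stay put
```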
20110055839 | Multi-Core/Thread Work-Group Computation Scheduler - Execution units process commands from one or more command queues. Once a command is available on the queue, each unit participating in the execution of the command atomically decrements the command's work groups remaining counter by the work group reservation size and processes a corresponding number of work groups within a work group range. Once all work groups within a range are processed, an execution unit increments a work group processed counter. The unit that increments the work group processed counter to the value stored in a work groups to be executed counter signals completion of the command. Each execution unit that accesses a command also marks a work group seen counter. Once the work groups processed counter equals the work groups to be executed counter and the work group seen counter equals the number of execution units, the command may be removed or overwritten on the command queue. | 03-03-2011 |
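The counter protocol compresses into a small model. The sketch below covers only the remaining/processed counters (the work-group-seen counter and command reuse are left out), and a lock stands in for the atomic operations.

```python
# Compressed model of the counter protocol, reservation size invented.

import threading

class Command:
    def __init__(self, total_groups, reservation=8):
        self.remaining = total_groups   # work groups remaining counter
        self.processed = 0              # work groups processed counter
        self.total = total_groups       # work groups to be executed counter
        self.reservation = reservation  # work group reservation size
        self._lock = threading.Lock()   # models the atomic decrement/increment

    def reserve(self):
        """Atomically claim up to `reservation` work groups; 0 means done."""
        with self._lock:
            n = min(self.reservation, self.remaining)
            self.remaining -= n
            return n

    def finish(self, n):
        """Report n processed groups; True means this unit signals completion."""
        with self._lock:
            self.processed += n
            return self.processed == self.total
```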
20110061053 | MANAGING PREEMPTION IN A PARALLEL COMPUTING SYSTEM - The present invention provides a portable user space application release/reacquire of adapter resources for a given job on a node using information in a network resource table. The information in the network resource table is obtained when a user space application is loaded by some resource manager. The present invention provides a portable solution that will work for any interconnect where adapter resources need to be freed and reacquired without having to write a specific function in the device driver. In the present invention, the preemption request is done on a job basis using a key or “job key” that was previously loaded when the user space application or job originally requested the adapter resources. This is done for each OS instance where the job is run. | 03-10-2011 |
20110061054 | METHOD AND APPARATUS FOR SCHEDULING EVENT STREAMS - Apparatus and method for scheduling event streams. The apparatus includes (i) an interface for receiving event streams which are placed in queues and (ii) a scheduler which selects at least one event stream for dispatch depending on sketched content information data of the received event streams. The scheduler includes a sketching engine for sketching the received event streams to determine content information data and a selection engine for selecting at least one received event stream for dispatch depending on the determined content information data of the received event streams. The method includes the steps of (i) determining content information data about the content of event streams and (ii) selecting at least one event stream from the event streams for dispatch depending on the content information data. A computer program, when executed by a computer, causes the computer to perform the steps of the above method. | 03-10-2011 |
20110061055 | SYSTEM AND METHOD FOR GENERATING COMPUTING SYSTEM JOB FLOWCHARTS - A system and method for automatically generating flowcharts based on jobs within a mainframe job scheduling system is disclosed. The system may be interfaced through a web browser over a network (e.g., the Internet) in order to configure a job flowchart request. The system includes a job flow utility employing rules and logic to execute a Job Control Language (JCL) script, thereby invoking the creation of a job schedule based on a scheduling library and generating a delimited set of data that is stored within a database or saved as a delimited text file. The system also enables a user to view a job flowchart online or download the text-delimited file to open within existing charting applications. | 03-10-2011 |
20110067029 | THREAD SHIFT: ALLOCATING THREADS TO CORES - Techniques are generally described for allocating a thread to heterogeneous processor cores. Example techniques may include monitoring real time computing data related to the heterogeneous processor cores processing the thread, allocating the thread to the heterogeneous processor cores based, at least in part, on the real time computing data, and/or executing the thread by the respective allocated heterogeneous processor core. | 03-17-2011 |
20110067030 | FLOW BASED SCHEDULING - A job scheduler may schedule concurrent distributed jobs in a computer cluster by assigning tasks from the running jobs to compute nodes while balancing fairness with efficiency. Determining which tasks to assign to the compute nodes may be performed using a network flow graph. The weights on at least some of the edges of the graph encode data locality, and the capacities provide constraints that ensure fairness. A min-cost flow technique may be used to perform an assignment of the tasks represented by the network flow graph. Thus, online task scheduling with locality may be mapped onto a network flow graph, which in turn may be used to determine a scheduling assignment using min-cost flow techniques. The costs may encode data locality, fairness, and starvation-freedom. | 03-17-2011 |
20110067031 | Information Processing Apparatus and Control Method of the Same - According to one embodiment, an information processing apparatus for executing at least one executing target program, the apparatus includes: a sensor module configured to detect whether an operator is absent or not; a log information acquiring module configured to acquire log information including information about a date and time on which whether the operator is absent or not is detected by the sensor module and information about whether the operator is absent or not; a scheduling module configured to analyze an absence time zone in which the operator is absent based on the log information acquired by the log information acquiring module and to set to execute the at least one executing target program in the absence time zone based on a result of the analysis; and a processor configured to execute the at least one executing target program in the absence time zone. | 03-17-2011 |
20110072432 | METHOD TO AUTOMATICALLY REDIRECT SRB ROUTINES TO A zIIP ELIGIBLE ENCLAVE - A method to redirect SRB routines from otherwise non-zIIP eligible processes on an IBM z/OS series mainframe to a zIIP eligible enclave is disclosed. This redirection is achieved by intercepting otherwise blocked operations and allowing them to complete processing without errors imposed by the zIIP processor configuration. After appropriately intercepting and redirecting these blocked operations, more processing may be performed on the more financially cost-effective zIIP processor by users of mainframe computing environments. | 03-24-2011 |
20110072433 | Method to Automatically ReDirect SRB Routines to a ZIIP Eligible Enclave - A method to redirect SRB routines from otherwise non-zIIP eligible processes on an IBM z/OS series mainframe to a zIIP eligible enclave is disclosed. This redirection is achieved by intercepting otherwise blocked operations and allowing them to complete processing without errors imposed by the zIIP processor configuration. After appropriately intercepting and redirecting these blocked operations, more processing may be performed on the more financially cost-effective zIIP processor by users of mainframe computing environments. | 03-24-2011 |
20110072434 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR SCHEDULING A PROCESSING ENTITY TASK - A system, computer program and a method for scheduling a processing entity task in a multiple-processing entity system, the method includes initializing a scheduler; receiving a task data structure indicating that a pre-requisite to the execution of a task to be executed by a processing entity is the completion of a peripheral task that is executed by a peripheral; wherein the peripheral updates a peripheral task completion indicator once the peripheral task is completed; wherein the peripheral task completion indicator is accessible by the scheduler; and scheduling, by the scheduler, the task in response to the peripheral task completion indicator. | 03-24-2011 |
20110078688 | Virtualizing A Processor Time Counter - In one embodiment, the present invention includes a method for determining a scaling factor between a frequency of a first processor and a frequency of a second processor after a guest software is migrated from first processor to the second processor, and executing the guest software on the second processor using a virtual counter based on a physical counter of the second processor and the scaling factor. Other embodiments are described and claimed. | 03-31-2011 |
20110078689 | Address Mapping for a Parallel Thread Processor - A method for thread address mapping in a parallel thread processor. The method includes receiving a thread address associated with a first thread in a thread group; computing an effective address based on a location of the thread address within a local window of a thread address space; computing a thread group address in an address space associated with the thread group based on the effective address and a thread identifier associated with a first thread; and computing a virtual address associated with the first thread based on the thread group address and a thread group identifier, where the virtual address is used to access a location in a memory associated with the thread address to load or store data. | 03-31-2011 |
20110078690 | Opcode-Specified Predicatable Warp Post-Synchronization - One embodiment of the present invention sets forth a technique for performing a method for synchronizing divergent executing threads. The method includes receiving a plurality of instructions that includes at least one set-synchronization instruction and at least one instruction that includes a synchronization command, and determining an active mask that indicates which threads in a plurality of threads are active and which threads in the plurality of threads are disabled. For each instruction included in the plurality of instructions, the instruction is transmitted to each of the active threads included in the plurality of threads. If the instruction is a set-synchronization instruction, then a synchronization token, the active mask and the synchronization point is each pushed onto a stack. Or, if the instruction is a predicated instruction that includes a synchronization command, then each active thread that executes the predicated instruction is monitored to determine when the active mask has been updated to indicate that each active thread, after executing the predicated instruction, has been disabled. | 03-31-2011 |
20110088035 | RETROSPECTIVE EVENT PROCESSING PATTERN LANGUAGE AND EXECUTION MODEL EXTENSION - A novel and useful method, system and framework for extending event processing pattern language to include constructs and patterns in the language to support historical patterns and associated retrospective event processing that enable a user to define patterns that consist of both on-line streaming and historical (retrospective) patterns. This enables entire functions to be expressed in a single pattern language and also enables event processing optimization whereby function processing is mapped to a plurality of event processing agents (EPAs). The EPAs in turn are assigned to a physical processor and to threads within the processor. | 04-14-2011 |
20110088036 | Automated Administration Using Composites of Atomic Operations - Various techniques for automatically administering software systems using composites of atomic operations are disclosed. One method, which can be performed by an automation server, involves accessing information representing an activity that includes a first operation and a second operation. The information indicates that the second operation processes a value that is generated by the first operation. The method generates a sequence number as well as an output structure, which associates the sequence number with an output value generated by the first operation, and an input structure, which associates the sequence number with an input value consumed by the second operation. The method sends a message, via a network, to an automation agent implemented on a computing device. The computing device implements a software target of the first operation. The message includes information identifying the first operation as well as the output structure. | 04-14-2011 |
20110093856 | Thermal-Based Job Scheduling Among Server Chassis Of A Data Center - Thermal-based job scheduling among server chassis of a data center including identifying, by a data center management module in dependence upon a threshold fan speed for each server chassis, a plurality of server chassis having servers upon which one or more compute intensive jobs are executing, the data center management module comprising a module of automated computing machinery; identifying, by the data center management module, the compute intensive jobs currently executing on the identified plurality of server chassis; and moving, by the data center management module, the execution of the compute intensive jobs to one or more servers of chassis designated for compute intensive jobs. | 04-21-2011 |
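A rough sketch of the migration step follows. The fan-speed threshold, chassis representation, and `move_job` callback are all invented stand-ins for interfaces the abstract only names.

```python
# Sketch with invented interfaces: chassis whose fan speed exceeds the
# threshold have their compute-intensive jobs moved to designated chassis.

FAN_THRESHOLD_RPM = 6000  # stand-in for the per-chassis threshold fan speed

def rebalance(chassis_list, move_job):
    """chassis_list: [{'fan_rpm', 'jobs', 'for_compute_intensive'}];
    move_job(job, src, dst) performs the migration."""
    targets = [c for c in chassis_list if c["for_compute_intensive"]]
    if not targets:
        return
    for chassis in chassis_list:
        if chassis["fan_rpm"] <= FAN_THRESHOLD_RPM or chassis in targets:
            continue
        for job in [j for j in chassis["jobs"] if j["compute_intensive"]]:
            move_job(job, chassis, targets[0])   # migrate the hot job
```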
20110093857 | Multi-Threaded Processors and Multi-Processor Systems Comprising Shared Resources - An apparatus is provided comprising at least two processing entities. Shared resources are usable by a first and a second processing entity. A use of the shared resources is detected, and the execution of instructions associated with said processing entities is controlled based on the detection. | 04-21-2011 |
20110099550 | ANALYSIS AND VISUALIZATION OF CONCURRENT THREAD EXECUTION ON PROCESSOR CORES - An analysis and visualization is used to depict how a concurrent application executes threads on processor cores over time. With the analysis and visualization, a developer can readily identify thread migrations and thread affinity bugs that can degrade performance of the concurrent application. An example method receives information regarding processes or threads running during a selected period of time. The information is processed to determine which processor cores are executing which threads over the selected period of time. The information is analyzed, and the executing threads for each core are depicted as channel segments over time, which can be presented in a graphical display. The visualization can help a developer identify areas of code that can be modified to avoid thread migration or to reduce thread affinity bugs to improve processor performance of concurrent applications. | 04-28-2011 |
20110099551 | Opportunistically Scheduling and Adjusting Time Slices - Computerized methods, computer systems, and computer-readable media for governing how virtual processors are scheduled to particular logical processors are provided. A scheduler is employed to balance a load imposed by virtual machines, each having a plurality of virtual processors, across various logical processors (comprising a physical machine) that are running threads in parallel. The threads are issued by the virtual processors and often cause spin waits that inefficiently consume capacity of the logical processors that are executing the threads. Upon detecting a spin-wait state of the logical processor(s), the scheduler will opportunistically grant time-slice extensions to virtual processors that are running a critical section of code, thus, mitigating performance loss on the front end. Also, the scheduler will mitigate performance loss on the back end by opportunistically de-scheduling then rescheduling a virtual machine in a spin-wait state to render the logical processor(s) available for other work in the interim. | 04-28-2011 |
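The two mitigations in this entry (time-slice extension on the front end, de-schedule/reschedule on the back end) can be modeled in a few lines. The tick count and data layout below are assumptions, not the patented hypervisor logic.

```python
# Toy model of the two mitigations; all names and values are invented.

EXTENSION_TICKS = 2

def on_spin_wait(vproc, run_queue):
    """Called when the hypervisor detects a spin-waiting virtual processor."""
    if vproc["in_critical_section"]:
        vproc["timeslice"] += EXTENSION_TICKS  # front end: let it finish
    else:
        run_queue.remove(vproc)   # back end: free the logical processor now
        run_queue.append(vproc)   # and reschedule the vproc for later
```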
20110107337 | Hierarchical Reconfigurable Computer Architecture - A reconfigurable hierarchical computer architecture having N levels, where N is an integer value greater than one, wherein said N levels include a first level including a first computation block including a first data input, a first data output and a plurality of computing nodes interconnected by a first connecting mechanism, each computing node including an input port, a functional unit and an output port, the first connecting mechanism capable of connecting each output port to the input port of each other computing node; and a second level including a second computation block including a second data input, a second data output and a plurality of the first computation blocks interconnected by a second connecting means for selectively connecting the first data output of each of the first computation blocks and the second data input to each of the first data inputs and for selectively connecting each of the first data outputs to the second data output. | 05-05-2011 |
20110107338 | Selecting isolation level for an operation based on manipulated objects - Concurrency control overhead in transactional memory and main memory databases is reduced by automatically selecting the appropriate isolation level for each operation based on the objects accessed by the operation. | 05-05-2011 |
20110107339 | Inner Process - Methods, systems, and products for computer processing. In one general embodiment, the method comprises running an inner process in the context of an executing thread, wherein the thread has an original address space in memory, and hiding at least a portion of the memory from the inner process. The inner process may run on the same credentials as the thread. Running the inner process may include creating a new address space for the inner process in the memory and assigning the new address space to the thread, so that the inner process comprises its own address space. The inner process may be allowed to access only the new address space. The kernel may maintain the thread's original address space along with the new address space, so that multiple address spaces exist for a particular thread. The kernel may pass selected data from the thread to the inner process. | 05-05-2011 |
20110107340 | Clustering Threads Based on Contention Patterns - Techniques for grouping two or more threads based on lock contention information are provided. The techniques include determining lock contention information with respect to two or more threads, using the lock contention information with respect to the two or more threads to determine lock affinity between the two or more threads, using the lock affinity between the two or more threads to group the two or more threads into one or more thread clusters, and using the one or more thread clusters to perform scheduling of one or more threads. | 05-05-2011 |
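The grouping step in 20110107340 can be sketched as follows. This is an illustration only: the affinity measure (Jaccard overlap of contended lock sets) and the greedy union step are assumptions, not the publication's method, and all names are invented:

```python
from collections import defaultdict
from itertools import combinations

# Contention samples: (thread, lock) pairs observed at runtime.
samples = [("t1", "L1"), ("t2", "L1"), ("t1", "L2"),
           ("t3", "L3"), ("t4", "L3"), ("t2", "L2")]

locks_by_thread = defaultdict(set)
for thread, lock in samples:
    locks_by_thread[thread].add(lock)

def affinity(a, b):
    # Jaccard overlap of the two threads' contended-lock sets.
    shared = locks_by_thread[a] & locks_by_thread[b]
    union = locks_by_thread[a] | locks_by_thread[b]
    return len(shared) / len(union)

# Greedy union-find: merge threads whose affinity exceeds a threshold.
parent = {t: t for t in locks_by_thread}
def find(t):
    while parent[t] != t:
        t = parent[t]
    return t

for a, b in combinations(locks_by_thread, 2):
    if affinity(a, b) > 0.3:
        parent[find(a)] = find(b)

clusters = defaultdict(list)
for t in locks_by_thread:
    clusters[find(t)].append(t)
print(list(clusters.values()))  # e.g. [['t1', 't2'], ['t3', 't4']]
```

The resulting clusters are what a scheduler would then co-schedule, so threads that fight over the same locks run in a coordinated fashion.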
20110107341 | JOB SCHEDULING WITH OPTIMIZATION OF POWER CONSUMPTION - A scheduler is provided, which takes into account the location of the data to be accessed by a set of jobs. Once all the dependencies and the scheduling constraints of the plan are respected, the scheduler optimizes the order of the remaining jobs to be run, also considering the location of the data to be accessed. Several jobs needing access to a dataset on a specific disk may be grouped together so that the grouped jobs are executed in succession, e.g., to prevent activating and deactivating the storage device several times, thus improving the power consumption and also avoiding input/output performance degradation. | 05-05-2011 |
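At its simplest, the grouping idea in 20110107341 reduces to batching jobs by the disk their data lives on, once dependencies are already satisfied. A minimal sketch with invented job and disk names:

```python
from itertools import groupby

# Jobs annotated with the location of the data they access.
jobs = [("job1", "diskA"), ("job2", "diskB"),
        ("job3", "diskA"), ("job4", "diskB"), ("job5", "diskA")]

# Sort by data location, then emit per-disk batches to run in succession.
ordered = sorted(jobs, key=lambda j: j[1])
for disk, batch in groupby(ordered, key=lambda j: j[1]):
    names = [name for name, _ in batch]
    print(f"{disk}: run {names} in succession")
```

Batching this way means each storage device is activated and deactivated once per batch rather than once per job, which is the power saving the abstract describes.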
20110113429 | INCIDENT MANAGEMENT METHOD AND OPERATION MANAGEMENT SERVER - An operation management server includes an incident-job relation specifying unit that is responsive to the occurrence of an incident generated in a business system and refers to the incident table relating the incident to hosts, and to the job group definition table from a job management server, in order to specify the job and job group to be executed by the host on which the incident is generated; a job execution estimation unit for specifying the job to be reexecuted due to the occurrence of the incident and the unexecuted job in the job group; and an impact on job execution calculation unit for determining the impact on job execution, which is the influence of the incident on the business system, by relating the incident to the specified job. | 05-12-2011 |
20110113430 | INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND COMPUTER READABLE MEDIUM STORING PROGRAM - An information processing system includes a setting unit, an obtaining unit, a calculating unit, a display information generating unit, and an updating unit. The setting unit sets activity schedule information indicating an activity schedule of a user in an evaluation target period on the basis of an activity which is selected from among plural activities. The obtaining unit obtains activity information specifying the activity that has been performed by at least one point of time within the evaluation target period. The calculating unit calculates a total environmental load value of the activity in the evaluation target period. The display information generating unit generates display information including the total environmental load value and a target value of an environmental load. The updating unit updates the activity scheduled in the activity schedule information. | 05-12-2011 |
20110119672 | Multi-Core System on Chip - A multi-core system on a chip ( | 05-19-2011 |
20110119673 | CROSS-CHANNEL NETWORK OPERATION OFFLOADING FOR COLLECTIVE OPERATIONS - A Network Interface (NI) includes a host interface, which is configured to receive from a host processor of a node one or more cross-channel work requests that are derived from an operation to be executed by the node. The NI includes a plurality of work queues for carrying out transport channels to one or more peer nodes over a network. The NI further includes control circuitry, which is configured to accept the cross-channel work requests via the host interface, and to execute the cross-channel work requests using the work queues by controlling an advance of at least a given work queue according to an advancing condition, which depends on a completion status of one or more other work queues, so as to carry out the operation. | 05-19-2011 |
20110126200 | Scheduling for functional units on simultaneous multi-threaded processors - A method and system for scheduling threads on simultaneous multithreaded processors are disclosed. Hardware and operating system communicate with one another providing information relating to thread attributes for threads executing on processing elements. The operating system determines thread scheduling based on the information. | 05-26-2011 |
20110126201 | Event Processing Networks - A hybrid event processing network (EPN) having at least one event processing agent (EPA) consists of a first set of EPAs defined declaratively and a second set of EPAs defined dynamically at runtime via an interface. Deploying the hybrid EPN includes loading the hybrid EPN, constructing an EPN structure, and creating indexes of nodes of the EPN structure. Deploying the hybrid EPN further includes representing an event in a hybrid EPN, and, in response to the event occurrence at an event source, receiving a notification from the hybrid EPN based on the event, and publishing the notification in an event channel. Embodiments of the invention include propagating the event received within the hybrid EPN, determining a subsequent EPA associated with the event within the hybrid EPN, and propagating the event to the subsequent EPA in the hybrid EPN until the last element of the hybrid EPN is reached. | 05-26-2011 |
20110126202 | Thread folding tool - A computer-implemented method of performing runtime analysis on and control of a multithreaded computer program. One embodiment of the present invention can include identifying threads of a computer program to be analyzed. Under control of a supervisor thread, a plurality of the identified threads can be folded together to be executed as a folded thread. The execution of the folded thread can be monitored to determine a status of the identified threads. An indicator corresponding to the determined status of the identified threads can be presented in a user interface that is presented on a display. | 05-26-2011 |
20110126203 | Efficient Input/Output-Aware Multi-Processor Virtual Machine Scheduling - Computerized methods, computer systems, and computer-readable media for governing how virtual processors are scheduled to particular logical processors are provided. A scheduler is employed to balance a CPU-intensive workload imposed by virtual machines, each having a plurality of virtual processors supported by a root partition, across various logical processors that are running threads and input/output (I/O) operations in parallel. Upon measuring a frequency of the I/O operations performed by a logical processor that is mapped to the root partition, a hardware-interrupt rate is calculated as a function of the frequency. The hardware-interrupt rate is compared against a predetermined threshold rate to determine a level of an I/O-intensive workload being presently carried out by the logical processor. When the hardware-interrupt rate surpasses the predetermined threshold rate, the scheduler refrains from allocating time slices on the logical processor to the virtual machines. | 05-26-2011 |
20110131580 | MANAGING TASK EXECUTION ON ACCELERATORS - Execution of tasks on accelerator units is managed. The managing includes multi-level grouping of tasks into groups based on defined criteria, including start time of tasks and/or deadline of tasks. The task groups and possibly individual tasks are mapped to accelerator units to be executed. During execution, redistribution of a task group and/or an individual task may occur to optimize a defined energy profile. | 06-02-2011 |
20110131581 | Scheduling Virtual Interfaces - A mechanism is provided for scheduling virtual interfaces having at least one virtual interface scheduler, a virtual interface context cache and a pipeline with a number of processing units. The virtual interface scheduler is configured to send a lock request for a respective virtual interface to the virtual interface context cache. The virtual interface context cache is configured to lock a virtual interface context of the respective virtual interface and to send a lock token to the virtual interface scheduler in dependence on said lock request. The virtual interface context cache is also configured to hold a current lock token for the respective virtual interface context and to unlock the virtual interface context if a lock token of an unlock request received from the pipeline matches the held current lock token. | 06-02-2011 |
20110138391 | CONTINUOUS OPTIMIZATION OF ARCHIVE MANAGEMENT SCHEDULING BY USE OF INTEGRATED CONTENT-RESOURCE ANALYTIC MODEL - A system and associated method for continuously optimizing data archive management scheduling. A job scheduler receives, from an archive management system, inputs of task information, replica placement data, infrastructure topology data, and resource performance data. The job scheduler models a flow network that represents data content, software programs, physical devices, and communication capacity of the archive management system in various levels of vertices according to the received inputs. An optimal path in the modeled flow network is computed as an initial schedule, and the archive management system performs tasks according to the initial schedule. The operations of scheduled tasks are monitored, and the job scheduler produces a new schedule based on feedback from the monitored operations and predefined heuristics. | 06-09-2011 |
20110138392 | OPERATING METHOD FOR A COMPUTER WITH PERFORMANCE OPTIMIZATION BY GROUPING APPLICATIONS - In at least one embodiment, if the pre-start level has the value empty container, the computer creates a container within the framework of the pre-start but does not load any application into the container. If the pre-start level has the value application, the computer creates a respective container within the framework of the pre-start for each application. If the pre-start level has a higher value, the computer determines within the framework of the pre-start a degree of grouping for the applications assigned to the respective pre-started unit, and groups the applications in accordance with the determined degree of grouping into at least one container group. Within the framework of the processing of the complex tasks, on switching from one application to another, the computer terminates the application still being executed, but only if that application is one not able to be suspended. | 06-09-2011 |
20110145826 | MECHANISM FOR PARTITIONING PROGRAM TREES INTO ENVIRONMENTS - Partitioning continuation based runtime programs. Embodiments may include differentiating activities of a continuation based runtime program between public children activities and implementation children activities. The continuation based runtime program is partitioned into visibility spaces. The visibility spaces have boundaries based on implementation children activities. The continuation based runtime program is partially processed at a visibility space granularity. | 06-16-2011 |
20110145827 | MAINTAINING A COUNT FOR LOCK-FREE LINKED LIST STRUCTURES - The present invention extends to methods, systems, and computer program products for maintaining a count for lock-free stack access. A numeric value representative of the total count of nodes in a linked list is maintained at the head node for the linked list. Commands for pushing and popping nodes appropriately update the total count at a new head node when nodes are added to and removed from the linked list. Thus, determining the count of nodes in a linked list is an order 1 (or O(1)) operation, and remains constant even when the size of a linked list changes. | 06-16-2011 |
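The structure in 20110145827 stores the running total in each head node, so the count travels with the head as the head changes. A sketch of the bookkeeping only — a real lock-free version would swing the head with a compare-and-swap, which Python cannot express, so atomicity is omitted here:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    value: object
    count: int                  # total nodes from this node to the tail
    next: Optional["Node"] = None

head: Optional[Node] = None

def push(value):
    global head
    new_count = (head.count if head else 0) + 1
    head = Node(value, new_count, head)   # the count rides on the new head

def pop():
    global head
    if head is None:
        raise IndexError("empty")
    # head.next already carries count - 1, by the same invariant.
    value, head = head.value, head.next
    return value

def size() -> int:
    return head.count if head else 0      # O(1): no traversal needed

push("a"); push("b")
print(size())   # 2
pop()
print(size())   # 1
```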
20110145828 | STREAM DATA PROCESSING APPARATUS AND METHOD - A stream data processing apparatus creates a plurality of partition data on the basis of stream data, and distributes the partition data to a plurality of computers. Specifically, the stream data processing apparatus acquires from the stream data a data element group whose number of data elements is based on the processing capability of the partition data destination computer, and decides an auxiliary data part of this data element group based on a predetermined value. The stream data processing apparatus creates partition data that include the acquired data element group and END data. The data element group is composed of the auxiliary data part and a result usage data part. | 06-16-2011 |
20110154343 | SYSTEM, METHOD, PROGRAM, AND CODE GENERATION UNIT - A system for parallel processing tasks by allocating the use of exclusive locks to process critical sections of a task. The system includes storing update information that is updated in response to acquisition and release of an exclusive lock. When processing a task which includes a critical section containing code affecting execution of the other task, an exclusive execution unit acquires an exclusive lock prior to processing the critical section. When the section has been processed successfully, the lock is released and the update information updated. Meanwhile, a second task, whose critical section does not contain code affecting execution of the other task, may run in parallel, without acquiring an exclusive lock, via a nonexclusive execution unit. The nonexclusive execution unit determines that the second critical section has successfully completed if the update information has not changed during processing of the second critical section. | 06-23-2011 |
20110154344 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR DEBUGGING A SYSTEM - A system, computer program and a method for debugging a system, the method includes: controlling, by a debugger, an execution flow of a processing entity; setting, by the debugger or the processing entity, a value of a scheduler control variable accessible by the scheduler; wherein the debugger is prevented from directly controlling an execution flow of a scheduler; and determining, by the scheduler, an execution flow of the scheduler in response to a value of the scheduler control variable. | 06-23-2011 |
20110161960 | PROGRESS-DRIVEN PROGRESS INFORMATION IN A SERVICE-ORIENTED ARCHITECTURE - A system may include reception of the first instruction, execution of the business process in a first software work process, reception, during execution of the business process, of an indication of a business object process associated with the business process, determination of progress information associated with the business process based on the indication of the business object process, and storage of the progress information within a memory. Aspects may further include reception, at a second work process, of a request from the client application for progress information, retrieval of the progress information from the shared memory and provision of the progress information to the client application. | 06-30-2011 |
20110161961 | METHOD AND APPARATUS FOR OPTIMIZED INFORMATION TRANSMISSION USING DEDICATED THREADS - An approach is provided for optimized information transmission using dedicated threads. A thread manager receives a request from a device for content information. The thread manager assigns the request to a worker thread for processing to generate the content information. The thread manager further determines whether the worker thread has completed the processing of the content information. The thread manager delegates the processed content information to a transmission thread based, at least in part, on the determination, wherein the transmission thread causes, at least in part, transfer of the processed content information. The thread manager releases the worker thread from the assigned request. | 06-30-2011 |
20110161962 | DATAFLOW COMPONENT SCHEDULING USING READER/WRITER SEMANTICS - The scheduling of dataflow components in a dataflow network. A number, if not all, of the dataflow components are created using a domain/agent model. A scheduler identifies, for a number of the components, a creation source for the given component. The scheduler also identifies an appropriate domain-level access permission (and potentially also an appropriate agent-level access permission) for the given component based on the creation source of the given component. Tokens may be used at the domain or agent level to control access. | 06-30-2011 |
20110161963 | METHODS, APPARATUSES, AND COMPUTER PROGRAM PRODUCTS FOR GENERATING A CYCLOSTATIONARY EXTENSION FOR SCHEDULING OF PERIODIC SOFTWARE TASKS - An apparatus for generating a cyclostationary extension for scheduling periodic software tasks may include a processor and a memory storing executable computer program code that causes the apparatus to at least perform operations including determining a time period including time periods associated with one or more radios. Each of the radios may include algorithms that are executable during respective time intervals of the time period. The computer program code may cause the apparatus to cyclically repeat each of the algorithms a number of times for the duration of the time period. In this regard, the algorithms may be executable a plurality of times during the time period. The computer program code may cause the apparatus to determine whether the algorithms are assignable to processors for execution during the respective time intervals based at least in part on a value. Corresponding computer program products and methods are also provided. | 06-30-2011 |
20110161964 | Utility-Optimized Scheduling of Time-Sensitive Tasks in a Resource-Constrained Environment - Systems and methods implementing utility-maximized scheduling of time-sensitive tasks in a resource-constrained environment are described herein. Some embodiments include a method for utility-optimized scheduling of computer system tasks performed by a processor of a first computer system that includes determining a time window including a candidate schedule of a new task to be executed on a second computer system, identifying other tasks scheduled to be executed on the second computer system within said time window, and identifying candidate schedules that each specify the execution times for at least one of the tasks (which include the new task and the other tasks). The method further includes calculating an overall utility for each candidate schedule based upon a task utility calculated for each of the tasks when scheduled according to each corresponding candidate schedule, and queuing the new task for execution according to a preferred schedule with the highest overall utility. | 06-30-2011 |
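The selection step in 20110161964 — score each candidate schedule by the sum of per-task utilities and queue the new task according to the best one — admits a short worked example. The linear-decay utility function below is an assumption for illustration, not the publication's definition:

```python
# A task keeps full value if it finishes by its deadline and loses value
# linearly afterwards (an assumed utility shape, for illustration only).
def task_utility(deadline, start, duration, value=1.0):
    finish = start + duration
    if finish <= deadline:
        return value
    return max(0.0, value - 0.1 * (finish - deadline))

# Each candidate schedule maps task name -> (start, duration, deadline).
candidates = [
    {"new": (0, 4, 5), "old": (4, 2, 6)},
    {"new": (2, 4, 5), "old": (0, 2, 6)},
]

def overall_utility(schedule):
    return sum(task_utility(d, s, dur) for s, dur, d in schedule.values())

best = max(candidates, key=overall_utility)
print("preferred schedule:", best, "utility:", overall_utility(best))
# The first candidate wins (2.0 vs 1.9): both tasks meet their deadlines.
```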
20110161965 | JOB ALLOCATION METHOD AND APPARATUS FOR A MULTI-CORE PROCESSOR - A method and apparatus for performing pipeline processing in a computing system having multiple cores are provided. To pipeline process an application in parallel and in a time-sliced fashion, the application may be divided into two or more stages and executed stage by stage. A multi-core processor including multiple cores may collect correlation information between the stages and allocate additional jobs to the cores based on the collected information. | 06-30-2011 |
20110161966 | Controlling parallel execution of plural simulation programs - A non-transitory recording medium has a scheduler program embodied therein for controlling parallel execution of plural simulation programs, the scheduler program causing a computer to perform a parallel execution procedure by which the plural simulation programs are performed in parallel during a period in which there is no data exchange between the plural simulation programs, and a sequential execution procedure by which the plural simulation programs are sequentially performed during a period in which there is data exchange between the plural simulation programs. | 06-30-2011 |
20110161967 | INFORMATION PROCESSING APPARATUS, METHOD FOR CONTROLLING SAME, AND STORAGE MEDIUM - When an instruction to change the job execution limit information is issued, a policy server determines whether or not the changed job execution limit information indicates that the execution of the job by the job execution unit is not limited. When the changed job execution limit information indicates that the execution of the job is not limited, and the setting is made such that the job history information for the job is recorded on the image processing apparatus, the policy server sets the changed job execution limit information on the image processing apparatus. | 06-30-2011 |
20110161968 | Performing Zone-Based Workload Scheduling According To Environmental Conditions - To perform zone-based workload scheduling according to environmental conditions in a system having electronic devices, indicators of cooling efficiencies of the electronic devices in corresponding zones are aggregated to form aggregated indicators for respective zones, where the zones include respective subsets of electronic devices. Workload is assigned to the electronic devices according to the aggregated indicators. | 06-30-2011 |
20110167423 | Intelligent Keying Center Workflow Optimization - A system and method for an intelligent keying center workflow optimization is disclosed. In accordance with one embodiment of the present disclosure, a method comprises receiving a plurality of work units and determining one or more item attributes associated with each of the work units. The method also includes selecting one of the plurality of work units to process. The method further includes determining one or more agent attributes associated with each of a plurality of agents. Additionally, the method includes selecting, with a workflow manager, an agent from the plurality of agents to process the selected work unit, based at least in part on the determined item attributes associated with each of the received work units and the determined one or more agent attributes. The method also includes transmitting the selected work unit to the selected agent. | 07-07-2011 |
20110167424 | INTEGRATED DIFFERENTIATED OPERATION THROTTLING - A method and system for throttling a plurality of operations of a plurality of applications that share a plurality of resources. A difference between observed and predicted workloads is computed. If the difference does not exceed a threshold, a multi-strategy finder operates in normal mode and applies a recursive greedy pruning process with a look-back and look-forward optimization to select actions for a final schedule of actions that improve the utility of a data storage system. If the difference exceeds the threshold, the multi-strategy finder operates in unexpected mode and applies a defensive action selection process to select actions for the final schedule. The selected actions are performed according to the final schedule and include throttling of a CPU, network, and/or storage. | 07-07-2011 |
20110167425 | INSTRUMENT-BASED DISTRIBUTED COMPUTING SYSTEMS - An instrument-based distributed computing system is disclosed that accelerates the measurement, analysis, verification and validation of data in a distributed computing environment. A large body of computing work can be performed in a distributed fashion using the instrument-based distributed system. The instrument-based distributed system may include a client that creates a job. The job may include one or more tasks. The client may distribute a portion of the job to one or more remote workers on a network. The client may reside in an instrument. One or more workers may also reside in instruments. The workers execute the received portion of the job and may return execution results to the client. As such, the present invention allows the use of an instrument-based distributed system on a network to conduct the job and facilitates decreasing the time for executing the job. | 07-07-2011 |
20110167426 | SMART SCHEDULER - A smart scheduler is provided to prepare a machine for a job, wherein the job has specific requirements, i.e., dimensions. One or more config jobs are identified to configure the machine to meet the dimensions of the job. Information concerning the machine's original configuration and groupings of config jobs that change the machine's configuration are cached in a central storage. The smart scheduler uses information in the central storage to identify a suitable machine and one or more config jobs to configure the machine to meet the dimensions of a job. The smart scheduler schedules a run for the config jobs on the machine. | 07-07-2011 |
20110173620 | Execution Context Control - A system and method for controlling the execution of notifications in a computer system with multiple notification contexts. A RunOn operator enables context hopping between notification contexts. Push-based stream operators optionally perform error checking to determine if notifications combined into a push-based stream share a common notification context. Context boxes group together notification creators and associate their notifications with a common scheduler and notification context. Operators employ a composition architecture, in which they receive one or more push-based streams and produce a transformed push-based stream that may be further operated upon. Components may be used in combinations to implement various policies, including a strict policy in which all notifications are scheduled in a common execution context, a permissive policy that provides programming flexibility, and a hybrid policy that combines flexibility with error checking. | 07-14-2011 |
20110173621 | PUSH-BASED OPERATORS FOR PROCESSING OF PUSH-BASED NOTIFICATIONS - A library of operators is provided for performing operations on push-based streams. The library may be implemented in a computing device. The library may be stored on a tangible machine-readable medium and may include instructions to be executed by one or more processors of a computing device. The library of operators may include groups of operators for performing various types of operations regarding push-based streams. The groups of operators may include, but not be limited to, standard sequence operators, other sequence operators, time operators, push-based operators, asynchronous operators, exception operators, functional operators, context operators, and event-specific operators. | 07-14-2011 |
20110173622 | System and method for dynamic task migration on multiprocessor system - A multiprocessor system and a migration method of the multiprocessor system are provided. The multiprocessor system may process dynamic data and static data of a task to be operated in another memory or another processor without converting pointers, in a distributed memory environment and in a multiprocessor environment having a local memory, so that dynamic task migration may be realized. | 07-14-2011 |
20110173623 | DATA PROCESSING APPARATUS, DATA PROCESSING METHOD, STORAGE MEDIUM, AND DATA PROCESSING SYSTEM - A data processing apparatus that makes it possible for a user of a data processing apparatus to recognize whether or not descriptive contents of process definition tickets are executable on the data processing apparatus. Process definition tickets in which sequential processing flows for realizing functions are described are obtained, and it is determined whether or not the descriptive contents of the process definition tickets are executable on the data processing apparatus. A list of the process definition tickets whose descriptive contents have been determined as being executable on the data processing apparatus as a result of the determination is displayed in a manner being identifiable by the user. The user selects the process definition ticket whose descriptive contents are executable on the data processing apparatus from the list of the displayed process definition tickets, and the selection is received. The descriptive contents of the received process definition ticket are executed. | 07-14-2011 |
20110173624 | Process Integrated Mechanism Apparatus and Program - A method and apparatus for controlling and coordinating a multi-component system. Each component in the system contains a computing device. Each computing device is controlled by software running on the computing device. A first portion of the software resident on each computing device is used to control operations needed to coordinate the activities of all the components in the system. This first portion is known as a “coordinating process.” A second portion of the software resident on each computing device is used to control local processes (local activities) specific to that component. Each component in the system is capable of hosting and running the coordinating process. The coordinating process continually cycles from component to component while it is running. The continuous cycling of the coordinating process presents the programmer with a virtual machine in which there is a single coordinating process operating with a global view although, in fact, the data and computation remain distributed across every component in the system. | 07-14-2011 |
20110179420 | Computer System and Method of Operation Thereof - Within a computer system, any server process will typically serve client requests either to actually use the system resource to which the server process relates (e.g., a file server process relates to the file system, stored typically on a hard disk or the like, and provides read/write access thereto) or to respond to a client request for information on one or more properties of the system resource. For example, a client may request a file server process to report back the amount of spare capacity in the file storage system. However, if no client processes are currently requesting actual use of the resource, then there will be no changes in the system resource which will require notification in any event. Therefore, in the case that all the client programs connected to the server are connected for notification services only, rather than access services, the server will not in fact be used, and moreover no changes to the system resource will need to be notified (because there has been no use of the resource to cause any changes). In this case, therefore, the access server can be unloaded from main or higher memory, thus providing savings in memory and CPU execution cycles. | 07-21-2011 |
20110179421 | ENERGY EFFICIENT INTER-SUBSYSTEM COMMUNICATION - Control of communication in a data communication system of at least two subsystems is presented. Transfer of data from a transmitting subsystem to a receiving subsystem is scheduled. The scheduling comprises determining at least one of a plurality of transfer conditions, including a level of activity of each subsystem, a point in time when each subsystem is scheduled to be active, a time limit for receiving data in the receiving subsystem, an amount of data the receiving subsystem needs, and a maximum amount of outstanding data in transfer between said subsystems. Data is then transferred from the transmitting subsystem to the receiving subsystem in dependence on at least the determined transfer conditions, the transfer being subject to a delay that depends on the determined at least one transfer condition. | 07-21-2011 |
20110185361 | Interdependent Task Management - An illustrative embodiment of a computer-implemented process for interdependent task management selects a task from an execution task dependency chain to form a selected task, wherein a type selected from a set of types including “forAll,” “runOnce” and none is associated with the selected task and determines whether there is a “forAll” task. Responsive to a determination that there is no “forAll” task, determines whether there is a “runOnce” task and responsive to a determination that there is a “runOnce” task further determines whether there is a semaphore for the selected task. Responsive to a determination that there is a semaphore for the selected task, the computer-implemented process determines whether the semaphore is “on” for the selected task and responsive to a determination that the semaphore is “on,” sets the semaphore “off” and executes the selected task. | 07-28-2011 |
20110191775 | ARRAY-BASED THREAD COUNTDOWN - The forking of thread operations. At runtime, a task is identified as being divided into multiple subtasks to be accomplished by multiple threads (i.e., forked threads). In order to be able to verify when the forked threads have completed their task, multiple counter memory locations are set up and updated as forked threads complete. The multiple counter memory locations are evaluated in the aggregate to determine whether all of the forked threads are completed. Once the forked threads are determined to be completed, a join operation may be performed. Rather than a single memory location, multiple memory locations are used to account for thread completion. This reduces risk of thread contention. | 08-04-2011 |
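The countdown idea in 20110191775 replaces one contended counter with one memory location per forked thread, evaluated in the aggregate at join time. A minimal sketch, with the caveat that CPython's interpreter lock hides the contention the real mechanism is designed to avoid:

```python
import threading
import time

NUM_THREADS = 8
# One counter memory location per forked thread, instead of a single
# shared countdown that every thread would have to update.
pending = [1] * NUM_THREADS

def worker(slot):
    time.sleep(0.01 * slot)   # stand-in for this thread's share of the task
    pending[slot] = 0         # mark this thread complete (its own slot only)

for i in range(NUM_THREADS):
    threading.Thread(target=worker, args=(i,), daemon=True).start()

# The join operation evaluates the counters in the aggregate rather than
# watching one hot location. (CPython list-element writes suffice here;
# a native implementation would use per-slot atomics.)
while sum(pending) != 0:
    time.sleep(0.001)
print("all forked threads completed; performing join")
```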
20110191776 | LOW OVERHEAD DYNAMIC THERMAL MANAGEMENT IN MANY-CORE CLUSTER ARCHITECTURE - A semiconductor chip includes a plurality of multi-core clusters each including a plurality of cores and a cluster controller unit. Each cluster controller unit is configured to control thread assignment within the multi-core cluster to which it belongs. The cluster controller unit monitors various parameters measured in the plurality of cores within the multi-core cluster to estimate the computational demand of each thread that runs in the cores. The cluster controller unit may reassign the threads within the multi-core cluster based on the estimated computational demand of the threads and transmit a signal to an upper-level software manager that controls the thread assignment across the semiconductor chip. When an acceptable solution to thread assignment cannot be achieved by shuffling of threads within the multi-core cluster, the cluster controller unit may also report inability to solve thread assignment to the upper-level software manager to request a system level solution. | 08-04-2011 |
20110191777 | Method and Apparatus for Scheduling Data Backups - An apparatus and computer-executed method for scheduling data backups may include accessing a specification for a backup job. The specification may include an identification of a data source, a start time and a target storage device to which backup data should be written. A first history of past backup jobs that specify the data source, and a second history of past backup jobs that specify the target storage device, may be identified. Using the first history, an expected size of the backup data may be computed. Using the second history, an expected rate at which the backup data may be written to the target storage device may be computed. Using the expected size, the expected rate and the start time, an expected completion time for the backup job may be computed. | 08-04-2011 |
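The arithmetic in 20110191777 is straightforward: expected size from the data source's backup history, expected rate from the target device's history, and expected completion from the start time plus size over rate. A worked example with made-up history values:

```python
from statistics import mean

# Hypothetical histories (all numbers invented for illustration).
source_history_sizes_gb = [120, 130, 126]    # past backups of this source
device_history_rates_gbph = [40, 44, 42]     # past write rates to the target

expected_size = mean(source_history_sizes_gb)      # ~125.3 GB
expected_rate = mean(device_history_rates_gbph)    # ~42.0 GB/hour

start_hour = 22.0                                  # 10 PM start time
expected_completion = start_hour + expected_size / expected_rate
print(f"expected completion: hour {expected_completion:.2f} "
      f"({expected_size:.1f} GB at {expected_rate:.1f} GB/h)")
```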
20110191778 | COMPUTER PROGRAM, METHOD, AND APPARATUS FOR GROUPING TASKS INTO SERIES - In an apparatus for generating a series, a task discrimination unit identifies a task executed on a first device and tasks executed on a second device, on the basis of messages exchanged between those devices. A memory stores models defining caller-callee relationships between caller tasks on the first device and callee tasks on the second device. A series grouping unit produces a series of tasks from a callee-eligible sequence of tasks executed on the second device during a processing time of the identified task on the first device. The series grouping unit achieves this by selecting one of the models that defines the identified task on the first device as a caller task and extracting a portion of the callee-eligible sequence that matches at least in part with the callee tasks defined in the selected model while excluding therefrom the tasks that cannot be the callee tasks. | 08-04-2011 |
20110191779 | RECORDING MEDIUM STORING THEREIN JOB SCHEDULING PROGRAM, JOB SCHEDULING APPARATUS, AND JOB SCHEDULING METHOD - A job scheduling apparatus determines an assignment order, which is the order in which jobs are assigned to a computational resource, on the basis of priority levels associated with the jobs. The apparatus assigns the jobs to the computational resource on the basis of the assignment order. The apparatus reduces the priority levels of jobs that have been assigned to the computational resource, and increases the priority levels with time. For a given job, if, at a future time that is a fixed time period from the start of execution of the jobs, the expected increase in the job's priority level is equal to or larger than the reduction in its priority level, assignment of the job to the computational resource is executed. | 08-04-2011 |
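The priority mechanics in 20110191779 — reduce a job's priority on assignment, let it recover with time, and reassign once the projected recovery covers the reduction — can be sketched briefly. All constants below are illustrative assumptions, not values from the publication:

```python
REDUCTION_ON_ASSIGN = 10.0   # priority lost when a job is assigned (assumed)
RECOVERY_PER_SEC = 2.0       # priority regained per second (assumed)

class Job:
    def __init__(self, name, priority):
        self.name, self.priority = name, priority

    def assign(self):
        self.priority -= REDUCTION_ON_ASSIGN   # reduce after assignment

    def expected_increase(self, look_ahead_sec):
        return RECOVERY_PER_SEC * look_ahead_sec

job = Job("j1", priority=15.0)
job.assign()                                   # priority drops to 5.0
LOOK_AHEAD = 5.0                               # fixed period from job start
# Reassign only if the expected increase covers the reduction.
if job.expected_increase(LOOK_AHEAD) >= REDUCTION_ON_ASSIGN:
    print(f"assign {job.name} again")          # 10.0 >= 10.0 -> assigned
```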
20110191780 | METHOD AND APPARATUS FOR DECOMPOSING I/O TASKS IN A RAID SYSTEM - A data access request to a file system is decomposed into a plurality of lower-level I/O tasks. A logical combination of physical storage components is represented as a hierarchical set of objects. A parent I/O task is generated from a first object in response to the data access request. A child I/O task is generated from a second object to implement a portion of the parent I/O task. The parent I/O task is suspended until the child I/O task completes. The child I/O task is executed in response to an occurrence of an event that a resource required by the child I/O task is available. The parent I/O task is resumed upon an event indicating completion of the child I/O task. Scheduling of any child I/O task is not conditional on execution of the parent I/O task, and a state diagram regulates the child I/O tasks. | 08-04-2011 |
20110197195 | THREAD MIGRATION TO IMPROVE POWER EFFICIENCY IN A PARALLEL PROCESSING ENVIRONMENT - A method and system to selectively move one or more of a plurality of threads which are executing in parallel on a plurality of processing cores. In one embodiment, a thread may be moved from executing in one of the plurality of processing cores to executing in another of the plurality of processing cores, the moving based on a performance characteristic associated with the plurality of threads. In another embodiment of the invention, a power state of the plurality of processing cores may be changed to improve a power efficiency associated with the executing of the multiple threads. | 08-11-2011 |
20110209152 | METHOD AND SYSTEM FOR SCHEDULING PERIODIC PROCESSES - A method of scheduling periodic processes for execution in an electronic system, in particular in a network, in a data processor or in a communication device, wherein the electronic system includes a controller for performing the scheduling, wherein a number of N processes P | 08-25-2011 |
20110209153 | SCHEDULE DECISION DEVICE, PARALLEL EXECUTION DEVICE, SCHEDULE DECISION METHOD, AND PROGRAM - A schedule decision method acquires dependencies of execution sequences required for a plurality of sub tasks into which a first task has been divided; generates a plurality of sub task structure candidates that satisfy said dependencies and for which a plurality of processing devices execute said plurality of sub tasks; generates a plurality of schedule candidates by further assigning at least one second task to each of said sub task structure candidates; computes an effective degree that represents effectiveness of executions of said first task and said second task for each of said plurality of schedule candidates; and decides a schedule candidate used for the executions of said first task and said second task from said plurality of schedule candidates based on said effective degrees. | 08-25-2011 |
20110214127 | Strongly-Ordered Processor with Early Store Retirement - In one embodiment, a processor comprises a retire unit and a load/store unit coupled thereto. The retire unit is configured to retire a first store memory operation responsive to the first store memory operation having been processed at least to a pipeline stage at which exceptions are reported for the first store memory operation. The load/store unit comprises a queue having a first entry assigned to the first store memory operation. The load/store unit is configured to retain the first store memory operation in the first entry subsequent to retirement of the first store memory operation if the first store memory operation is not complete. The queue may have multiple entries, and more than one store may be retained in the queue after being retired by the retire unit. | 09-01-2011 |
20110214128 | ONE-TIME INITIALIZATION - Aspects of the present invention are directed at providing safe and efficient ways for a program to perform a one-time initialization of a data item in a multi-threaded environment. In accordance with one embodiment, a method is provided that allows a program to perform a synchronized initialization of a data item that may be accessed by multiple threads. More specifically, the method includes receiving a request to initialize the data item from a current thread. In response to receiving the request, the method determines whether the current thread is the first thread to attempt to initialize the data item. If the current thread is the first thread to attempt to initialize the data item, the method enforces mutual exclusion and blocks other attempts to initialize the data item made by concurrent threads. Then, the current thread is allowed to execute program code provided by the program to initialize the data item. | 09-01-2011 |
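The pattern in 20110214128 is the classic synchronized one-time initialization: the first thread to arrive runs the program-supplied code under mutual exclusion while concurrent threads block, and later callers take a fast path. A minimal sketch, not the publication's implementation; all names are invented:

```python
import threading

_lock = threading.Lock()
_initialized = False
_data = None

def get_data(init_fn):
    """Initialize _data exactly once, via program-provided init_fn."""
    global _initialized, _data
    if _initialized:              # fast path once initialization is done
        return _data
    with _lock:                   # first thread wins; others block here
        if not _initialized:      # re-check: a concurrent thread may have won
            _data = init_fn()     # program-provided initialization code
            _initialized = True
    return _data

print(get_data(lambda: {"config": 42}))
```

The unlocked fast path is safe under CPython's interpreter lock; native double-checked implementations additionally need memory barriers.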
20110219377 | DYNAMIC THREAD POOL MANAGEMENT - Dynamically managing a thread pool associated with a plurality of sub-applications. A request for at least one of the sub-applications is received. A quantity of threads currently assigned to the at least one of the sub-applications is determined. The determined quantity of threads is compared to a predefined maximum thread threshold. A thread in the thread pool is assigned to handle the received request if the determined quantity of threads is not greater than the predefined maximum thread threshold. Embodiments enable control of the quantity of threads within the thread pool assigned to each of the sub-applications. Further embodiments manage the threads for the sub-applications based on latency of the sub-applications. | 09-08-2011 |
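The admission rule in 20110219377 — assign a pool thread to a sub-application's request only while that sub-application's assigned-thread count has not exceeded its maximum threshold — is a short check. A sketch with illustrative per-sub-application caps:

```python
from collections import defaultdict

MAX_THREADS = {"search": 4, "checkout": 2}   # per-sub-application thresholds
assigned = defaultdict(int)                  # threads currently assigned

def try_assign(sub_app: str) -> bool:
    if assigned[sub_app] >= MAX_THREADS[sub_app]:
        return False                 # over the threshold: request must wait
    assigned[sub_app] += 1           # hand a pool thread to this request
    return True

def release(sub_app: str) -> None:
    assigned[sub_app] -= 1           # thread returns to the pool

for _ in range(3):
    print("checkout:", try_assign("checkout"))   # True, True, False
```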
20110219378 | ITERATIVE DATA PARALLEL OPPORTUNISTIC WORK STEALING SCHEDULER - The scheduling of a group of work units across multiple computerized worker processes. A group of work units is defined and assigned to a first worker. The worker uses the definition of the group of work units to determine when processing is completed on the group of work units. Stealing workers may steal work from the first worker, and steal from the group of work initially assigned to the first worker, by altering the definition of the group of work units assigned to the first worker. The altered definition results in the first worker never completing a subset of the work units originally assigned to the first worker, thereby allowing the stealing worker to complete work on that subset of work units. The process may be performed recursively, in that the stealing worker may have some of its work stolen in the same way. | 09-08-2011 |
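Stealing "by altering the definition of the group" in 20110219378 can be modeled with a half-open index range: the owner consumes from the front, and a thief shrinks the upper bound so the owner simply never reaches the stolen suffix. A sketch using a lock where a real scheduler would use atomics; the names are invented:

```python
import threading

class WorkGroup:
    """A group of work units defined as the half-open range [lo, hi)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self.lock = threading.Lock()

    def take_front(self):
        # Owner's pop: the owner is done when lo reaches hi.
        with self.lock:
            if self.lo >= self.hi:
                return None
            self.lo += 1
            return self.lo - 1

    def steal_half(self):
        # Thief alters the group definition: the owner will never
        # complete the stolen suffix, because hi has moved down.
        with self.lock:
            n = self.hi - self.lo
            if n < 2:
                return None
            mid = self.lo + n // 2
            stolen = (mid, self.hi)
            self.hi = mid
            return stolen

g = WorkGroup(0, 8)
print(g.take_front())   # 0 (owner takes one unit)
print(g.steal_half())   # (4, 8) stolen; the owner's group is now [1, 4)
```

A stolen range can itself be wrapped in a `WorkGroup` and stolen from again, which is the recursive case the abstract mentions.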
20110219379 | ONE-TIME INITIALIZATION - Aspects of the present invention are directed at providing safe and efficient ways for a program to perform a one-time initialization of a data item in a multi-threaded environment. In accordance with one embodiment, a method is provided that allows a program to perform a synchronized initialization of a data item that may be accessed by multiple threads. More specifically, the method includes receiving a request to initialize the data item from a current thread. In response to receiving the request, the method determines whether the current thread is the first thread to attempt to initialize the data item. If the current thread is the first thread to attempt to initialize the data item, the method enforces mutual exclusion and blocks other attempts to initialize the data item made by concurrent threads. Then, the current thread is allowed to execute program code provided by the program to initialize the data item. | 09-08-2011 |
20110225587 | DUAL MODE READER WRITER LOCK - A method, system, and computer usable program product for a dual mode reader writer lock. A contention condition in using an original lock is determined. The original lock manages read and write access to a resource by several processes executing in the data processing system. The embodiment creates a set of expanded locks for use in conjunction with the original lock, the original lock and the set of expanded locks forming the dual mode reader writer lock, which operates to manage the read and write access to the resource. Using an index within the original lock, each expanded lock is indexed such that each expanded lock is locatable using the index. The contention condition is resolved by distributing requests for acquiring and releasing the read access and write access to the resource by the several processes across the original lock and the set of expanded locks. | 09-15-2011 |
20110225588 | REDUCING DATA READ LATENCY IN A NETWORK COMMUNICATIONS PROCESSOR ARCHITECTURE - Described embodiments provide address translation for data stored in at least one shared memory of a network processor. A processing module of the network processor generates tasks corresponding to each of a plurality of received packets. A packet classifier generates contexts for each task, each context associated with a thread of instructions to apply to the corresponding packet. A first subset of instructions is stored in a tree memory within the at least one shared memory. A second subset of instructions is stored in a cache within a multi-thread engine of the packet classifier. The multi-thread engine maintains status indicators corresponding to the first and second subsets of instructions within the cache and the tree memory and, based on the status indicators, accesses a lookup table while processing a thread to translate between an instruction number and a physical address of the instruction in the first and second subset of instructions. | 09-15-2011 |
20110225589 | EXCEPTION DETECTION AND THREAD RESCHEDULING IN A MULTI-CORE, MULTI-THREAD NETWORK PROCESSOR - Described embodiments provide a packet classifier of a network processor having a plurality of processing modules. A scheduler generates a thread of contexts for each task generated by the network processor corresponding to each received packet. The thread corresponds to an order of instructions applied to the corresponding packet. A multi-thread instruction engine processes the threads of instructions. A function bus interface inspects instructions received from the multi-thread instruction engine for one or more exception conditions. If the function bus interface detects an exception, the function bus interface reports the exception to the scheduler and the multi-thread instruction engine. The scheduler reschedules the thread corresponding to the instruction having the exception for processing in the multi-thread instruction engine. Otherwise, the function bus interface provides the instruction to a corresponding destination processing module of the network processor. | 09-15-2011 |
20110231849 | Optimizing Workflow Engines - Techniques for implementing a workflow are provided. The techniques include merging a workflow to create a virtual graph, wherein the workflow comprises two or more directed acyclic graphs (DAGs), mapping each of one or more nodes of the virtual graph to one or more physical nodes, and using a message passing scheme to implement a computation via the one or more physical nodes. | 09-22-2011 |
20110231850 | BLOCK-BASED TRANSMISSION SCHEDULING METHODS AND SYSTEMS - Block-based transmission scheduling methods and systems are provided. First, a plurality of packets corresponding to at least one data flow is received. The packets of the data flow are accumulated to form a data block. Then, the data block of the data flow is scheduled and transmitted according to a transmission scheduling algorithm based on the unit of a block. In some embodiments, when the length of the accumulated data block is equal to or greater than a predefined or dynamically calculated block length threshold, the data block is scheduled and transmitted according to the transmission scheduling algorithm. In some embodiments, when the current time is equal to a specific time point derived from a dynamically calculated or a fixed time duration, the data block is scheduled and transmitted according to the transmission scheduling algorithm. | 09-22-2011 |
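The accumulate-then-schedule behavior of 20110231850 is easy to sketch: buffer each flow's packets until the accumulated length crosses a block threshold, then hand the whole block to the block-level scheduler. The threshold value and flow names below are illustrative:

```python
from collections import defaultdict

BLOCK_THRESHOLD = 1500       # bytes; could also be computed dynamically
buffers = defaultdict(list)
buffered_len = defaultdict(int)

def schedule_block(flow: str, block: bytes):
    # Stand-in for the block-based transmission scheduling algorithm.
    print(f"scheduling {len(block)}-byte block for flow {flow}")

def on_packet(flow: str, payload: bytes):
    buffers[flow].append(payload)
    buffered_len[flow] += len(payload)
    if buffered_len[flow] >= BLOCK_THRESHOLD:
        block = b"".join(buffers[flow])      # the unit of scheduling
        buffers[flow].clear()
        buffered_len[flow] = 0
        schedule_block(flow, block)

for _ in range(4):
    on_packet("flowA", b"x" * 500)   # third packet crosses the threshold
```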
20110231851 | ROLE-BASED MODERNIZATION OF LEGACY APPLICATIONS - Methods, systems, and techniques for role-based modernization of legacy applications are provided. Example embodiments provide a Role-Based Modernization System (“RBMS”), which enables the reorganization of (menu-based) legacy applications by role as a method of modernization and enables user access to such modernized applications through roles. In addition the RBMS supports the ability to enhance such legacy applications by blending them with non-legacy tasks and functions in a user-transparent fashion. In one embodiment, the RBMS comprises a client-side javascript display and control module and a java applet host interface and a server-side emulation control services module. These components cooperate to uniformly present legacy and non-legacy tasks that have been reorganized according to role modernization techniques. | 09-22-2011 |
20110231852 | METHOD AND SYSTEM FOR SCHEDULING MEDIA EXPORTS - Methods, systems and software components are described for exporting media in a library according to a schedule. At a first time, a user provides and a system receives export identification data, including data identifying one or more media from the library to be exported and data identifying a second time at which the one or more media is scheduled to be exported. The first data may be a list of media identified by media identifiers and related data, or may be a set of one or more criteria which are evaluated to determine which media in the library should be exported at the scheduled time. The export identification data is stored in a relational database table. At the second, scheduled time, the stored export identification data is used to select the one or more media to be exported and to export the selected media from the library. | 09-22-2011 |
20110239218 | METHOD AND SYSTEM OF LAZY OUT-OF-ORDER SCHEDULING - A method and system to schedule out-of-order operations without the requirement to execute compare, ready and pick logic in a single cycle. A lazy out-of-order scheduler splits each scheduling loop into two consecutive cycles. The scheduling loop includes a compare stage, a ready stage and a pick stage. The compare stage and the ready stage are executed in the first of the two consecutive cycles, and the pick stage is executed in the second of the two consecutive cycles. Splitting each scheduling loop into two consecutive cycles, selecting the oldest operation by default, and checking the readiness of the oldest operation relieves the system of tight timing requirements and avoids the need for power-hungry logic. Execution of an operation does not appear one extra cycle longer, and the lazy out-of-order scheduler retains most of the performance of a full out-of-order scheduler. | 09-29-2011 |
20110246994 | SCHEDULING HETEROGENEOUS PARTITIONED RESOURCES WITH SHARING CONSTRAINTS - A system and method that provide an automated solution to obtaining quality scheduling for users of computing resources. The system, implemented in an enterprise software test center, collects information from test-shop personnel about test machine features and availability, test jobs, and tester preferences and constraints. The system reformulates this testing information as a system of constraints. An optimizing scheduling engine computes efficient schedules whereby all the jobs are feasibly scheduled while satisfying the users' time preferences to the greatest extent possible. The method and system achieve fairness: if not all preferences can be met, violations of preferences are distributed as evenly as possible across the users. The test schedule is generated in two stages: the first applies a greedy algorithm that finds an initial feasible assignment of jobs; the second applies a local search algorithm that improves the initial greedy solution. | 10-06-2011 |
20110252427 | MODELING AND SCHEDULING ASYNCHRONOUS INCREMENTAL WORKFLOWS - Disclosed are methods and apparatus for scheduling an asynchronous workflow having a plurality of processing paths. In one embodiment, one or more predefined constraint metrics that constrain temporal asynchrony for one or more portions of the workflow may be received or provided. Input data is periodically received or intermediate or output data is generated for one or more of the processing paths of the workflow, via one or more operators, based on a scheduler process. One or more of the processing paths for generating the intermediate or output data are dynamically selected based on received input data or generated intermediate or output data and the one or more constraint metrics. The selected one or more processing paths of the workflow are then executed so that each selected processing path generates intermediate or output data for the workflow. | 10-13-2011 |
20110258631 | MANAGEMENT APPARATUS FOR MANAGING NETWORK DEVICES, CONTROL METHOD THEREOF, AND RECORDING MEDIUM - A control method including acquiring and storing, when generating a task in which an object and a network device to which to transmit the object are set, information about the object to be processed in the task; detecting, when executing the task, whether information about the object to be processed in the task is changed from the information about the object stored when the task is generated, according to a setting of the task or the object to be processed in the task; cancelling, when it is detected that there is a change in the information about the object, execution of the task; and transmitting, when it is detected that there is no change in the information about the object, the object processed in the task by executing the task. | 10-20-2011 |
20110265086 | USER AND DEVICE LOCALIZATION USING PROBABILISTIC DEVICE LOG TRILATERATION - A system and method of localizing elements (shared devices and/or their users) in a device infrastructure, such as a printing network, are provided. The method includes mapping a structure in which the elements of a device infrastructure are located, the elements comprising shared devices and users of the shared devices. Probable locations of fewer than all of the elements in the structure are mapped, with at least some of the elements being initially assigned to an unknown location. Usage logs for a plurality of the shared devices are acquired. The acquired usage log for each device includes a user identifier for each of a set of uses of the device, each of the uses being initiated from a respective location within the mapped structure by one of the users. Based on the acquired usage logs and the input probable locations of some of the elements, locations of at least some of the elements initially assigned to an unknown location are predicted. The prediction is based on a model which infers that for each of a plurality of the users, a usage of at least some of the shared devices by the user is a function of respective distances between the user and each of those devices. | 10-27-2011 |
20110265087 | Apparatus, method, and computer program product for solution provisioning - In one embodiment, an apparatus for solution provisioning includes a task manager configured to establish a provisioning task and obtain a provisioning image for the provisioning task in response to a request, and a provisioning implementer configured to execute and monitor the provisioning task established by the task manager. The task manager configures and launches the provisioning implementer based on the provisioning image obtained, and the provisioning image includes configuration information and scripts used for executing installation, and information for mapping the configuration information to the scripts. In another embodiment, a method includes establishing a provisioning task in response to a received solution provisioning request, obtaining a provisioning image for the provisioning task, configuring and launching a provisioning implementer based on the obtained provisioning image, and executing and monitoring the provisioning task using the provisioning implementer. Other systems, methods, and computer program products are described according to other embodiments. | 10-27-2011 |
20110265088 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR DYNAMICALLY INCREASING RESOURCES UTILIZED FOR PROCESSING TASKS - Mechanisms and methods are provided for dynamically increasing resources utilized for processing tasks. These mechanisms and methods for dynamically increasing resources utilized for processing tasks can enable embodiments to adjust processing power utilized for task processing. Further, adjusting processing power can ensure that quality of service goals set for processing tasks are achieved. | 10-27-2011 |
20110271283 | ENERGY-AWARE JOB SCHEDULING FOR CLUSTER ENVIRONMENTS - A job scheduler can select a processor core operating frequency for a node in a cluster to perform a job based on energy usage and performance data. After a job request is received, an energy aware job scheduler accesses data that specifies energy usage and job performance metrics that correspond to the requested job and a plurality of processor core operating frequencies. A first of the plurality of processor core operating frequencies is selected that satisfies an energy usage criterion for performing the job based, at least in part, on the data that specifies energy usage and job performance metrics that correspond to the job. The job is assigned to be performed by a node in the cluster at the selected first of the plurality of processor core operating frequencies. | 11-03-2011 |
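Selecting an operating frequency against an energy/performance table can be sketched briefly. The profile numbers and the `pick_frequency` helper below are hypothetical; the point is simply to minimize energy (power × runtime) over the frequencies that still meet the job's performance bound:

```python
# Hypothetical per-frequency profile: frequency (GHz) -> (watts, runtime seconds)
PROFILE = {
    2.6: (95.0, 100.0),
    2.0: (70.0, 125.0),
    1.4: (50.0, 170.0),
}

def pick_frequency(profile, max_runtime):
    """Choose the lowest-energy frequency whose runtime meets the job's bound."""
    feasible = {f: (w, t) for f, (w, t) in profile.items() if t <= max_runtime}
    if not feasible:
        raise ValueError("no frequency satisfies the performance bound")
    # energy = power * runtime; minimize over the feasible frequencies
    return min(feasible, key=lambda f: feasible[f][0] * feasible[f][1])

print(pick_frequency(PROFILE, max_runtime=150.0))  # -> 2.0 (70 W * 125 s = 8750 J)
```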
20110271284 | Method of Simulating, Testing, and Debugging Concurrent Software Applications - Embodiments of a method of simulating, testing, and debugging of concurrent software applications are disclosed. Software code is executed by a simulator program that takes over some functions of an operating system. The simulator program according to various embodiments is capable of controlling thread spawning, preemption, operating system calls, interprocess communications, and signals. Notable advantages of the invention are its capability of testing uninstrumented user applications, independence of the high-level computer language of a user application, and machine instruction level granularity. The simulator is capable of obtaining outcomes of reproducible execution sequences, reproducing faulty behavior, and providing debugging information to a user. | 11-03-2011 |
20110276968 | EVENT DRIVEN CHANGE INJECTION AND DYNAMIC EXTENSIONS TO A BPEL PROCESS - An extensible process design provides an ability to dynamically inject changes into a running process instance, such as a BPEL instance. Using a combination of BPEL, rules and events, processes can be designed to allow flexibility in terms of adding new activities, removing or skipping activities and adding dependent activities. These changes do not require redeployment of the orchestration process and can affect the behavior of in-flight process instances. The extensible process design includes a main orchestration process, a set of task execution processes and a set of generic trigger processes. The design also includes a set of rules evaluated during execution of the tasks of the orchestration process. The design can further include three types of events: an initiate process event, a pre-task execution event and a post-task execution event. These events and rules can be used to alter the behavior of the main orchestration process at runtime. | 11-10-2011 |
20110276969 | LOCK REMOVAL FOR CONCURRENT PROGRAMS - A system and method are disclosed for removing locks from a concurrent program. A set of behaviors associated with a concurrent program are modeled as causality constraints. The causality constraints which preserve the behaviors of the concurrent program are identified. Having identified the behavior preserving causality constraints, the corresponding lock and unlock statements in the concurrent program are identified which enforce the identified causality constraints. All identified lock and unlock statements are retained, while all other lock and unlock statements are discarded. | 11-10-2011 |
20110276970 | PROGRAMMER PRODUCTIVITY THROUGH A HIGH LEVEL LANGUAGE FOR GAME DEVELOPMENT - An object of the present invention is to provide a system and a method for a C++ based extension of the parallel VSIPL++ API that consists of a basis of game engine related operations. The invention relates to a system | 11-10-2011 |
20110276971 | EXTENDING OPERATIONS OF AN APPLICATION IN A DATA PROCESSING SYSTEM - A method, an apparatus, and computer instructions are provided for extending operations of an application in a data processing system. A primary operation is executed. All extended operations of the primary operation are cached and pre- and post-operation identifiers are identified. For each pre-operation identifier, a pre-operation instance is created and executed. For each post-operation identifier, a post-operation instance is created and executed. | 11-10-2011 |
20110283284 | DISTRIBUTED BUSINESS PROCESS MANAGEMENT SYSTEM WITH LOCAL RESOURCE UTILIZATION - Systems and methods consistent with the invention may include providing an instance of a business process management suite in a sandbox of a web browser. The instance of the business process management suite may be based on an archive received from a web server. The business process management suite may be controlled using a graphical user interface in a browser. Providing a business process management suite may further include creating an instance of a database management system in the sandbox. The instance of the database management system may further store its data in the local memory of a client device. | 11-17-2011 |
20110283285 | Real Time Mission Planning - The different advantageous embodiments provide a system comprising a number of computers, a graphical user interface, first program code stored on the computer, and second program code stored on the computer. The graphical user interface is executed by a computer in the number of computers. The computer is configured to run the first program code to define a mission using a number of mission elements. The computer is configured to run the second program code to generate instructions for a number of assets to execute the mission and monitor the number of assets during execution of the mission. | 11-17-2011 |
20110289503 | EXTENSIBLE TASK SCHEDULER - A parallel execution runtime allows tasks to be executed concurrently in a runtime environment. The parallel execution runtime delegates the implementation of task queuing, dispatch, and thread management to one or more plug-in schedulers in a runtime environment of a computer system. The plug-in schedulers may be provided by user code or other suitable sources and include interfaces that operate in conjunction with the runtime. The runtime tracks the schedulers and maintains control of all aspects of the execution of tasks from user code including task initialization, task status, task waiting, task cancellation, task continuations, and task exception handling. | 11-24-2011 |
20110289504 | IMAGE PROCESSING APPARATUS - When a user inputs an image addition instruction from a UI input unit, a job registration unit registers a job corresponding to the instruction in a job list for each type of processing. When undo is input from the UI input unit, a current position pointer prepared for each type of processing returns to the immediately preceding job. When redo is input, the current position pointer moves to the immediately succeeding job. When a processing execution instruction is input, out of jobs registered in the job list, the job indicated by the current position pointer and preceding jobs are executed in a predetermined order. | 11-24-2011 |
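The current-position-pointer mechanism described here is essentially classic undo/redo over a command list. A minimal sketch, assuming (as is conventional, though the abstract does not say) that registering a new job after an undo discards the redo tail:

```python
class JobList:
    """Per-processing-type job list with an undo/redo position pointer."""
    def __init__(self):
        self.jobs = []
        self.pos = -1  # index of the current job; -1 means "before any job"

    def register(self, job):
        del self.jobs[self.pos + 1:]   # assumed: registering discards redo tail
        self.jobs.append(job)
        self.pos += 1

    def undo(self):
        if self.pos >= 0:
            self.pos -= 1              # pointer returns to the preceding job

    def redo(self):
        if self.pos < len(self.jobs) - 1:
            self.pos += 1              # pointer moves to the succeeding job

    def execute(self):
        # run the job at the pointer and all preceding jobs, in order
        for job in self.jobs[:self.pos + 1]:
            job()

jobs = JobList()
jobs.register(lambda: print("sharpen"))
jobs.register(lambda: print("blur"))
jobs.undo()                                 # current position back to "sharpen"
jobs.register(lambda: print("rotate"))      # replaces the undone "blur"
jobs.execute()                              # -> sharpen, rotate
```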
20110289505 | INFORMATION PROCESSING APPARATUS, CONTROL METHOD THEREOF, AND PROGRAM - The function restriction information of a designated flow executor is acquired. The acquired function restriction information is analyzed. An operation screen that identifiably displays process contents executable by the flow executor in association with setting target functions to be set in the flow is displayed on the basis of the analyzed function restriction information. Process contents of a setting target function to be set in the flow are selected on the basis of an operation in the operation screen. The flow of the flow executor is generated by combining the functions of the selected process contents. | 11-24-2011 |
20110296420 | METHOD AND SYSTEM FOR ANALYZING THE PERFORMANCE OF MULTI-THREADED APPLICATIONS - A method and system to provide an analysis model to determine the specific problem(s) of a multi-threaded application. In one embodiment of the invention, the multi-threaded application uses a plurality of threads for execution and each thread is assigned to a respective one of a plurality of states based on a current state of each thread. By doing so, the specific problem(s) of the multi-threaded application is determined based on the number of transitions among the plurality of states for each thread. In one embodiment of the invention, the analysis model uses worker threads transition counters or events to determine for each parallel region or algorithm of the multi-threaded application which problem has happened and how much it has affected the scalability of the parallel region or algorithm. | 12-01-2011 |
20110296421 | METHOD AND APPARATUS FOR EFFICIENT INTER-THREAD SYNCHRONIZATION FOR HELPER THREADS - A monitor bit per hardware thread in a memory location may be allocated, in a multiprocessing computer system having a plurality of hardware threads, the plurality of hardware threads sharing the memory location, and each of the allocated monitor bit corresponding to one of the plurality of hardware threads. A condition bit may be allocated for each of the plurality of hardware threads, the condition bit being allocated in each context of the plurality of hardware threads. In response to detecting the memory location being accessed, it is determined whether a monitor bit corresponding to a hardware thread in the memory location is set. In response to determining that the monitor bit corresponding to a hardware thread is set in the memory location, a condition bit corresponding to a thread accessing the memory location is set in the hardware thread's context. | 12-01-2011 |
20110296422 | Switch-Aware Parallel File System - Embodiments of the invention relate to a switch-aware parallel file system. A computing cluster is partitioned into a plurality of computing cluster building blocks comprising a parallel file system. Each computing cluster building block comprises a file system client, a storage module, a building block metadata module, and a building block network switch. The building block metadata module tracks a storage location of data allocated by the storage module within the computing cluster building block. The computing cluster further comprises a file system metadata module that tracks which of the plurality of computing cluster building blocks data is allocated among within the parallel file system. The computing cluster further comprises a file system network switch to provide the parallel file system with access to each of the plurality of computing cluster building blocks and the file system metadata module. At least one additional computing cluster building block is added to the computing cluster, if resource utilization of the computing cluster exceeds a pre-determined threshold. | 12-01-2011 |
20110296423 | FRAMEWORK FOR SCHEDULING MULTICORE PROCESSORS - A method, system, and computer usable program product for a framework for scheduling tasks in a multi-core processor or multiprocessor system are provided in the illustrative embodiments. A thread is selected according to an order in a scheduling discipline, the thread being a thread of an application executing in the data processing system, the thread forming the leader thread in a bundle of threads. A value of a core attribute in a set of core attributes is determined according to a corresponding thread attribute in a set of thread attributes associated with the leader thread. A determination is made whether a second thread can be added to the bundle such that the bundle including the second thread will satisfy a policy. If the determining is affirmative, the second thread is added to the bundle. The bundle is scheduled for execution using a core of the multi-core processor. | 12-01-2011 |
20110296424 | Synthesis of Memory Barriers - A framework is provided for automatic inference of memory fences in concurrent programs. A method is provided for generating a set of ordering constraints that prevent executions of a program violating a specification. One or more incoming avoidable transitions are identified for a state and one or more ordering constraints are refined for the state. The set of ordering constraints are generated by taking a conjunction of ordering constraints for all states that violate the specification. One or more fence locations can optionally be selected based on the generated set of ordering constraints. | 12-01-2011 |
20110296425 | Management apparatus, management system, and recording medium for recording management program - A management apparatus includes a job definition information storage section for storing, at each period, a job definition file including execution characteristics indicating the start condition of each job in the execution schedule, the estimated execution time of the job, the state of the job, and a restriction at the time of setting the start schedule of the job; an exclusion information storage section for storing exclusion definition information indicating the jobs to be executed exclusively from each other; a reset job specifying section for acquiring a first job definition file of a schedule to be executed and a second job definition file of an executed schedule, then extracting, as a reset job, an abnormally terminated job from the second job definition file, and extracting a job using the reset job and the issue message of the reset job as start conditions, to store the extracted jobs in a related job set table; an execution possible time zone calculating section for searching, as an execution possible time zone, a time zone enabling execution of the job stored in the related job set table, from the first job definition file based on the second job definition file and the exclusion definition information; a start schedule adjusting section for setting the start schedule of the job stored in the related job set table based on the execution possible time zone; and a start schedule time setting section for setting the start time of the job set in the first job definition file, based on the start schedule of the job stored in the related job set table. | 12-01-2011 |
20110296426 | METHOD AND APPARATUS HAVING RESISTANCE TO FORCED TERMINATION ATTACK ON MONITORING PROGRAM FOR MONITORING A PREDETERMINED RESOURCE - Exemplary embodiments include a method and system having resistance to a forced termination attack on a monitoring program for monitoring a predetermined resource. Aspects of the exemplary embodiment include a device that executes a predetermined process including a monitoring program that monitors a predetermined resource, wherein the predetermined process is a process for which the predetermined resource becomes unavailable in response to termination of the predetermined process; a program starting unit for starting the monitoring program in response to an execution of the predetermined process; and a terminator for terminating the predetermined process in the case where the monitoring program is forcibly terminated from the outside. | 12-01-2011 |
20110302582 | TASK ASSIGNMENT ON HETEROGENEOUS THREE-DIMENSIONAL/STACKED MICROARCHITECTURES - A method of enhancing performance of a three-dimensional microarchitecture includes determining a computational demand for performing a task, selecting an optimization criterion for the task, identifying at least one computational resource of the microarchitecture configured to meet the computational demand for performing the task, and calculating an evaluation criterion for the at least one computational resource based on the computational demand for performing the task. The evaluation criterion defines an ability of the computational resource to meet the optimization criterion. The method also includes assigning the task to the computational resource based on the evaluation criterion of the computational resource in order to proactively avoid creating a hot spot on the three-dimensional microarchitecture. | 12-08-2011 |
20110302583 | SYSTEMS AND METHODS FOR PROCESSING DATA - A system, method, and computer program product for processing data are disclosed. The system includes a data processing framework configured to receive a data processing task for processing, a plurality of database systems coupled to the data processing framework, wherein the database systems are configured to perform a data processing task, and a storage component in communication with the data processing framework and the plurality of database systems, configured to store information about each partition of the data processing task being processed by each database system and the data processing framework. The data processing task is configured to be partitioned into a plurality of partitions and each database system is configured to process a partition of the data processing task assigned for processing to that database system. Each database system is configured to perform processing of its assigned partition of the data processing task in parallel with another database system processing another partition of the data processing task assigned to that other database system. The data processing framework is configured to perform at least one partition of the data processing task. | 12-08-2011 |
20110302584 | SYNTHESIS OF CONCURRENT SCHEDULERS FOR MULTICORE ARCHITECTURES - Systems and methods provide a high-level language for generation of a scheduling specification based on a scheduling policy, and synthesis of scheduler based on the scheduling specification. The systems and methods can permit the use of more sophisticated scheduling strategies than those afforded by conventional systems, without requiring the programmer to write explicitly parallel code. In certain embodiments, synthesis of the scheduler includes implementation of at least one rule related to the scheduling specification through definition of one or more workset objects that are concurrent, a workset object of the one or more workset objects having an addition method, a first poll method, and a second poll method. Such poll methods extend the operability of sequential poll methods. The one or more worksets satisfy a condition for correctness that is less stringent than conventional conditions for correctness. | 12-08-2011 |
20110302585 | Techniques for Providing Improved Affinity Scheduling in a Multiprocessor Computer System - Techniques for controlling a thread on a computerized system having multiple processors involve accessing state information of a blocked thread, and maintaining the state information of the blocked thread at current values when the state information indicates that less than a predetermined amount of time has elapsed since the blocked thread ran on the computerized system. Such techniques further involve setting the state information of the blocked thread to identify affinity for a particular processor of the multiple processors when the state information indicates that at least the predetermined amount of time has elapsed since the blocked thread ran on the computerized system. Such operation enables the system to place a cold blocked thread which shares data with another thread on the same processor as that other thread so that, when the blocked thread awakens and runs, that thread is closer to the shared data. | 12-08-2011 |
20110307894 | Redundant Multithreading Processor - A redundant multithreading processor is presented. In one embodiment, the processor performs execution of a thread and its duplicate thread in parallel and determines, when in a redundant multithreading mode, whether or not to synchronize an operation of the thread and an operation of the duplicate thread. | 12-15-2011 |
20110307895 | Managing Requests Based on Request Groups - A request management component receives requests to perform an operation. Each of the requests is assigned, based on one or more criteria, to one of multiple different request groups. Based at least in part on execution policies associated with the request groups, determinations are made as to when to submit the requests to one or more recipients. Each of the multiple requests is submitted to one of the recipients when it is determined that the request is to be submitted. | 12-15-2011 |
20110307896 | Method and Apparatus for Scheduling Plural Tasks - A method is provided for scheduling a first task and a second task, wherein the first task is to be performed repeatedly with a predetermined first repetition time interval and the second task is to be performed repeatedly with a predetermined second repetition time interval. The method includes: scheduling the first task for performing the first task at first time points and scheduling the second task for performing the second task at second time points, wherein each of the second time points is different from any of the first time points. Further, an apparatus for scheduling a first task and a second task is provided. | 12-15-2011 |
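One way (not necessarily the patented one) to guarantee that the second task's time points never coincide with the first's is a phase offset: k1·p1 = off + k2·p2 has an integer solution only when gcd(p1, p2) divides off, so an offset of half the gcd can never collide. A sketch with integer periods:

```python
from math import gcd

def schedule_two_tasks(p1, p2, horizon):
    """Time points for two periodic tasks that are guaranteed never to coincide."""
    off = gcd(p1, p2) / 2                      # not a multiple of the gcd
    first = list(range(0, horizon, p1))
    second = [off + k * p2 for k in range(int((horizon - off) // p2) + 1)]
    assert not set(first) & set(second)        # disjoint by construction
    return first, second

print(schedule_two_tasks(4, 6, 30))
# first:  [0, 4, 8, 12, 16, 20, 24, 28]
# second: [1.0, 7.0, 13.0, 19.0, 25.0]
```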
20110307897 | DYNAMICALLY LOADING GRAPH-BASED COMPUTATIONS - Processing data includes: receiving units of work that each include one or more work elements, and processing a first unit of work using a first compiled dataflow graph ( | 12-15-2011 |
20110314473 | SYSTEM AND METHOD FOR GROUPING MULTIPLE PROCESSORS - A distributed multi-processor out-of-order system includes multiple processors, an arbiter, a data dispatcher, a memory controller, a storage unit, multiple memory access requests issued by the multiple processors, and multiple data units that provide the results of the multiple memory access requests. Each of the multiple memory access requests includes a tag that identifies the priority of the processor that issued the memory access request, a processor identification number that identifies the processor that issued the request, and a processor access sequence number that identifies the order that the particular one of the processors issued the request. Each of the data units also includes a tag that specifies the processor identification number, the processor access sequence number, and a data sequence number that identifies the order of the data units satisfying the corresponding one of the memory requests. Using the tags, a distributed arbiter and data dispatcher can execute the requests out-of-order, handle simultaneous memory requests, order the memory requests based on, for example, the priority, return the data units to the processor that requested them, and reassemble the data units. | 12-22-2011 |
20110314474 | HETEROGENEOUS JOB DASHBOARD - This disclosure provides a system and method for summarizing jobs for a user group. In one embodiment, a job manager is operable to identify a state of a first job, the first job associated with a first job scheduler. A state of a second job is identified. The second job is associated with a second job scheduler. The first job scheduler and the second job scheduler are heterogeneous. A summary of information associated with at least the first job scheduler and the second job scheduler is determined using, at least in part, the first job state and the second job state. The summary is presented to a user through a dashboard. | 12-22-2011 |
20110321048 | FACILITATING QUIESCE OPERATIONS WITHIN A LOGICALLY PARTITIONED COMPUTER SYSTEM - A facility is provided for processing to distinguish between a full conventional (or total system) quiesce request within a logically partitioned computer system, which requires all processors of the computer system to remain quiesced for the duration of the quiesce-related operation, and a new early-release conventional quiesce request, which is associated with fast-quiesce request utilization. In accordance with the facility, once all processors have quiesced responsive to a pending quiesce request sequence, the processors are allowed to block early-release conventional quiesce interrupts and to continue processing if there is no total system quiesce request in the pending quiesce request sequence. | 12-29-2011 |
20110321049 | Programmable Integrated Processor Blocks - An integrated processor block of a network on a chip is programmable to perform a first function. The integrated processor block includes an inbox to receive incoming packets from other integrated processor blocks of a network on a chip, an outbox to send outgoing packets to the other integrated processor blocks, an on-chip memory, and a memory management unit to enable access to the on-chip memory. | 12-29-2011 |
20110321050 | METHOD AND APPARATUS FOR PROVIDING SHARED SCHEDULING REQUEST RESOURCES - In accordance with one or more embodiments and corresponding disclosure thereof, various aspects are described in connection with providing shared scheduling request (SR) resources to devices for transmitting SRs. Identifiers related to the shared SR resources can be signaled to the devices along with indications of the shared SR resources in given time durations. Thus, devices can transmit an SR over shared SR resources related to one or more received identifiers for obtaining an uplink grant. This can decrease delay associated with receiving uplink grants since the device need not wait for dedicated SR resources before transmitting the SR. In addition, overhead can be decreased on control channels, as compared to signaling dedicated SR resources and/or uplink grants. Moreover, identifiers related to SR resources can correspond to a grouping of devices, such that a device can transmit over shared SR resources related to a group including the device. | 12-29-2011 |
20110321051 | TASK SCHEDULING BASED ON DEPENDENCIES AND RESOURCES - An example system identifies a set of tasks as being designated for execution, and the set of tasks includes a first task and a second task. The example system accesses task dependency data that corresponds to the second task and indicates that the first task is to be executed prior to the second task. The example system, based on the task dependency data, generates a task dependency model of the set of tasks. The dependency model indicates that the first task is to be executed prior to the second task. The example system schedules an execution of the first task, which is scheduled to use a particular data processing resource. The scheduling is based on the dependency model. | 12-29-2011 |
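Scheduling from a task dependency model reduces to ordering tasks topologically and then placing them on resources. A minimal sketch using Python's standard `graphlib` (3.9+); the `deps` table and the round-robin placement are illustrative assumptions, not the patented scheduling policy:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency data: task -> set of tasks that must execute first
deps = {
    "build": set(),
    "test": {"build"},
    "package": {"build"},
    "release": {"test", "package"},
}

def schedule(deps, resources):
    """Run tasks in dependency order, assigning each to a resource."""
    for i, task in enumerate(TopologicalSorter(deps).static_order()):
        resource = resources[i % len(resources)]  # naive round-robin placement
        print(f"run {task} on {resource}")

schedule(deps, ["worker-0", "worker-1"])
```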
20120005681 | ASSERTIONS-BASED OPTIMIZATIONS OF HARDWARE DESCRIPTION LANGUAGE COMPILATIONS - Methods and systems for assertion-based simulations of hardware description language are provided. A method may include reading hardware description models of one or more hardware circuits. The hardware description language models may be transformed into a program of instructions configured to, when executed by a processor: (a) assume assertions regarding the hardware description language models are true; (b) establish dependencies among processes of the program of instructions based on the assertions; and (c) dynamically schedule execution of the processes based on the established dependencies. | 01-05-2012 |
20120005682 | HOLISTIC TASK SCHEDULING FOR DISTRIBUTED COMPUTING - Embodiments of the present invention provide a method, system and computer program product for holistic task scheduling in a distributed computing environment. In an embodiment of the invention, a method for holistic task scheduling in a distributed computing environment is provided. The method includes selecting a first task for a first job and a second task for a different, second job, both jobs being scheduled for processing within a node of a distributed computing environment by a task scheduler executing in memory by at least one processor of a computer. The method also can include comparing an estimated time to complete the first and second jobs. Finally, the first task can be scheduled for processing in the node when the estimated time to complete the second job exceeds the estimated time to complete the first job. Otherwise the second task can be scheduled for processing in the node when the estimated time to complete the first job exceeds the estimated time to complete the second job. | 01-05-2012 |
20120017216 | DYNAMIC MACHINE-TO-MACHINE COMMUNICATIONS AND SCHEDULING - A method may include obtaining traffic loading and resource utilization information associated with a network for the network time domain; obtaining machine-to-machine resource requirements for machine-to-machine tasks using the network; receiving a target resource utilization value indicative of a network resource limit for the network time domain; calculating a probability for assigning each machine-to-machine task to the network time domain, wherein the probability is based on a difference between the target resource utilization value and the traffic loading and resource utilization associated with the network; calculating a probability density function based on an independent and identically distributed random variable; generating a schedule of execution of the machine-to-machine tasks within the network time domain based on the probabilities associated with the machine-to-machine tasks and the probability density function; and providing the schedule of execution of the machine-to-machine tasks. | 01-19-2012 |
20120017217 | MULTI-CORE PROCESSING SYSTEM AND COMPUTER READABLE RECORDING MEDIUM RECORDED THEREON A SCHEDULE MANAGEMENT PROGRAM - A multi-core processor system has a processing order manager which manages command blocks in a lock acquired state under exclusive control, an assigner which assigns a command block managed by the processing order manager to one of the processor cores, an exclusion manager which manages command blocks in a lock acquisition waiting state under the exclusive control, and a transfer controller which, when the command block in the lock acquisition waiting state managed by the exclusion manager gets into the lock acquired state, releases the command block from the exclusion manager, and registers the command block in the processing order manager, thereby efficiently processing tasks. | 01-19-2012 |
20120023497 | ELECTRONIC DEVICE WITH NETWORK ACCESS FUNCTION - An electronic device with network access function includes an input unit, a storage unit, a wireless network unit and a processing unit. The processing unit includes a scheduling module, a determining module, an accessing module and a downloading module. The scheduling module is configured to receive input from a user and schedule online tasks. The determining module is configured to determine when it is time to perform a scheduled task. The accessing module is configured to navigate to the location of the desired information according to the user input when it is time for the scheduled task, and the downloading module is configured to download the desired information according to the user input, and store the desired information in the storage unit. | 01-26-2012 |
20120023498 | LOCAL MESSAGING IN A SCHEDULING HIERARCHY IN A TRAFFIC MANAGER OF A NETWORK PROCESSOR - Described embodiments provide for queuing tasks in a scheduling hierarchy of a network processor. A traffic manager generates a tree scheduling hierarchy having a root scheduler and N scheduler levels. The network processor generates tasks corresponding to received packets. The traffic manager performs a task enqueue operation for the task. The task enqueue operation includes adding the received task to an associated queue of the scheduling hierarchy, where the queue is associated with a data flow of the received task. The queue has a corresponding scheduler level M, where M is a positive integer less than or equal to N. Starting at the queue and iteratively repeating at each scheduling level until reaching the root scheduler, each node in the scheduling hierarchy maintains an actual count of tasks corresponding to the node. Each node communicates a capped task count to a corresponding parent scheduler at a relative next scheduler level. | 01-26-2012 |
20120030680 | System and Method of General Service Management - A system and method are provided for servicing service management requests via a general service management framework that supports a plurality of platforms (for example, Windows®, UNIX®, Linux, Solaris™, and/or other platforms), and that manages local and/or remote machine services at system and/or application level. | 02-02-2012 |
20120030681 | HIGH PERFORMANCE LOCKS - Systems and methods of enhancing computing performance may provide for detecting a request to acquire a lock associated with a shared resource in a multi-threaded execution environment. A determination may be made as to whether to grant the request based on a context-based lock condition. In one example, the context-based lock condition includes a lock redundancy component and an execution context component. | 02-02-2012 |
20120036509 | APPARATUS AND METHODS TO CONCURRENTLY PERFORM PER-THREAD AS WELL AS PER-TAG MEMORY ACCESS SCHEDULING WITHIN A THREAD AND ACROSS TWO OR MORE THREADS - A method, apparatus, and system in which an integrated circuit comprises an initiator Intellectual Property (IP) core, a target IP core, an interconnect, and a tag and thread logic. The target IP core may include a memory coupled to the initiator IP core. Additionally, the interconnect can allow the integrated circuit to communicate transactions between one or more initiator Intellectual Property (IP) cores and one or more target IP cores coupled to the interconnect. A tag and thread logic can be configured to concurrently perform per-thread and per-tag memory access scheduling within a thread and across multiple threads such that the tag and thread logic manages tags and threads to allow for per-tag and per-thread scheduling of memory access requests from the initiator IP core out of order from an initial issue order of the memory access requests from the initiator IP core. | 02-09-2012 |
20120036510 | Method for Analysing the Real-Time Capability of a System - The invention provides a method for analysing the real-time capability of a system, in particular a computer system, in which various tasks are provided. The tasks are performed repeatedly, and an execution of a task is triggered by an activation of the task, which represents an event of the task. A plurality of descriptive elements are provided to describe the time correlation of the events as an event stream, where the event streams may capture the maximum time densities and/or the minimum time densities of the events. At least one further descriptive element is provided, to which a set of event streams is assigned and which describes the time correlation of the entirety of events captured by at least two event streams. | 02-09-2012 |
20120036511 | CONTROL DEVICE FOR DIE-SINKING ELECTRICAL DISCHARGE MACHINE - A program analyzing unit that extracts electrode numbers included in a plurality of processing programs, determines duplication of the electrode numbers among the processing programs to display a result of determination, and that stores correspondence between a revision electrode number that is specified by a user and an in-use electrode number that is used in the processing program for each of the processing programs and a program executing unit that executes each of the processing programs by reading the revision electrode number instead of the in-use electrode number used in each of the processing programs based on the stored correspondence at the time of execution of the processing programs are included, and duplication of the electrode numbers used among the programs is easily and certainly resolved. | 02-09-2012 |
20120042315 | METHOD AND SYSTEM FOR CONTROLLING A SCHEDULING ORDER PER CATEGORY IN A MUSIC SCHEDULING SYSTEM - A system and method for controlling a scheduling order per category is disclosed. A scheduling order can be designated for the delivery and playback of multimedia content (e.g., music, news, other audio, advertising, etc.) with respect to particular slots within the scheduling order. The scheduling order can be configured to include a forward order per category or a reverse order per category with respect to the playback of the multimedia content in order to control the scheduling order for the eventual airplay of the multimedia content over a radio station or network of associated radio stations. A reverse scheduling technique provides an ideal rotation of songs when a pre-programmed show interferes with a normal rotation. Any rotational compromises can be buried in off-peak audience listening hours of the programming day using the disclosed reverse scheduling technique. | 02-16-2012 |
20120042316 | METHOD AND SYSTEM FOR CONTROLLING A SCHEDULING ORDER PER DAYPART CATEGORY IN A MUSIC SCHEDULING SYSTEM - A system and method for controlling a scheduling order per category is disclosed. A scheduling order can be designated for the delivery and playback of multimedia content (e.g., music, news, other audio, advertising, etc.) with respect to particular slots within the scheduling order. The broadcast day is divided into dayparts according to specific time slots. The dayparts are assigned specific daypart categories in which multimedia is scheduled. The scheduling order can be configured to include a slotted by daypart scheduling technique to control the scheduling order for the eventual airplay of the multimedia content over a radio station or network of associated radio stations. | 02-16-2012 |
20120042317 | APPLICATION PRE-LAUNCH TO REDUCE USER INTERFACE LATENCY - A device stores a plurality of applications and a list of associations for those applications. The applications are preferably stored within a secondary memory of the device, and once launched each application is loaded into RAM. Each application is preferably associated to one or more of the other applications. Preferably, no applications are launched when the device is powered on. A user selects an application, which is then launched by the device, thereby loading the application from the secondary memory to RAM. Whenever an application is determined to be associated with a currently active state application, and that associated application has yet to be loaded from secondary memory to RAM, the associated application is pre-launched such that the associated application is loaded into RAM, but is set to an inactive state. | 02-16-2012 |
20120047507 | SELECTIVE CONSTANT COMPLEXITY DISMISSAL IN TASK SCHEDULING - Various embodiments for selective constant complexity dismissal in task scheduling of a plurality of tasks are provided. A strictly increasing function is implemented to generate a plurality of unique creation stamps, each of the plurality of unique creation stamps increasing over time pursuant to the strictly increasing function. A new task to be placed with the plurality of tasks is labeled with a new unique creation stamp of the plurality of unique creation stamps. Each rule in a list of dismissal rules holds a minimal valid creation (MVC) stamp, which is updated when a dismissal action for that rule is executed. The dismissal action acts to dismiss a selection of tasks over time due to continuous dispatch. | 02-23-2012 |
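The constant-complexity trick is that a dismissal only bumps a rule's MVC stamp; stale tasks are then filtered lazily as the queue is continuously dispatched, with no scan at dismissal time. A minimal sketch, with the `Rule` class and task layout as assumptions:

```python
import itertools

_stamp = itertools.count()               # strictly increasing creation stamps

class Rule:
    def __init__(self):
        self.mvc = 0                     # minimal valid creation (MVC) stamp

    def dismiss_older(self):
        """O(1) dismissal: every task stamped below the new MVC is invalid."""
        self.mvc = next(_stamp)

def new_task(payload):
    return {"stamp": next(_stamp), "payload": payload}

def is_live(task, rule):
    return task["stamp"] >= rule.mvc     # dead tasks are skipped at dispatch

rule = Rule()
t1 = new_task("a")
rule.dismiss_older()                     # constant time: no queue scan needed
t2 = new_task("b")
print(is_live(t1, rule), is_live(t2, rule))  # False True
```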
20120047508 | Resource Tracking Method and Apparatus - The present invention is directed to a parallel processing infrastructure, which enables the robust design of task scheduler(s) and communication primitive(s). This is achieved, in one embodiment of the present invention, by decomposing the general problem of exploiting parallelism into three parts. First, an infrastructure is provided to track resources. Second, a method is offered by which to expose the tracking of the aforementioned resources to task scheduler(s) and communication primitive(s). Third, a method is established by which task scheduler(s) in turn may enable and/or disable communication primitive(s). In this manner, an improved parallel processing infrastructure is provided. | 02-23-2012 |
20120060160 | COMPONENT-SPECIFIC DISCLAIMABLE LOCKS - Systems and methods of protecting a shared resource in a multi-threaded execution environment in which threads are permitted to transfer control between different software components, for any of which a disclaimable lock having a plurality of orderable locks can be identified. Back out activity can be tracked among a plurality of threads with respect to the disclaimable lock and the shared resource, and reclamation activity among the plurality of threads may be ordered with respect to the disclaimable lock and the shared resource. | 03-08-2012 |
20120060161 | UI FRAMEWORK DISPLAY SYSTEM AND METHOD - A technique of efficiently improving the processing speed and response time of a user interface (UI) framework in a multi-core environment is provided. According to the technique, it is possible to improve both the throughput and response time of a UI by causing a plurality of workers to process a frame display command. | 03-08-2012 |
20120060162 | SYSTEMS AND METHODS FOR PROVIDING A SENIOR LEADER APPROVAL PROCESS - Systems and methods of managing tasks within a customer relationship management system. A user with appropriate permissions who is assigned a task can create subtasks subordinate to the assigned task in order to delegate responsibility for completing the task. An owner of a task can seek input from other users by creating an approval route. A user interface is provided to display tasks assigned to a user in an approval route, and to allow a user to provide feedback on tasks assigned to them without having to sort through irrelevant information. | 03-08-2012 |
20120066683 | BALANCED THREAD CREATION AND TASK ALLOCATION - Methods for balancing thread creation and task scheduling are provided for predictable tasks. A list of tasks is sorted according to a predicted completion time for each task. Then tasks are assigned to threads in order of total predicted completion time, and the threads are scheduled to execute the tasks assigned to the threads on a processor. | 03-15-2012 |
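Sorting tasks by predicted completion time and repeatedly handing the next one to the least-loaded thread is the classic longest-processing-time (LPT) heuristic, which matches this abstract's description. A sketch with hypothetical task predictions:

```python
import heapq

def balance(tasks, n_threads):
    """Longest-predicted-first assignment to the least-loaded thread (LPT)."""
    heap = [(0.0, i, []) for i in range(n_threads)]  # (total predicted, id, tasks)
    heapq.heapify(heap)
    for name, predicted in sorted(tasks.items(), key=lambda kv: -kv[1]):
        total, tid, assigned = heapq.heappop(heap)   # least-loaded thread
        assigned.append(name)
        heapq.heappush(heap, (total + predicted, tid, assigned))
    return heap

threads = balance({"a": 9, "b": 5, "c": 4, "d": 4}, n_threads=2)
for total, tid, assigned in sorted(threads, key=lambda x: x[1]):
    print(f"thread {tid}: {assigned} ({total}s predicted)")
# thread 0: ['a', 'd'] (13.0s predicted)
# thread 1: ['b', 'c'] (9.0s predicted)
```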
20120066684 | CONTROL SERVER, VIRTUAL SERVER DISTRIBUTION METHOD - When plural virtual servers are distributed to plural physical servers, efficient distribution is performed in terms of the processing capacity of the physical servers and their power consumption. Firstly a second load of each virtual server in future is predicted based on a first load in a prescribed time period up to the present of each of the plural virtual servers. Next, the schedule is determined to distribute the plural virtual servers to the plural physical servers based on the second load of each virtual server so that a total of the second loads of one or a plurality of the virtual servers distributed to a physical server is within a prescribed range of proportion with respect to processing capacity of the physical server. Furthermore, the distribution is instructed (execution of redistribution) in accordance with the schedule. | 03-15-2012 |
20120066685 | SCHEDULING REALTIME INFORMATION STORAGE SYSTEM ACCESS REQUESTS - Access requests ( | 03-15-2012 |
20120072916 | FUTURE SYSTEM THAT CAN PARTICIPATE IN SYSTEMS MANAGEMENT ACTIVITIES UNTIL AN ACTUAL SYSTEM IS ON-LINE - Hardware configuration management is provided. A hardware configuration manager includes a proposed new hardware configuration item for an existing production environment and its hardware configuration management software. A detailed setup of the management of the proposed hardware configuration item is completed before the proposed hardware configuration item is available. The detailed setup includes at least configuring policies of the proposed hardware configuration item. The hardware configuration manager also comprises a device for preventing scheduled tasks from running until a predefined period following activation of a new hardware configuration item that has the completed detailed setup and the proposed hardware configuration item is mapped thereto. | 03-22-2012 |
20120079486 | INTEGRATION OF DISSIMILAR JOB TYPES INTO AN EARLIEST DEADLINE FIRST (EDF) SCHEDULE - A system for inserting jobs into a scheduler of a processor includes the processor and the scheduler. The processor executes instructions related to a plurality of jobs. The scheduler implements an earliest deadline first (EDF) scheduling model. The scheduler also receives a plurality of jobs from an EDF schedule. The scheduler also receives a separate job from a source other than the EDF schedule. The separate job has a fixed scheduling requirement. The separate job also may be a short duration sporadic job. The scheduler also inserts the separate job into an execution plan of the processor in response to a determination that an available utilization capacity of the processor is sufficient to execute the separate job according to the fixed scheduling requirement associated with the separate job. | 03-29-2012 |
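A utilization-based admission test is the standard way to decide whether a fixed-requirement job fits into an EDF plan: EDF can schedule a periodic task set whenever total utilization Σ(cᵢ/tᵢ) ≤ 1. The dict layout and `can_admit` helper below are assumptions for illustration, not the patented format:

```python
def can_admit(plan, job):
    """Admit a fixed-requirement job only if spare utilization covers it.
    Each job is a dict with worst-case execution time `c` and period `t`."""
    used = sum(j["c"] / j["t"] for j in plan)
    return used + job["c"] / job["t"] <= 1.0

plan = [{"c": 2, "t": 10}, {"c": 3, "t": 20}]   # utilization 0.35
sporadic = {"c": 1, "t": 4}                      # short sporadic job, adds 0.25
if can_admit(plan, sporadic):
    plan.append(sporadic)                        # insert into the execution plan
print(sorted(plan, key=lambda j: j["t"]))        # dispatch earliest deadline first
```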
20120079487 | Subscriber-Based Ticking Model for Platforms - A central manager receives tick subscription requests from subscribers, including a requested period and an allowable variance. The manager selects a group period for a group of requests, based on requested period(s) and allowable variance(s). In some cases, the group period is not a divisor of every requested period but nonetheless provides at least one tick within the allowable variance of each requested period. Ticks may be issued by invoking a callback function. Ticks may be issued in a priority order based on the subscriber's category, e.g., whether it is a user-interface process. An application platform may send a tick subscription request on behalf of an application process, e.g., a mobile device platform may submit subscription requests for processes which execute on a mobile computing device. Tick subscription requests may be sent during application execution, e.g., while the application's user interface is being built or modified. | 03-29-2012 |
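The interesting case here is a group period that divides none of the requested periods yet still lands a tick inside every subscriber's tolerance window. A brute-force sketch of that selection; the top-down search strategy is an assumption, since the abstract does not specify one:

```python
def group_period(requests):
    """requests: list of (requested_period, allowable_variance) pairs.
    A candidate period p satisfies a request when some positive multiple
    of p lands within [period - variance, period + variance]."""
    def satisfies(p, period, var):
        k = max(1, round(period / p))        # nearest positive multiple of p
        return abs(k * p - period) <= var

    for p in range(min(period for period, _ in requests), 0, -1):
        if all(satisfies(p, period, var) for period, var in requests):
            return p                          # coarsest workable group period

# 43 divides neither 90 nor 120, yet ticks at 86 and 129 fit both +/-10 windows.
print(group_period([(90, 10), (120, 10)]))    # -> 43
```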
20120079488 | Execute at commit state update instructions, apparatus, methods, and systems - An apparatus including an execution logic that includes circuitry to execute instructions, and an instruction execution scheduler logic coupled with the execution logic. The instruction execution scheduler logic is to receive an execute at commit state update instruction. The instruction execution scheduler logic includes at commit state update logic that is to wait to schedule the execute at commit state update instruction for execution until the execute at commit state update instruction is a next instruction to commit. Other apparatus, methods, and systems are also disclosed. | 03-29-2012 |
20120079489 | Method, Computer Readable Medium And System For Dynamic, Designer-Controllable And Role-Sensitive Multi-Level Properties For Taskflow Tasks Using Configuration With Persistence - The method includes determining whether or not a processing level of the task flow is an intermediate processing level. Generating a new property upon determining that the processing level is an intermediate processing level. Associating a relatively lower level property with the new property and publishing the new property to a relatively higher processing level in the task flow. | 03-29-2012 |
20120084782 | Method and Apparatus for Efficient Memory Replication for High Availability (HA) Protection of a Virtual Machine (VM) - High availability (HA) protection is provided for an executing virtual machine. At a checkpoint in the HA process, the active server suspends the virtual machine and copies dirty memory pages to a buffer. During the suspension of the virtual machine on the active host server, dirty memory pages are copied to a ring buffer. A copy process copies the dirty pages to a first location in the buffer. At a predetermined benchmark or threshold, a transmission process can begin. The transmission process can read data out of the buffer at a second location to send to the standby host. Both the copy and transmission processes can operate substantially simultaneously on the ring buffer. As such, the ring buffer cannot overflow because the transmission process continues to empty the ring buffer as the copy process continues. This arrangement allows for smaller buffers and prevents buffer overflows. | 04-05-2012 |
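The core arrangement, a copier filling a ring buffer while a sender concurrently drains it, can be sketched with a bounded buffer and a condition variable. The blocking back-pressure in `put` is a defensive assumption of this sketch; the abstract argues overflow is avoided because transmission keeps pace with copying:

```python
import threading
from collections import deque

class PageRing:
    """Bounded ring shared by the copy process (producer) and the
    transmission process (consumer); both operate on it concurrently."""
    def __init__(self, capacity):
        self.buf, self.capacity = deque(), capacity
        self.cv = threading.Condition()

    def put(self, page):                  # copy process writes dirty pages
        with self.cv:
            while len(self.buf) >= self.capacity:
                self.cv.wait()            # back-pressure rather than overflow
            self.buf.append(page)
            self.cv.notify_all()

    def get(self):                        # transmission process drains pages
        with self.cv:
            while not self.buf:
                self.cv.wait()
            page = self.buf.popleft()
            self.cv.notify_all()
            return page

ring = PageRing(capacity=4)
pages = [f"page-{i}" for i in range(10)]

def sender():
    for _ in pages:
        print("send", ring.get())

t = threading.Thread(target=sender)
t.start()
for p in pages:                           # copying proceeds while sending runs
    ring.put(p)
t.join()
```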
20120084783 | AUTOMATED OPERATION LIST GENERATION DEVICE, METHOD AND PROGRAM - Selection of operations in a desired order and, as necessary, input of processing parameters by the user are received. For each operation corresponding to the received input, operation information is obtained that classifies the operation in advance as either a non-routine operation, which requires input of a processing parameter during execution of an automated operation list, or a routine operation. Then, an automated operation list is generated based on the obtained operation information: if the operation corresponding to the input is a routine operation, it is registered in the automated operation list with any required processing parameter associated with it; if the operation corresponding to the input is a non-routine operation, it is registered in the automated operation list as-is. | 04-05-2012 |
20120096466 | METHOD, SYSTEM AND PROGRAM FOR DEADLINE CONSTRAINED TASK ADMISSION CONTROL AND SCHEDULING USING GENETIC APPROACH - Disclosed is an admission control and scheduling method for deadline-constrained tasks. The method comprises: buffering new arriving tasks into a waiting queue; pre-scheduling a new task and a previously admitted task; producing multiple pre-schedules; using the most feasible pre-schedule as an executive schedule; and dispatching the tasks in the executive schedule. | 04-19-2012 |
20120096467 | MICROPROCESSOR OPERATION MONITORING SYSTEM - A microprocessor operation monitoring system in which, for each of the tasks constituting the program, the task number of the task that is next to be started up is associated in advance; an abnormality of microprocessor operation is detected by comparing the announced task with the task that is actually started up and determining whether or not they match. | 04-19-2012 |
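The match check amounts to a simple sequence watchdog: each task announces its successor from a table built in advance, and a mismatch at startup signals abnormal operation. A sketch with a hypothetical `NEXT_TASK` table:

```python
# Hypothetical table built in advance: each task number -> its successor.
NEXT_TASK = {1: 2, 2: 3, 3: 1}

class SequenceMonitor:
    def __init__(self):
        self.expected = 1                 # the task announced to start next

    def on_task_start(self, task_no):
        if task_no != self.expected:
            raise RuntimeError(f"abnormal operation: expected task "
                               f"{self.expected}, got {task_no}")
        self.expected = NEXT_TASK[task_no]

mon = SequenceMonitor()
for t in (1, 2, 3, 1):
    mon.on_task_start(t)                  # in-order startups pass silently
try:
    mon.on_task_start(3)                  # out-of-order startup is detected
except RuntimeError as e:
    print(e)
```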
20120102494 | MANAGING NETWORKS AND MACHINES FOR AN ONLINE SERVICE - A cloud manager assists in deploying and managing networks for an online service. The cloud manager system receives requests to perform operations relating to configuring, updating and performing tasks in networks that are used in providing the online service. The management of the assets may comprise deploying machines, updating machines, removing machines, performing configuration changes on servers, Virtual Machines (VMs), as well as performing other tasks relating to the management. The cloud manager is configured to receive requests through an idempotent and asynchronous application programming interface (API) that does not rely on a reliable network. | 04-26-2012 |
20120102495 | RESOURCE MANAGEMENT IN A MULTI-OPERATING ENVIRONMENT - A method for providing user access to telephony operations in a multi-operating environment whose memory resources are nearly depleted includes determining whether a predetermined first memory threshold of a computing environment has been reached and displaying a user interface corresponding to memory usage, and determining whether a predetermined second memory threshold of the computing environment, greater than the first, has been reached. Restricting computing functionality while allowing user access to telephony operations, corresponding to a mobile device, when the second memory threshold is reached is included as well. Also included is maintaining the computing restriction until the memory usage returns below the second memory threshold. | 04-26-2012 |
20120102496 | RECONFIGURABLE PROCESSOR AND METHOD FOR PROCESSING A NESTED LOOP - A reconfigurable processor which merges an inner loop and an outer loop which are included in a nested loop and allocates the merged loop to processing elements in parallel, thereby reducing the time required to process the nested loop. The reconfigurable processor may extract loop execution frequency information from the inner loop and the outer loop of the nested loop, and may merge the inner loop and the outer loop based on the extracted loop execution frequency information. | 04-26-2012 |
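Merging (coalescing) a nested loop turns N outer × M inner iterations, where N and M come from the loop execution frequency information, into one flat loop of N·M cycles whose indices are recovered by division and modulo, so all N·M units can be spread across processing elements. A scalar sketch of the transformation:

```python
# Original nested loop: N outer iterations, M inner iterations.
N, M = 3, 4

def body(i, j):
    print(f"work({i},{j})")

# Merged (coalesced) form: one flat loop of N*M cycles.  The original
# indices are recovered as k // M and k % M, so the flat range can be
# partitioned across processing elements in parallel.
for k in range(N * M):
    body(k // M, k % M)
```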
20120110583 | DYNAMIC PARALLEL LOOPING IN PROCESS RUNTIME - Systems and methods for dynamic parallel looping in process runtime environment are described herein. A currently processed process-flow instance of a business process reaches a dynamic loop activity including a repetitive task to be executed with each loop cycle. A predefined expression is evaluated on top of the current data context of the process-flow instance to discover a number of loop cycles for execution within the dynamic loop activity. A number of parallel activities corresponding to the repetitive task recurrences are instantiated and executed in parallel. The results of the parallel activities are coordinated to confirm that the dynamic loop activity is completed. | 05-03-2012 |
20120110584 | SYSTEM AND METHOD OF ACTIVE RISK MANAGEMENT TO REDUCE JOB DE-SCHEDULING PROBABILITY IN COMPUTER CLUSTERS - Systems and methods are provided for generating backup tasks for a plurality of tasks scheduled to run in a computer cluster. Each scheduled task is associated with a target probability for execution, and is executable by a first cluster element and a second cluster element. The system classifies the scheduled tasks into groups based on resource requirements of each task. The system determines the number of backup tasks to be generated. The number of backup tasks is determined in a manner necessary to guarantee that the scheduled tasks satisfy the target probability for execution. The backup tasks are desirably identical for a given group, and each backup task can replace any scheduled task in the given group. | 05-03-2012 |
20120110585 | ENERGY CONSUMPTION OPTIMIZATION IN A DATA-PROCESSING SYSTEM - A method for optimizing energy consumption in a data-processing system comprising a set of data-processing units is disclosed. In one embodiment, such a method includes indicating a set of data-processing jobs to be executed on a data-processing system during a production period. An ambient temperature expected for each data-processing unit during the production period is estimated. The method calculates an execution scheme for the data-processing jobs on the data-processing system. The execution scheme optimizes the energy consumed by the data-processing system to execute the data-processing jobs based on the ambient temperature of the data-processing units. The method then executes the data-processing jobs on the data processing system according to the execution scheme. A corresponding apparatus and computer program product are also disclosed. | 05-03-2012 |
20120110586 | THREAD GROUP SCHEDULER FOR COMPUTING ON A PARALLEL THREAD PROCESSOR - A parallel thread processor executes thread groups belonging to multiple cooperative thread arrays (CTAs). At each cycle of the parallel thread processor, an instruction scheduler selects a thread group to be issued for execution during a subsequent cycle. The instruction scheduler selects a thread group to issue for execution by (i) identifying a pool of available thread groups, (ii) identifying a CTA that has the greatest seniority value, and (iii) selecting the thread group that has the greatest credit value from within the CTA with the greatest seniority value. | 05-03-2012 |
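The three-step pick is concrete enough to sketch directly. A toy Python version, where the seniority and credit bookkeeping is assumed rather than taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class ThreadGroup:
    cta_id: int   # cooperative thread array (CTA) this group belongs to
    credit: int   # per-group credit value

def pick_thread_group(available, cta_seniority):
    """(i) take the pool of available groups, (ii) restrict to the CTA
    with the greatest seniority, (iii) return the group with the
    greatest credit inside that CTA."""
    best_cta = max({g.cta_id for g in available}, key=lambda c: cta_seniority[c])
    return max((g for g in available if g.cta_id == best_cta),
               key=lambda g: g.credit)

groups = [ThreadGroup(0, 3), ThreadGroup(1, 5), ThreadGroup(1, 2)]
print(pick_thread_group(groups, {0: 7, 1: 4}))  # CTA 0 wins: ThreadGroup(cta_id=0, credit=3)
```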
20120117569 | TASK AUTOMATION FOR UNFORMATTED TASKS DETERMINED BY USER INTERFACE PRESENTATION FORMATS - Methods and systems are provided for web page task automation. In one embodiment, the method comprises the following steps: i) decomposing the high-level task into a sequence of anthropomimetic subroutines, ii) decomposing each subroutine into a series of anthropomimetic actions or steps, stored for example as unit shares of work, iii) generating, for each unit share of work, computer code to interact with the content of the web page, and iv) executing the generated computer code by a web interface module and transmitting the results of the execution, steps iii) and iv) being repeated until all steps of a subroutine have been executed and the sequence of subroutines for the logical task has been achieved. | 05-10-2012 |
20120117570 | INFORMATION PROCESSING APPARATUS, WORKFLOW MANAGEMENT SYSTEM, AND WORKFLOW EXECUTION METHOD - An information processing apparatus sequentially executing one or more processes of a workflow on an input document includes: a workflow-information storage unit storing workflow information; a result storage unit storing a process result; a workflow control unit receiving workflow identification information for identifying the workflow and acquiring workflow information from the workflow-information storage unit on the basis of the workflow identification information; and a result acquiring unit acquiring the process result from the result storage unit based on the result identification information when the workflow information acquired by the workflow control unit includes the result identification information. The workflow control unit acquires the process result from the result acquiring unit and transmits the process result to an apparatus that executes a process subsequent to a process corresponding to the process result in the workflow in order to execute the workflow from a process in the middle of the workflow. | 05-10-2012 |
20120124584 | Event-Based Orchestration in Distributed Order Orchestration System - A distributed order orchestration system is provided that includes an event manager configured to generate and publish a set of events based on a process state and metadata stored in a database. A set of subscribers can consume the set of events, and each subscriber can execute a task based on the consumed event. | 05-17-2012 |
20120124585 | Increasing Parallel Program Performance for Irregular Memory Access Problems with Virtual Data Partitioning and Hierarchical Collectives - A method for increasing performance of an operation on a distributed memory machine is provided. Asynchronous parallel steps in the operation are transformed into synchronous parallel steps. The synchronous parallel steps of the operation are rearranged to generate an altered operation that schedules memory accesses for increasing locality of reference. The altered operation that schedules memory accesses for increasing locality of reference is mapped onto the distributed memory machine. Then, the altered operation is executed on the distributed memory machine to simulate local memory accesses with virtual threads to check cache performance within each node of the distributed memory machine. | 05-17-2012 |
20120124586 | SCHEDULING SCHEME FOR LOAD/STORE OPERATIONS - A method and apparatus are provided to control the order of execution of load and store operations. Also provided is a computer readable storage device encoded with data for adapting a manufacturing facility to create the apparatus. One embodiment of the method includes determining whether a first group, comprising at least one or more instructions, is to be selected from a scheduling queue of a processor for execution using either a first execution mode or a second execution mode. The method also includes, responsive to determining that the first group is to be selected for execution using the second execution mode, preventing selection of the first group until a second group, comprising at least one or more instructions, that entered the scheduling queue prior to the first group is selected for execution. | 05-17-2012 |
20120124587 | THREAD SCHEDULING ON MULTIPROCESSOR SYSTEMS - A thread scheduler may be used in a chip multiprocessor or symmetric multiprocessor system to schedule threads to processors. The scheduler may determine the bandwidth utilization of the two threads in combination and whether that utilization exceeds the threshold value. If so, the threads may be scheduled on different processor clusters that do not have the same paths between the common memory and the processors. If not, then the threads may be allocated on the same processor cluster that shares cache among processors. | 05-17-2012 |
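The placement policy is a single threshold test on the threads' combined bandwidth utilization. A sketch under assumed units and names:

```python
def place_thread_pair(bw_a: float, bw_b: float, threshold: float) -> str:
    """Co-locate two threads when their combined memory bandwidth stays
    under the threshold (they then share a cluster and its cache);
    otherwise separate them onto clusters with disjoint memory paths."""
    if bw_a + bw_b > threshold:
        return "different clusters (disjoint paths to common memory)"
    return "same cluster (shared cache)"

# Units are illustrative, e.g. GB/s measured per thread.
print(place_thread_pair(3.2, 4.1, threshold=6.0))  # exceeds 6.0 -> separate clusters
```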
20120124588 | Generating Hardware Accelerators and Processor Offloads - System and method for generating hardware accelerators and processor offloads. System for hardware acceleration. System and method for implementing an asynchronous offload. Method of automatically creating a hardware accelerator. Computerized method for automatically creating a test harness for a hardware accelerator from a software program. System and method for interconnecting hardware accelerators and processors. System and method for interconnecting a processor and a hardware accelerator. Computer implemented method of generating a hardware circuit logic block design for a hardware accelerator automatically from software. Computer program and computer program product stored on tangible media implementing the methods and procedures of the invention. | 05-17-2012 |
20120131583 | ENHANCED BACKUP JOB SCHEDULING - Systems and methods of enhanced backup job scheduling are disclosed. An example method may include determining a number of jobs (n) in a backup set, determining a number of tape drives (m) in the backup device, and determining a number of concurrent disk agents (maxDA) configured for each tape drive. The method may also include defining a scheduling problem based on n, m, and maxDA. The method may also include solving the scheduling problem using an integer programming (IP) formulation to derive a bin-packing schedule that minimizes makespan (S) for the backup set. | 05-24-2012 |
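The abstract leaves the integer programming formulation to the claims. As a rough illustration of the makespan objective only, here is a greedy longest-processing-time (LPT) heuristic over m drives; it is explicitly not the patent's exact method, and its result on the example below is worse than the optimum an IP solver would find:

```python
import heapq

def lpt_schedule(job_durations, m_drives):
    """Sort jobs by descending duration and always hand the next job to
    the currently least-loaded tape drive; return the job->drive map
    and the resulting makespan."""
    drives = [(0.0, d) for d in range(m_drives)]   # (accumulated load, drive id)
    heapq.heapify(drives)
    assignment = {}
    for job, dur in sorted(enumerate(job_durations), key=lambda x: -x[1]):
        load, d = heapq.heappop(drives)
        assignment[job] = d
        heapq.heappush(drives, (load + dur, d))
    return assignment, max(load for load, _ in drives)

_, makespan = lpt_schedule([8, 7, 6, 5, 4], m_drives=2)
print(makespan)  # 17.0, while the optimum is 15 ({8, 7} vs {6, 5, 4})
```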
20120131584 | Devices and Methods for Optimizing Data-Parallel Processing in Multi-Core Computing Systems - According to an embodiment of a method of the invention, at least a portion of data to be processed is loaded to a buffer memory of capacity (B). The buffer memory is accessible to N processing units of a computing system. The processing task is divided into processing threads. An optimal number (n) of processing threads is determined by an optimizing unit of the computing system. The n processing threads are allocated to the processing task and executed by at least one of the N processing units. After processing by at least one of the N processing units, the processed data is stored on a disk defined by disk sectors, each disk sector having storage capacity (S). The storage capacity (B) of the buffer memory is optimized to be a multiple X of the sector storage capacity (S). The optimal number (n) is determined based, at least in part, on N, B and S. The system and method are implementable in a multithreaded, multi-processor computing system. The stored encrypted data may be later recalled and decrypted using the same system and method. | 05-24-2012 |
20120131585 | Apparatuses And Methods For Processing Workitems In Taskflows - At least one example embodiment discloses a method of processing a workitem including a plurality of tasks. The method includes transmitting requests for completion to the plurality of tasks, respectively, receiving processed data from a first task of the plurality of tasks in response to the request, the processed data being marked as intended for a second task of the plurality of tasks, changing a counter value associated with the second task, each of the plurality of tasks associated with a counter value, transmitting the processed data to the second task, and determining a state of the workitem based on the counter values. | 05-24-2012 |
20120131586 | APPARATUS AND METHOD FOR CONTROLLING RESPONSE TIME OF APPLICATION PROGRAM - The present invention relates to a multi-control method for management of the response time of a data-centric real-time application program. The present invention integrally models a first system for controlling the response time of CPU operations and a second system for controlling the response time of accessing a storage medium using a MIMO structure and simultaneously controls the response time of the CPU operation and the response time of accessing the storage medium through the configuration by the integrated modeling. According to exemplary embodiments of the present invention, it is possible to more efficiently control the response time than an existing feedback control method. | 05-24-2012 |
20120131587 | HARDWARE DEVICE FOR PROCESSING THE TASKS OF AN ALGORITHM IN PARALLEL - A hardware device for concurrently processing a fixed set of predetermined tasks associated with an algorithm which includes a number of processes, some of the processes being dependent on binary decisions, includes a plurality of task units for processing data, making decisions and/or processing data and making decisions, including source task units and destination task units. A task interconnection logic means interconnects the task units for communicating actions from a source task unit to a destination task unit. Each of the task units includes a processor for executing only a particular single task of the fixed set of predetermined tasks associated with the algorithm in response to a received request action, and a status manager for handling the actions from the source task units and building the actions to be sent to the destination task units. | 05-24-2012 |
20120137298 | Managing Groups of Computing Entities - Managing groups of entities is described. In an embodiment an administrator manages operations on a plurality of entities by constructing a management scenario which defines tasks to be applied on a group of entities. In an example the management scenario includes information on dependencies between entities and information on entity attributes, for example operating system version or CPU usage. In an embodiment an entity management engine converts the tasks and dependencies in the scenario to a management plan. In an example the management plan is a list of operations and conditions to be respected in applying an operation to an entity. In an embodiment the plan can be validated to ensure there are no conflicts. In an embodiment the entity management engine also comprises a scheduler which runs tasks contained in the plan and monitors their outcome. | 05-31-2012 |
20120137299 | MECHANISM FOR YIELDING INPUT/OUTPUT SCHEDULER TO INCREASE OVERALL SYSTEM THROUGHPUT - A mechanism for yielding input/output scheduler to increase overall system throughput is described. A method of embodiments of the invention includes initiating a first process issuing a first input/output (I/O) operation. The first process is initiated by a first I/O scheduling entity running on a computer system. The method further includes yielding, in response to a yield call made by the first I/O scheduling entity, an I/O scheduler to a second I/O scheduling entity to initiate a second process issuing a second I/O operation to complete a transaction including the first and second processes, and committing the transaction to a storage device coupled to the computer system. | 05-31-2012 |
20120137300 | Information Processor and Information Processing Method - According to one embodiment, an information processor includes a plurality of execution units, a storage, a generator, and a controller. The storage stores a plurality of basic modules executable asynchronously with another module and a parallel execution control description that defines an execution rule for the basic modules. The generator generates a task graph in which nodes indicating a plurality of tasks relating to the execution of the basic modules are connected by an edge according to the execution order of the tasks, and the nodes and a node of another module in a data dependency relationship are connected by the edge. The controller controls the assignment of the basic modules to the execution units based on the execution rule. The execution units each function as the generator for a basic module to be processed according to the assignment and executes the basic module according to the task graph. | 05-31-2012 |
20120144393 | MULTI-ISSUE UNIFIED INTEGER SCHEDULER - A method and apparatus for scheduling execution of instructions in a multi-issue processor. The apparatus includes post wake logic circuitry configured to track a plurality of entries corresponding to a plurality of instructions to be scheduled. Each instruction has at least one associated source address and a destination address. The post wake logic circuitry is configured to drive a ready input indicating an entry that is ready for execution based on a current match input. A picker circuitry is configured to pick an instruction for execution based on the ready input. A compare circuit is configured to determine the destination address for the picked instruction, compare the destination address to the source addresses for all entries, and drive the current match input. | 06-07-2012 |
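The destination-to-source comparison that drives the ready input can be mimicked in a few lines; the register naming and set representation are assumptions:

```python
def wake_dependents(picked, entries):
    """After an instruction is picked, compare its destination register
    against every queued entry's outstanding source registers and mark
    entries whose sources are all resolved as ready."""
    dest = picked["dest"]
    for e in entries:
        e["srcs"].discard(dest)        # this source is now produced
        e["ready"] = not e["srcs"]     # ready once no sources remain

queue = [{"srcs": {"r1"}, "ready": False},
         {"srcs": {"r1", "r2"}, "ready": False}]
wake_dependents({"dest": "r1"}, queue)
print([e["ready"] for e in queue])     # [True, False]
```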
20120144394 | Energy And Performance Optimizing Job Scheduling - Energy and performance optimizing job scheduling that includes: queuing jobs; characterizing jobs as hot or cold; specifying a hot and a cold job sub-queue; and, iteratively for a number of schedules, until the estimated performance and power characteristics of executing jobs in accordance with a schedule meet predefined selection criteria: determining a schedule in dependence upon a user-provided parameter, the characterization of each job as hot or cold, and an energy and performance optimizing heuristic; estimating the performance and power characteristics of executing the jobs in accordance with the schedule; and determining whether the estimated performance and power characteristics meet the predefined selection criteria. If they do not, the user-provided parameter is adjusted for the next iteration; once they do, the plurality of jobs is executed in accordance with the determined schedule. | 06-07-2012 |
20120151489 | ARCHITECTURE FOR PROVIDING ON-DEMAND AND BACKGROUND PROCESSING - Embodiments are directed to providing schedule-based processing using web service on-demand message handling threads and to managing processing threads based on estimated future workload. In an embodiment, a web service platform receives a message from a client that is specified for schedule-based, background handling. The web service platform includes an on-demand message handling service with processing threads that are configured to perform on-demand message processing. The web service platform loads the on-demand message handling service including the on-demand message handling threads. The web service platform implements the on-demand message handling service's threads to perform background processing on the received client message. The client messages specified for background handling are thus handled as service-initiated on-demand tasks. | 06-14-2012 |
20120151490 | SYSTEM POSITIONING SERVICES IN DATA CENTERS - A system and method are disclosed for managing a data center in terms of power and performance. The system includes at least one system positioning application for managing power costs and performance costs at a data center. The at least one system positioning application may determine a status of a data center in terms of power costs and performance costs or generate configurations to automatically implement a desired target state at the data center. A system configuration compiler is configured to receive a request from the system positioning application associated with a data center management task, convert the request into a set of subtasks, and schedule execution of the subtasks to implement the data center management task. | 06-14-2012 |
20120159493 | ADVANCED SEQUENCING GAP MANAGEMENT - Systems and methods to provide advanced sequencing gap management. In example embodiments, a need to generate a proxy gap order for a sequence is detected. Using one or more processors, the proxy gap order is generated based on the detected need. The generated proxy gap order is then inserted into a particular location of the sequence based on the detected need. | 06-21-2012 |
20120159494 | WORKFLOWS AND PRESETS FOR WORKFLOWS - A system generates a workflow identifier, creates a workflow that includes a first work unit, assigns the workflow identifier to the workflow, updates the workflow by adding a second work unit to the workflow, receives a work order to process the workflow, decomposes the workflow into constituent work units in response to the work order, instantiates tasks that correspond to the constituent work units, and executes a work unit process for each of the tasks. | 06-21-2012 |
20120159495 | NON-BLOCKING WAIT-FREE DATA-PARALLEL SCHEDULER - Methods, systems, and mediums are described for scheduling data parallel tasks onto multiple thread execution units of a processing system. Embodiments of a lock-free queue structure and methods of operation are described to implement a method for scheduling fine-grained data-parallel tasks for execution in a computing system. The work of one of a plurality of worker threads is wait-free with respect to the other worker threads. Each node of the queue holds a reference to a task that may be concurrently performed by multiple thread execution units, but each on a different subset of data. Various embodiments relate to software-based scheduling of data-parallel tasks on a multi-threaded computing platform that does not perform such scheduling in hardware. Other embodiments are also described and claimed. | 06-21-2012 |
20120159496 | Performing Variation-Aware Profiling And Dynamic Core Allocation For A Many-Core Processor - In one embodiment, the present invention includes a processor with multiple cores each having a self-test circuit to determine a frequency profile and a leakage power profile of the corresponding core. In turn, a scheduler is coupled to receive the frequency profiles and the leakage power profiles and to schedule an application on at least some of the cores based on the frequency profiles and the leakage power profiles. Other embodiments are described and claimed. | 06-21-2012 |
20120159497 | ADAPTIVE PROCESS SCHEDULING METHOD FOR EMBEDDED LINUX - Provided is an adaptive process scheduling method for embedded Linux. The adaptive process scheduling method includes calculating a central processing unit (CPU) occupancy time of each of one or more processes, determining whether or not it is necessary to perform adaptive process scheduling, calculating a predetermined weight to be applied to the CPU occupancy time of each process when it is determined that it is necessary to perform adaptive process scheduling, and applying the predetermined weight and updating the CPU occupancy time of each process when it is determined that it is necessary to perform adaptive process scheduling. Accordingly, the adaptive process scheduling method can improve the performance by omitting an unnecessary context exchange compared to the related art and can dynamically cope with an abrupt increase in the number of processes. | 06-21-2012 |
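The abstract does not say how the weight is applied; the sketch below assumes a simple linear scaling of the measured occupancy, with all names invented:

```python
def update_occupancy(processes, weight, needs_adaptation):
    """Recompute each process's CPU occupancy time and, when adaptive
    scheduling is judged necessary, scale it by the predetermined
    weight before storing the updated value."""
    for proc in processes:
        occupancy = proc["run_ticks"]       # measured CPU occupancy time
        if needs_adaptation:
            occupancy *= weight             # apply the predetermined weight
        proc["occupancy"] = occupancy       # update the stored value

procs = [{"run_ticks": 120}, {"run_ticks": 30}]
update_occupancy(procs, weight=0.5, needs_adaptation=True)
print([p["occupancy"] for p in procs])      # [60.0, 15.0]
```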
20120167100 | MANUAL SUSPEND AND RESUME FOR NON-VOLATILE MEMORY - An external controller has greater control over control circuitry on a memory die in a non-volatile storage system. The external controller can issue a manual suspend command on a communication path which is constantly monitored by the control circuitry. In response, the control circuitry suspends a task immediately, with essentially no delay, or at a next acceptable point in the task. The external controller similarly has the ability to issue a manual resume command, which can be provided on the communication path when that path has a ready status. The control circuitry can also automatically suspend and resume a task. The external controller can cause a task to be suspended by issuing an illegal read command. The external controller can cause a suspended program task to be aborted by issuing a new program command. | 06-28-2012 |
20120167101 | SYSTEM AND METHOD FOR PROACTIVE TASK SCHEDULING - The described implementations relate to distributed computing. One implementation provides a system that can include an outlier detection component that is configured to identify an outlier task from a plurality of tasks based on runtimes of the plurality of tasks. The system can also include a cause evaluation component that is configured to evaluate a cause of the outlier task. For example, the cause of the outlier task can be an amount of data processed by the outlier task, contention for resources used to execute the outlier task, or a communication link with congested bandwidth that is used by the outlier task to input or output data. The system can also include one or more processing devices configured to execute one or more of the components. | 06-28-2012 |
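One plausible reading of identifying outliers "based on runtimes" is a robust statistical cutoff. The median + k * MAD rule below is an assumption standing in for whatever statistic the described system uses:

```python
import statistics

def find_outliers(runtimes, k=3.0):
    """Flag tasks whose runtime exceeds median + k * MAD (median
    absolute deviation), a robust threshold for straggler detection."""
    med = statistics.median(runtimes)
    mad = statistics.median([abs(r - med) for r in runtimes]) or 1e-9
    return [i for i, r in enumerate(runtimes) if r > med + k * mad]

print(find_outliers([10, 11, 9, 12, 10, 48]))  # -> [5], the 48-unit straggler
```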
20120167102 | TAG-BASED DATA PROCESSING APPARATUS AND DATA PROCESSING METHOD THEREOF - A data processing apparatus and a data processing method thereof are provided. The data processing apparatus comprises buffers, a scheduler and process nodes. The buffers store the processed data and unprocessed data of the process nodes. The scheduler uses a tag to indicate which process and location the data belongs to, and puts the data into the process. Each process node actively retrieves the data from the buffer according to the tag, then processes and stores the data in the buffer. By assigning the tag of the data, the data process flow can be established to form a data process pipeline. | 06-28-2012 |
20120167103 | APPARATUS FOR PARALLEL PROCESSING CONTINUOUS PROCESSING TASK IN DISTRIBUTED DATA STREAM PROCESSING SYSTEM AND METHOD THEREOF - Disclosed are an apparatus and a method for parallel processing continuous processing tasks in a distributed data stream processing system. A system for processing a distributed data stream according to an exemplary embodiment of the present invention includes a control node configured to determine whether a parallel processing of continuous processing tasks for an input data stream is required and if the parallel processing is required, instruct to divide the data stream and allocate the continuous processing tasks for processing the data streams to a plurality of distributed processing nodes, and a plurality of distributed processing nodes configured to divide the input data streams, allocate the divided data stream and the continuous processing tasks for processing the divided data streams, respectively, and combine the processing results, according to the instruction of the control node. | 06-28-2012 |
20120167104 | SYSTEM AND METHOD FOR EXTENDING LEGACY APPLICATIONS WITH UNDO/REDO FUNCTIONALITY - In a system and method for recalling a state in an application, a processor may store in a memory data representing a first set of previously executed commands, the first set representing a current application state, and, for recalling a previously extant application state different than the current application state, the processor may modify the data to represent a second set of commands and may execute in sequence the second set of commands. | 06-28-2012 |
20120167105 | DETERMINING THE PROCESSING ORDER OF A PLURALITY OF EVENTS - A method for operating a multi-threading computational system includes: identifying related events; allocating the related events to a first thread; allocating unrelated events to one or more second threads; wherein the events allocated to the first thread are executed in sequence and the events allocated to the one or more second threads are executed in parallel to execution of the first thread. | 06-28-2012 |
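The allocation rule, related events serialized on one thread and unrelated events fanned out, can be emulated directly with thread pools; the convention that a None key marks an event as unrelated is assumed:

```python
from concurrent.futures import ThreadPoolExecutor

def dispatch(events, related_key):
    """Route events that share a relation key to a single worker
    (executed in sequence, preserving order) and submit unrelated
    events to a pool that runs them in parallel."""
    with ThreadPoolExecutor(max_workers=1) as first_thread, \
         ThreadPoolExecutor(max_workers=4) as parallel_pool:
        for ev in events:
            pool = first_thread if related_key(ev) is not None else parallel_pool
            pool.submit(ev["handler"], ev)

# Events carrying an account id are related and must run in sequence.
events = [{"handler": print, "account": 7},
          {"handler": print, "account": None}]
dispatch(events, related_key=lambda e: e["account"])
```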
20120167106 | THREAD SYNCHRONIZATION METHODS AND APPARATUS FOR MANAGED RUN-TIME ENVIRONMENTS - An example method disclosed herein comprises initiating a first optimistically balanced synchronization to acquire a lock of an object, the first optimistically balanced synchronization comprising a first optimistically balanced acquisition and a first optimistically balanced release to be performed on the lock by a same thread and at a same nesting level; releasing the lock after execution of program code covered by the lock if a stored state of the first optimistically balanced release indicates that the release is still valid, the stored state being initialized prior to execution of the program code to indicate that the release is valid; and throwing an exception after execution of the program code covered by the lock if the stored state indicates that the release is no longer valid. | 06-28-2012 |
20120167107 | Power Managed Lock Optimization - In an embodiment, a timer unit may be provided that may be programmed to a selected time interval, or wakeup interval. A processor may execute a wait for event instruction, and enter a low power state for the thread that includes the instruction. The timer unit may signal a timer event at the expiration of the wakeup interval, and the processor may exit the low power state in response to the timer event. The thread may continue executing with the instruction following the wait for event instruction. In an embodiment, the processor/timer unit may be used to implement a power-managed lock acquisition mechanism, in which the processor is awakened a number of times to check the lock and execute the wait for event instruction if the lock is not free, after which the thread may block until the lock is free. | 06-28-2012 |
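The acquisition pattern, a bounded number of timer-driven wakeups followed by a blocking wait, can be emulated in user space; the timed Event below merely stands in for the hardware timer unit and wait-for-event instruction:

```python
import threading

def power_managed_acquire(lock, wakeups=5, interval=0.001):
    """Poll the lock a fixed number of times, idling in a timed
    low-power wait between attempts; if the lock is still contended
    after the last wakeup, fall back to blocking until it is free."""
    timer = threading.Event()              # stands in for the timer unit
    for _ in range(wakeups):
        if lock.acquire(blocking=False):   # check the lock on each wakeup
            return
        timer.wait(interval)               # 'wait for event' until the timer fires
    lock.acquire()                         # give up polling; block until free

lock = threading.Lock()
power_managed_acquire(lock)
print(lock.locked())  # True: acquired on the first wakeup here
```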
20120174110 | AMORTIZING COSTS OF SHARED SCANS - Techniques for scheduling a plurality of jobs sharing input are provided. The techniques include partitioning one or more input datasets into multiple subcomponents, analyzing a plurality of jobs to determine which of the plurality of jobs require scanning of one or more common subcomponents of the one or more input datasets, and scheduling a plurality of jobs that require scanning of one or more common subcomponents of the one or more input datasets, facilitating a single scanning of the one or more common subcomponents to be used as input by each of the plurality of jobs. | 07-05-2012 |
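The core bookkeeping is an inversion of the job-to-subcomponent map so that each common subcomponent is scanned once and fed to every job needing it; the names below are assumed:

```python
from collections import defaultdict

def plan_shared_scans(jobs):
    """Invert job -> needed subcomponents into subcomponent -> consuming
    jobs, so one scan of each subcomponent serves all of its consumers."""
    scans = defaultdict(list)
    for job, needed in jobs.items():
        for part in needed:
            scans[part].append(job)
    return dict(scans)

jobs = {"jobA": ["part1", "part2"], "jobB": ["part2"], "jobC": ["part2", "part3"]}
print(plan_shared_scans(jobs)["part2"])  # ['jobA', 'jobB', 'jobC'] share one scan
```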
20120174111 | METHOD TO DETERMINE DRIVER WORKLOAD FUNCTION AND USAGE OF DRIVER WORKLOAD FUNCTION FOR HUMAN-MACHINE INTERFACE PERFORMANCE ASSESSMENT - A method of objectively measuring a driver's ability to operate a motor vehicle user interface. The method includes objectively measuring the driver's ability to perform each one of a plurality of calibration tasks of various degrees of difficulty including an easy task, a medium task, and a difficult task; generating a scale with which to evaluate the driver's ability to operate the user interface, the scale customized for the driver based on the objective measurements of the driver's ability to perform each calibration task; objectively measuring the driver's ability to operate a function of the motor vehicle user interface; and objectively evaluating the driver's ability to operate the function of the motor vehicle user interface using the scale to determine if the user interface is appropriate for the driver. | 07-05-2012 |
20120180054 | METHODS AND SYSTEMS FOR DELEGATING WORK OBJECTS ACROSS A MIXED COMPUTER ENVIRONMENT - A method of delegating work of a computer program across a mixed computing environment is provided. The method includes: performing on one or more processors: allocating a container structure on a first context; delegating a new operation to a second context based on the container; receiving the results of the new operation; and storing the results in the container. | 07-12-2012 |
20120180055 | OPTIMIZING ENERGY USE IN A DATA CENTER BY WORKLOAD SCHEDULING AND MANAGEMENT - Techniques are described for scheduling received tasks in a data center in a manner that accounts for operating costs of the data center. Embodiments of the invention generally include comparing cost-saving methods of scheduling a task to the operating parameters of completing a task—e.g., a maximum amount of time allotted to complete a task. If the task can be scheduled to reduce operating costs (e.g., rescheduled to a time when power is cheaper) and still be performed within the operating parameters, then that cost-saving method is used to create a workload plan to implement the task. In another embodiment, several cost-saving methods are compared to determine the most profitable. | 07-12-2012 |
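The cost-saving comparison amounts to searching the feasible start times for the cheapest one; the hourly price vector below is invented for illustration:

```python
def cheapest_feasible_start(duration, deadline, price):
    """Among start slots that still let the task finish by its deadline,
    return the one with the lowest total power cost; if several tie,
    the earliest wins."""
    def cost(start):
        return sum(price[start:start + duration])
    return min(range(deadline - duration + 1), key=cost)

price = [30, 28, 12, 11, 25, 40]   # $/hour; hours 2-3 are off-peak
print(cheapest_feasible_start(duration=2, deadline=5, price=price))  # -> 2
```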
20120180056 | Heterogeneous Enqueuing and Dequeuing Mechanism for Task Scheduling - Methods, systems and computer-readable mediums for task scheduling on an accelerated processing device (APD) are provided. In an embodiment, a method comprises: enqueuing one or more tasks in a memory storage module based on the APD using a software-based enqueuing module; and dequeuing the one or more tasks from the memory storage module using a hardware-based command processor, wherein the command processor forwards the one or more tasks to the shader core. | 07-12-2012 |
20120180057 | Activity Recording System for a Concurrent Software Environment - An activity recording system for a concurrent software environment executing software threads in a computer system, the activity recording system comprising: a thread state indicator for recording an indication of a synchronization state of a software thread, the indication being associated with an identification of the software thread; a time profiler for polling values of a program counter for a processor of the computer system at regular intervals, the time profiler being adapted to identify and record one or more synchronization states of the software thread based on the polled program counter value and the recorded indication of state. | 07-12-2012 |
20120180058 | Configuring An Application For Execution On A Parallel Computer - Methods, systems, and products are disclosed for configuring an application for execution on a parallel computer that include: booting up a first subset of a plurality of nodes in a serial processing mode; booting up a second subset of the plurality of nodes in a parallel processing mode; profiling, prior to application deployment on the parallel computer, the application to identify the serial segments and the parallel segments of the application; and deploying the application for execution on the parallel computer in dependence upon the profile of the application and proximity within the data communications network of the nodes in the first subset relative to the nodes in the second subset. | 07-12-2012 |
20120185860 | Component Lock Tracing - Methods for lock tracing at a component level. The method includes associating one or more locks with a component of the operating system; initiating lock tracing for the component; and instrumenting the component-associated locks with lock tracing program instructions in response to initiating lock tracing. The locks are selected from a group of locks configured for use by an operating system and individually comprise locking code. The component lock tracing may be static or dynamic. | 07-19-2012 |
20120185861 | MEDIA FOUNDATION MEDIA PROCESSOR - A system and method for a media processor separates the functions of topology creation and maintenance from the functions of processing data through a topology. The system includes a control layer including a topology generating element to generate a topology describing a set of input multimedia streams, one or more sources for the input multimedia streams, a sequence of operations to perform on the multimedia data, and a set of output multimedia streams, and a media processor to govern the passing of the multimedia data as described in the topology and govern the performance of the sequence of multimedia operations on the multimedia data to create the set of output multimedia streams. The core layer includes the input media streams, the sources for the input multimedia streams, one or more transforms to operate on the multimedia data, stream sinks, and media sinks to provide the set of output multimedia streams. | 07-19-2012 |
20120192189 | DYNAMIC TRANSFER OF SELECTED BUSINESS PROCESS INSTANCE STATE - Business processes that may be affected by events, conditions or circumstances that were unforeseen or undefined at modeling time (referred to as unforeseen events) are modeled and/or executed. Responsive to an indication of such an event during process execution, a transfer is performed from the process, in which selected data is stored and the process is terminated. The selected data may then be used by a target process. The target process may be, for instance, a new version of the same process, the same process or a different process. The target process may or may not have existed at the time the process was deployed. | 07-26-2012 |
20120192190 | Host Ethernet Adapter for Handling Both Endpoint and Network Node Communications - A host Ethernet adapter (HEA) and method of managing network communications is provided. The HEA includes a host interface configured for communication with a multi-core processor over a processor bus. The host interface comprises a receive processing element including a receive processor, a receive buffer and a scheduler for dispatching packets from the receive buffer to the receive processor; a send processing element including a send processor and a send buffer; and a completion queue scheduler (CQS) for dispatching completion queue elements (CQE) from the head of the completion queue (CQ) to threads of the multi-core processor in a network node mode. The method comprises operatively coupling an Ethernet adapter to a multi-core processor system via a processor bus, selectively assigning a first plurality of packets to a first queue pair for servicing in an endpoint mode, running a device driver on the multi-core processing system, the device driver controlling the servicing of the first queue pair by dispatching the first plurality of packets to only one processor core of the multi-core processor system, selectively assigning a second plurality of packets to a second queue pair for servicing in a network node mode; and the Ethernet adapter controlling the servicing of the second queue pair by dispatching the second plurality of packets to multiple processor threads. | 07-26-2012 |
20120192191 | EXECUTION OF WORK UNITS IN A HETEROGENEOUS COMPUTING ENVIRONMENT - Work units are transparently offloaded from a main processor to offload processing systems for execution. For a particular work unit, a suitable offload processing system is selected to execute the work unit. This includes determining the requirements of the work unit, including, for instance, the hardware and software requirements; matching those requirements against a set of offload processing systems with an arbitrary set of available resources; and determining if a suitable offload processing system is available. If a suitable offload processing system is available, the work unit is scheduled to execute on that offload processing system with no changes to the work unit itself. Otherwise, the work unit may execute on the main processor or wait to be executed on an offload processing system. | 07-26-2012 |
20120192192 | EVENT PROCESSING - A method, a system and a computer program for parallel event processing in an event processing network (EPN) are disclosed. The EPN has at least one event processing agent (EPA). The method includes assigning an execution mode for the at least one EPA, the execution mode including a concurrent mode and a sequential mode. The execution mode for the at least one EPA is stored in the EPN metadata. The method also includes loading and initializing the EPN. The method further includes routing the event in the EPN and, when an EPA is encountered, depending on the execution mode of the encountered EPA, further processing of the event. Also disclosed are a system and a computer program for parallel event processing in an event processing network (EPN). | 07-26-2012 |
20120192193 | Executing An Application On A Parallel Computer - Methods, systems, and products are disclosed for executing an application on a parallel computer having a plurality of nodes. Executing an application on a parallel computer includes: booting up a first subset of a plurality of nodes in a serial processing mode; booting up a second subset of the plurality of nodes in a parallel processing mode; profiling, prior to application execution, an application to identify serial segments of the application, parallel segments of the application, and application data utilized by each of the serial segments and the parallel segments; and executing the application on the plurality of nodes, including migrating, in dependence upon the profile for the application upon encountering the parallel segments during execution, only specific portions of the application and the application data from the nodes booted up in the serial processing mode to the nodes booted up in the parallel processing mode. | 07-26-2012 |
20120198457 | Method and apparatus for triggering workflow deployment and/or execution - A system and method for triggering deployment of a workflow are provided. The method includes issuing, to a first device (e.g., a server) from application software executing on a second device (e.g., a client computer), an instruction to execute a workflow previously deployed at the first device. The workflow is formed as a function of information associated with a graphical representation of the workflow. The application software may be, for example, software for one or more word-processing, spreadsheet, database, email, instant messenger, presentation, browser, calendar, organizer, media, image-display applications; file management programs and/or operating system shells. Alternatively, the application software may be or include a module associated with such application software. This module may include or be formed as or from one or more plug-ins, add-ons, applets, shared libraries, and/or extensions. | 08-02-2012 |
20120198458 | Methods and Systems for Synchronous Operation of a Processing Device - Embodiments of the present invention provide a method of synchronous operation of a first processing device and a second processing device. The method includes executing a process on the first processing device, responsive to a determination that execution of the process on the first device has reached a serial-parallel boundary, passing an execution thread of the process from the first processing device to the second processing device, and executing the process on the second processing device. | 08-02-2012 |
20120198459 | ASSIST THREAD FOR INJECTING CACHE MEMORY IN A MICROPROCESSOR - A data processing system includes a microprocessor having access to multiple levels of cache memories. The microprocessor executes a main thread compiled from a source code object. The system includes a processor for executing an assist thread also derived from the source code object. The assist thread includes memory reference instructions of the main thread and only those arithmetic instructions required to resolve the memory reference instructions. A scheduler, configured to schedule the assist thread in conjunction with the corresponding execution thread, executes the assist thread ahead of the execution thread by a determinable threshold such as a number of main processor cycles or a number of code instructions. The assist thread may execute in the main processor or in a dedicated assist processor that makes direct memory accesses to one of the lower level cache memory elements. | 08-02-2012 |
20120198460 | Deadlock Detection Method and System for Parallel Programs - A deadlock detection method and computer system for parallel programs. A determination is made that a lock of the parallel programs is no longer used in a running procedure of the parallel programs. A node corresponding to the lock that is no longer used, and edges relating to the lock that is no longer used, are deleted from a lock graph corresponding to the running procedure of the parallel programs in order to acquire an updated lock graph. The lock graph is constructed according to a lock operation of the parallel programs. Deadlock detection is then performed on the updated lock graph. | 08-02-2012 |
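The update-then-detect cycle maps directly onto a directed-graph library. The sketch below uses networkx for the cycle search purely as a convenience; the patent describes its own lock-graph maintenance:

```python
import networkx as nx

def drop_unused_lock_and_check(lock_graph: nx.DiGraph, unused_lock):
    """Delete the node (and all incident edges) for a lock the parallel
    program no longer uses, then look for a cycle in the updated lock
    graph; a cycle signals a potential deadlock."""
    if unused_lock in lock_graph:
        lock_graph.remove_node(unused_lock)   # incident edges go with it
    try:
        return nx.find_cycle(lock_graph)
    except nx.NetworkXNoCycle:
        return None

g = nx.DiGraph([("L1", "L2"), ("L2", "L3"), ("L3", "L1")])
print(drop_unused_lock_and_check(g, "L3"))    # None: dropping L3 breaks the cycle
```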
20120204181 | RECONFIGURABLE DEVICE, PROCESSING ASSIGNMENT METHOD, PROCESSING ARRANGEMENT METHOD, INFORMATION PROCESSING APPARATUS, AND CONTROL METHOD THEREFOR - According to the present invention, when the circuit configuration of a reconfigurable device is changed, the circuit configuration change period is shortened without depending on the processing contents and without enlarging the circuit through the addition of extra mechanisms. Considering the execution order relation between a plurality of data flows, the number of setting changes needed to change the circuit configuration during changing processing is decreased within a constraint range, thereby shortening the circuit configuration change period. | 08-09-2012 |
20120204182 | PROGRAM GENERATING APPARATUS AND PROGRAM GENERATING METHOD - A program generating apparatus includes a second program generating unit to generate a second program including: a memory image that reproduces data used to execute a subsection by a first arithmetic unit; subsection information including initial value information at the start position of the subsection; a program controlling portion to store the memory image in a second storing unit used by a second arithmetic unit, to set the second arithmetic unit to the same state as the first arithmetic unit at the start position of the subsection, and to cause the second arithmetic unit to execute the subsection of a first program; a monitor program including a function needed to execute the first program; and a monitor program initializing portion to make settings for causing the monitor program to provide a service requested when the second arithmetic unit executes the first program. | 08-09-2012 |
20120204183 | ASSOCIATIVE DISTRIBUTION UNITS FOR A HIGH FLOWRATE SYNCHRONIZER/SCHEDULER - An apparatus ( | 08-09-2012 |
20120210323 | DATA PROCESSING CONTROL METHOD AND COMPUTER SYSTEM - A rerunning load is reduced to lower the risk of exceeding a specified termination time after a job net ends abnormally. Even if the data processed by jobs within a job net is replaced with split data of sub-jobs and some of the sub-jobs have ended abnormally, the job net is continued. For each split data, a state and/or an execution server ID of each job is stored, and the progress of the job net is managed. Only split data whose state is not "normal" is reprocessed by rerunning. Based on the states of the execution servers, on whether intermediate files transferred between jobs are shared among execution servers, and on whether an output file is deleted after the subsequent job ends, it is judged whether the intermediate files can be referred to and from which job the rerun is to be performed. | 08-16-2012 |
20120210324 | Extended Dynamic Optimization Of Connection Establishment And Message Progress Processing In A Multi-Fabric Message Passing Interface Implementation - In one embodiment, the present invention includes a system that can optimize message passing by, at least in part, automatically determining a minimum number of fabrics and virtual channels to be activated to handle pending connection requests and data transfer requests, and preventing processing of new connection requests and data transfer requests outside of a predetermined communication pattern. Other embodiments are described and claimed. | 08-16-2012 |
20120216202 | Restarting Data Processing Systems - Techniques are disclosed that include a computer-implemented method including transmitting a message, in response to a predetermined event, through a process stage including at least first and second processes being executed as one or more tasks, the message instructing abortion of the execution of the one or more tasks; and initiating abortion of execution of the one or more tasks by one or more of the processes on receiving the message. | 08-23-2012 |
20120216203 | HOLISTIC TASK SCHEDULING FOR DISTRIBUTED COMPUTING - Embodiments of the present invention provide a method, system and computer program product for holistic task scheduling in a distributed computing environment. In an embodiment of the invention, a method for holistic task scheduling in a distributed computing environment is provided. The method includes selecting a first task for a first job and a second task for a different, second job, both jobs being scheduled for processing within a node of a distributed computing environment by a task scheduler executing in memory by at least one processor of a computer. | 08-23-2012 |
20120216204 | CREATING A THREAD OF EXECUTION IN A COMPUTER PROCESSOR - Creating a thread of execution in a computer processor, including copying, by a hardware processor opcode called by a user-level process, with no operating system involvement, register contents from a parent hardware thread to a child hardware thread, the child hardware thread being in a wait state, and changing, by the hardware processor opcode, the child hardware thread from the wait state to an ephemeral run state. | 08-23-2012 |
20120216205 | ENERGY-AWARE JOB SCHEDULING FOR CLUSTER ENVIRONMENTS - A job scheduler can select a processor core operating frequency for a node in a cluster to perform a job based on energy usage and performance data. After a job request is received, an energy aware job scheduler accesses data that specifies energy usage and job performance metrics that correspond to the requested job and a plurality of processor core operating frequencies. A first of the plurality of processor core operating frequencies is selected that satisfies an energy usage criterion for performing the job based, at least in part, on the data that specifies energy usage and job performance metrics that correspond to the job. The job is assigned to be performed by a node in the cluster at the selected first of the plurality of processor core operating frequencies. | 08-23-2012 |
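Choosing a frequency that satisfies an energy usage criterion while respecting performance can be read as a constrained minimum over the profiled metrics; the table below is invented for illustration:

```python
def pick_frequency(profiles, max_runtime):
    """From per-frequency (runtime, energy) predictions for a job,
    return the lowest-energy core operating frequency whose runtime
    still meets the performance bound."""
    feasible = {f: m for f, m in profiles.items() if m["runtime"] <= max_runtime}
    return min(feasible, key=lambda f: feasible[f]["energy"])

profiles = {2.6: {"runtime": 100, "energy": 900},   # GHz -> predicted metrics
            2.0: {"runtime": 125, "energy": 700},
            1.2: {"runtime": 210, "energy": 650}}
print(pick_frequency(profiles, max_runtime=130))    # -> 2.0 (1.2 GHz is too slow)
```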
20120222033 | OFFLOADING WORK UNITS FROM ONE TYPE OF PROCESSOR TO ANOTHER TYPE OF PROCESSOR - A work unit (e.g., a load module) to be executed on one processor may be eligible to be offloaded and executed on another processor that is heterogeneous from the one processor. The other processor is heterogeneous in that it has a different computing architecture and/or a different instruction set from the one processor. A determination is made as to whether the work unit is eligible for offloading. The determination is based, for instance, on the particular type of instructions (e.g., particular type of service call and/or program call instructions) included in the work unit and whether those types of instructions are supported by the other processor. If the instructions of the work unit are supported by the other processor, then the work unit is eligible for offloading. | 08-30-2012 |
20120222034 | ASYNCHRONOUS CHECKPOINT ACQUISITION AND RECOVERY FROM THE CHECKPOINT IN PARALLEL COMPUTER CALCULATION IN ITERATION METHOD - A method and system to acquire checkpoints in making iteration-method computer calculations in parallel and to effectively utilize the acquired data for recovery. At the time of acquiring a checkpoint in parallel calculation that repeats an iteration method, each node independently acquires the checkpoint in parallel with the calculation without stopping the calculation. Thereby, it is possible to perform both the calculation and the checkpoint acquisition in parallel. In the case where the calculation does not impose an I/O bottleneck, checkpoint acquisition time is overlapped, and execution time is reduced. In this method, checkpoint data including values at different points of time during the acquisition process is acquired. By limiting the use purpose to iteration-method convergence calculations, the mixture of values from different points of time in the checkpoint data is acceptable, because the convergence destination does not depend on the initial value. | 08-30-2012 |
20120227047 | WORKFLOW VALIDATION AND EXECUTION - An apparatus, a computer program product and a computer-implemented method performed by a computerized device, comprising: receiving a description of a workflow, the workflow comprising a plurality of blocks, wherein each block comprises one or more instructions, the plurality of blocks comprising at least a first block and a second block, wherein the first block is adapted to output information, and the second block is adapted to receive the information wherein at least one of the plurality of blocks is associated with a ratio between a number of records input into the block and a number of records output by the block; and validating that the workflow can operate properly, using the ratio, wherein during execution, each of the first block and the second block can keep an internal state and request to receive again data previously received as input. | 09-06-2012 |
20120227048 | FRAMEWORK FOR SCHEDULING MULTICORE PROCESSORS - A method for a framework for scheduling tasks in a multi-core processor or multiprocessor system is provided in the illustrative embodiments. A thread is selected according to an order in a scheduling discipline, the thread being a thread of an application executing in the data processing system, the thread forming the leader thread in a bundle of threads. A value of a core attribute in a set of core attributes is determined according to a corresponding thread attribute in a set of thread attributes associated with the leader thread. A determination is made whether a second thread can be added to the bundle such that the bundle including the second thread will satisfy a policy. If the determining is affirmative, the second thread is added to the bundle. The bundle is scheduled for execution using a core of the multi-core processor. | 09-06-2012 |
20120227049 | JOB SCHEDULING WITH OPTIMIZATION OF POWER CONSUMPTION - A scheduler is provided, which takes into account the location of the data to be accessed by a set of jobs. Once all the dependencies and the scheduling constraints of the plan are respected, the scheduler optimizes the order of the remaining jobs to be run, also considering the location of the data to be accessed. Several jobs needing an access to a dataset on a specific disk may be grouped together so that the grouped jobs are executed in succession, e.g., to prevent activating and deactivating the storage device several times, thus improving the power consumption and also avoiding input output performances degradation. | 09-06-2012 |
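Once dependencies and constraints are respected, grouping the remaining runnable jobs by the device holding their dataset is a plain sort-and-group; the job records are assumed:

```python
from itertools import groupby

def group_by_disk(ready_jobs):
    """Order runnable jobs by the disk holding their dataset so jobs
    touching the same device run back to back, activating and
    deactivating each storage device only once per group."""
    ordered = sorted(ready_jobs, key=lambda j: j["disk"])
    return [(disk, [j["name"] for j in grp])
            for disk, grp in groupby(ordered, key=lambda j: j["disk"])]

jobs = [{"name": "j1", "disk": "d2"}, {"name": "j2", "disk": "d1"},
        {"name": "j3", "disk": "d2"}]
print(group_by_disk(jobs))  # [('d1', ['j2']), ('d2', ['j1', 'j3'])]
```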
20120227050 | CHANGING A SCHEDULER IN A VIRTUAL MACHINE MONITOR - Machine-readable media, methods, and apparatus are described to change a first scheduler in a virtual machine monitor. In some embodiments, a second scheduler is loaded in the virtual machine monitor while the virtual machine monitor is running, and is then activated to handle a scheduling request for a scheduling process in place of the first scheduler while the virtual machine monitor is running. | 09-06-2012 |
20120233618 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - There is provided an information processing device including a selection unit configured to, on the basis of first identification information included in a processing instruction and corresponding to a service, and first association information in which the first identification information is associated with second identification information for identifying an application, select an application to perform the service corresponding to the processing instruction, and an execution unit configured to cause the selected application to perform a process in accordance with the processing instruction. | 09-13-2012 |
20120233619 | USING GATHERED SYSTEM ACTIVITY STATISTICS TO DETERMINE WHEN TO SCHEDULE A PROCEDURE - Provided are a method, system, and computer program product for using gathered system activity statistics to determine when to schedule a procedure. Activity information is gathered in a computer system during time slots for recurring time periods. A high activity value is an activity amount of a slot having a maximum amount of activity and a low activity value is an activity amount of a slot having a minimum amount of activity. A threshold point is determined as a function of the high activity, the low activity, and a threshold percent comprising a percentage value. A selection is made of at least one lull window having a plurality of consecutive time slots each having an activity value lower than the threshold point and the procedure in the computer system is scheduled to be performed during the time slots in the lull window in a future time period. | 09-13-2012 |
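The lull-window selection is algorithmically crisp: derive a threshold point from the high and low activity values, then find runs of consecutive slots below it. The linear form low + pct * (high - low) is one assumed reading of "a function of" those three inputs:

```python
def lull_windows(activity, threshold_pct, min_len=2):
    """Return runs of at least min_len consecutive time slots whose
    activity falls below low + threshold_pct * (high - low)."""
    hi, lo = max(activity), min(activity)
    threshold = lo + threshold_pct * (hi - lo)
    windows, run = [], []
    for slot, value in enumerate(activity):
        if value < threshold:
            run.append(slot)
        else:
            if len(run) >= min_len:
                windows.append(run)
            run = []
    if len(run) >= min_len:
        windows.append(run)
    return windows

print(lull_windows([90, 20, 10, 15, 80, 70, 5, 8], 0.25))  # [[1, 2, 3], [6, 7]]
```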
20120233620 | SELECTIVE CONSTANT COMPLEXITY DISMISSAL IN TASK SCHEDULING - A strictly increasing function is implemented to generate a plurality of unique creation stamps, each increasing over time pursuant to the function. A new task to be placed with the plurality of tasks is labeled with a new unique creation stamp. One of a list of dismissal rules holds a minimal valid creation (MVC) stamp, which is updated when a dismissal action for that rule is executed. The dismissal action acts to dismiss a selection of tasks over time due to continuous dispatch. | 09-13-2012 |
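The constant-complexity trick is that a dismissal never scans queued tasks: it only advances the rule's MVC stamp, and each task's validity is checked lazily at dispatch. A sketch with assumed structures:

```python
import itertools

class DismissalRule:
    """Dismiss-by-watermark: any task stamped before the minimal valid
    creation (MVC) stamp is treated as dismissed when dispatched."""
    _clock = itertools.count(1)            # strictly increasing stamp source

    def __init__(self):
        self.mvc = 0

    def stamp_new_task(self, task):
        task["stamp"] = next(self._clock)  # label the task at creation

    def dismiss_all_pending(self):
        self.mvc = next(self._clock)       # O(1), independent of queue length

    def is_valid(self, task):
        return task["stamp"] >= self.mvc   # evaluated lazily at dispatch

rule, task = DismissalRule(), {}
rule.stamp_new_task(task)
rule.dismiss_all_pending()
print(rule.is_valid(task))  # False: the task predates the dismissal
```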
20120233621 | METHOD, PROGRAM, AND PARALLEL COMPUTER SYSTEM FOR SCHEDULING PLURALITY OF COMPUTATION PROCESSES INCLUDING ALL-TO-ALL COMMUNICATIONS (A2A) AMONG PLURALITY OF NODES (PROCESSORS) CONSTITUTING NETWORK - Optimally scheduling a plurality of computation processes including all-to-all communications (A2A) among a plurality of nodes (processors) constituting a network. | 09-13-2012 |
20120233622 | PORTABLE DEVICE AND TASK PROCESSING METHOD AND APPARATUS THEREFOR - A portable device and a task processing method and apparatus for the portable device are provided. The method comprises the steps of: obtaining task requirement information of a user; determining, from a first system and a second system, an execution system for responding to a system task corresponding to the task requirement information based on a predetermined policy; and transmitting the task requirement information to the execution system such that the execution system can execute the system task based on the task requirement information. With the present invention, it is possible to automatically determine, based on the task requirement information, an execution system for executing a system task corresponding to the task requirement information, such that the user operation can be facilitated. | 09-13-2012 |
20120240120 | INFORMATION PROCESSING APPARATUS, POWER CONTROL METHOD, AND COMPUTER PRODUCT - An information processing apparatus includes a first detector that detects a scheduled starting time of an event to be corrected and executed at the current time or thereafter; a second detector that detects in processing contents differing from that of the event detected by the first detector, a scheduled starting time of each event to be executed at the current time or thereafter; a calculator that calculates the difference between the scheduled starting time detected by the first detector and each scheduled starting time detected by the second detector; a determiner that determines a target event for the event to be corrected, based on the calculated differences; and a corrector that corrects the scheduled starting time of the event to be corrected such that an interval becomes short between the scheduled starting time of the event to be corrected and the scheduled starting time of the target event. | 09-20-2012 |
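The correction step in 20120240120 pulls an event's scheduled start toward that of a nearby target event so the two can be serviced together. A minimal sketch, assuming the target is simply the event with the smallest scheduled-start difference and that the correction is clamped (max_shift is an invented parameter):

```python
# Minimal sketch of the start-time correction, assuming the target event is
# the one whose scheduled start lies closest to the event being corrected
# and that the shift is clamped to an invented max_shift parameter.
def correct_start_time(event_start, other_starts, max_shift):
    """Move event_start toward the nearest other start so that both events
    can be serviced in a single wake-up of the apparatus."""
    target = min(other_starts, key=lambda t: abs(t - event_start))
    diff = target - event_start
    shift = max(-max_shift, min(max_shift, diff))  # clamp the correction
    return event_start + shift

# Example: an event at t=100 is pulled onto the nearby event at t=110.
print(correct_start_time(100, [110, 250, 400], max_shift=15))  # -> 110
```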
20120240121 | CROSS FUNCTIONAL AREA SERVICE IDENTIFICATION - A cross-functional area service identification method and system. The method includes reading, by a computing system, processes. The computing system processes process elements associated with the processes. The computing system identifies a first functional area associated with a first current process element of the process elements and a second functional area associated with a first parent process element of the first current process element. The computing system compares the first functional area to the second functional area and determines if the first functional area comprises a same functional area as the second functional area. The computing system generates and stores results indicating if the first functional area comprises a same functional area as the second functional area. | 09-20-2012 |
20120240122 | WEB-Based Task Management System and Method - A computer system configured to manage a task hierarchy has a task data store configured to store information about a plurality of tasks, the task information including a parent task and a task unique identifier, a task sheet data store configured to store information about a plurality of tasks, the task sheet information including a task sheet unique identifier, and a task to task sheet data store configured to store a plurality of relationships between tasks and task sheets, said relationships including a task unique identifier and a task sheet unique identifier. | 09-20-2012 |
20120240123 | Energy And Performance Optimizing Job Scheduling - Energy and performance optimizing job scheduling that includes queuing jobs; characterizing jobs as hot or cold, specifying a hot and a cold job sub-queue; iteratively for a number of schedules, until estimated performance and power characteristics of executing jobs in accordance with a schedule meets predefined selection criteria: determining a schedule in dependence upon a user-provided parameter, the characterization of each job as hot or cold, and an energy and performance optimizing heuristic; estimating performance and power characteristics of executing the jobs in accordance with the schedule; and determining whether the estimated performance and power characteristics meet the predefined selection criteria. If the estimated performance and power characteristics do not meet the predefined selection criteria, the user-provided parameter is adjusted for a next iteration; if they do meet the criteria, the plurality of jobs is executed in accordance with the determined schedule. | 09-20-2012 |
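The iterate-until-acceptable loop of 20120240123 can be sketched compactly. Everything below the loop structure is an assumption for illustration: the heuristic (space hot jobs apart with cold jobs, controlled by the user-provided parameter) and the toy power model are not the application's.

```python
# Minimal, self-contained sketch of the iterate-until-acceptable loop. The
# heuristic (space hot jobs apart with cold jobs, spacing controlled by the
# user-provided parameter) and the toy power model are assumptions.
def find_schedule(jobs, spacing, max_iters=20):
    hot = [j for j in jobs if j["hot"]]
    cold = [j for j in jobs if not j["hot"]]
    for _ in range(max_iters):
        schedule, rest = [], iter(cold)
        for h in hot:
            schedule.append(h)  # place `spacing` cold jobs after each hot job
            schedule += [c for _, c in zip(range(spacing), rest)]
        schedule += list(rest)
        # toy model: back-to-back hot jobs cost extra power
        power = sum(3 if a["hot"] and b["hot"] else 1
                    for a, b in zip(schedule, schedule[1:]))
        if power <= len(schedule):  # predefined selection criterion
            return schedule
        spacing += 1                # adjust the parameter, iterate again
    return schedule

jobs = [{"hot": True}] * 3 + [{"hot": False}] * 3
print([j["hot"] for j in find_schedule(jobs, spacing=0)])
```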
20120246652 | Processor Management Via Thread Status - Various systems, processes, and products may be used to manage a processor. In particular implementations, managing a processor may include the ability to determine whether a thread is pausing for a short period of time and place a wait event for the thread in a queue based on a short thread pause occurring. Managing a processor may also include the ability to activate a delay thread that determines whether a wait time associated with the pause has expired and remove the wait event from the queue based on the wait time having expired. | 09-27-2012 |
20120246653 | GENERIC COMMAND PARSER - A requesting processing unit that includes a generic-parser is described, which is adapted to operate together with one or more specifically configured command-files. A command-file includes one or more structured data elements descriptive of a command, which is available for execution by the processing unit. The data included in the command-file is registered in the computer memory associated with the processing unit. In general, the generic-parser is configured, in response to an issued command, to search in the computer memory for data comprised in the data-elements now registered in the computer memory, including information corresponding to the command, and to use this data in order to generate a request to perform the command. | 09-27-2012 |
20120246654 | Constant Time Worker Thread Allocation Via Configuration Caching - Mechanisms are provided for allocating threads for execution of a parallel region of code. A request for allocation of worker threads to execute the parallel region of code is received from a master thread. Cached thread allocation information identifying prior thread allocations that have been performed for the master thread are accessed. Worker threads are allocated to the master thread based on the cached thread allocation information. The parallel region of code is executed using the allocated worker threads. | 09-27-2012 |
20120246655 | AUTOMATED TIME TRACKING - In a method for automatically tracking time, a computer receives a user identification. The computer automatically starts a first task, based on the received user identification. The computer records a start time for the first task. The computer monitors a state of the first task. The computer automatically records an end time for the first task in response to determining that the state of the first task has changed. | 09-27-2012 |
20120246656 | SCHEDULING OF TASKS TO BE PERFORMED BY A NON-COHERENT DEVICE - A method for scheduling tasks to be processed by one of a plurality of non-coherent processing devices, at least two of the devices being heterogeneous devices and at least some of said tasks being targeted to a specific one of the processing devices. The devices process data that is stored in local storage and in a memory accessible by at least some of the devices. The method includes the steps of: for each of a plurality of non-dependent tasks to be processed by the device, determining consistency operations required to be performed prior to processing the non-dependent task; performing the consistency operations for one of the non-dependent tasks and on completion issuing the task to the device for processing; performing consistency operations for a further non-dependent task such that, on completion of the consistency operations, the device can process the further task. | 09-27-2012 |
20120246657 | EXECUTING INSTRUCTION SEQUENCE CODE BLOCKS BY USING VIRTUAL CORES INSTANTIATED BY PARTITIONABLE ENGINES - A method for executing instructions using a plurality of virtual cores for a processor. The method includes receiving an incoming instruction sequence using a global front end scheduler, and partitioning the incoming instruction sequence into a plurality of code blocks of instructions. The method further includes generating a plurality of inheritance vectors describing interdependencies between instructions of the code blocks, and allocating the code blocks to a plurality of virtual cores of the processor, wherein each virtual core comprises a respective subset of resources of a plurality of partitionable engines. The code blocks are executed by using the partitionable engines in accordance with a virtual core mode and in accordance with the respective inheritance vectors. | 09-27-2012 |
20120246658 | Transactional Memory Preemption Mechanism - Mechanisms for executing a transaction in the data processing system are provided. A transaction checkpoint data structure is generated in internal registers of a processor. The transaction checkpoint data structure stores transaction checkpoint data representing a state of program registers at a time prior to execution of a corresponding transaction. The transaction, which comprises a first portion of code that is to be executed by the processor, is executed. An interrupt of the transaction is received while executing the transaction and, as a result, the transaction checkpoint data is stored to a data structure in a memory of the data processing system. A second portion of code is then executed. A state of the program registers is restored using the data structure in the memory of the data processing system in response to an event occurring causing a switch of execution of the processor back to execution of the transaction. | 09-27-2012 |
20120254873 | COMMAND PATHS, APPARATUSES AND METHODS FOR PROVIDING A COMMAND TO A DATA BLOCK - Command paths, apparatuses, and methods for providing a command to a data block are described. In an example command path, a command receiver is configured to receive a command and a command buffer is coupled to the command receiver and configured to receive the command and provide a buffered command. A command block is coupled to the command buffer to receive the buffered command. The command block is configured to provide the buffered command responsive to a clock signal and is further configured to add a delay before providing the buffered command, the delay based at least in part on a shift count. A command tree is coupled to the command block to receive the buffered command and configured to distribute the buffered command to a data block. | 10-04-2012 |
20120254874 | System and Method for Job Management between Mobile Devices - A system for job management between mobile devices includes a processor configured to store in a storage messages and data pertaining to job assignments and reassignments to build a historical record concerning each job assignment and reassignment, to receive from a dispatch terminal requests containing messages and data pertaining to creating and allocating job assignments and reassignments at and between mobile devices, to communicate via a communication interface with the mobile devices, to execute the requests by creating and allocating the job assignments and reassignments at and between the mobile devices, and to output to a storage the messages and data pertaining to the allocation of job assignments and reassignments in order to add to and maintain the historical record in the storage concerning each job assignment and reassignment with respect to the mobile devices. | 10-04-2012 |
20120254875 | Method for Transforming a Multithreaded Program for General Execution - A technique is disclosed for executing a program designed for multi-threaded operation on a general purpose processor. Original source code for the program is transformed from a multi-threaded structure into a computationally equivalent single-threaded structure. A transform operation modifies the original source code to insert code constructs for serial thread execution. The transform operation also replaces synchronization barrier constructs in the original source code with synchronization barrier code that is configured to facilitate serialization. The transformed source code may then be conventionally compiled and advantageously executed on the general purpose processor. | 10-04-2012 |
20120254876 | SYSTEMS AND METHODS FOR COORDINATING COMPUTING FUNCTIONS TO ACCOMPLISH A TASK - Systems and Methods are provided for coordinating computing functions to accomplish a task. The system includes a plurality of standardized executable application modules (SEAMs), each of which is configured to execute on a processor to provide a unique function and to generate an event associated with its unique function. The system includes a configuration file that comprises a dynamic data store (DDS) and a static data store (SDS). The DDS includes an event queue and one or more response queues. The SDS includes a persistent software object that is configured to map a specific event from the event queue to a predefined response record and to indicate a response queue into which the predefined response record is to be placed. The system further includes a workflow service module, the work flow service module being configured to direct communication between the SDS, the DDS and each of the plurality of SEAMs. | 10-04-2012 |
20120254877 | TRANSFERRING ARCHITECTED STATE BETWEEN CORES - A method and apparatus for transferring architected state bypasses system memory by directly transmitting architected state between processor cores over a dedicated interconnect. The transfer may be performed by state transfer interface circuitry with or without software interaction. The architected state for a thread may be transferred from a first processing core to a second processing core when the state transfer interface circuitry detects an error that prevents proper execution of the thread corresponding to the architected state. A program instruction may be used to initiate the transfer of the architected state for the thread to one or more other threads in order to parallelize execution of the thread or perform load balancing between multiple processor cores by distributing processing of multiple threads. | 10-04-2012 |
20120254878 | MECHANISM FOR OUTSOURCING CONTEXT-AWARE APPLICATION-RELATED FUNCTIONALITIES TO A SENSOR HUB - A mechanism is described for outsourcing context-aware application-related activities to a sensor hub. A method of embodiments of the invention includes outsourcing a plurality of functionalities from an application processor to a sensor hub processor of a sensor hub by configuring the sensor hub processor, and performing one or more context-aware applications using one or more sensors coupled to the sensor hub processor. | 10-04-2012 |
20120254879 | HIERARCHICAL TASK MAPPING - Mapping tasks to physical processors in parallel computing system may include partitioning tasks in the parallel computing system into groups of tasks, the tasks being grouped according to their communication characteristics (e.g., pattern and frequency); mapping, by a processor, the groups of tasks to groups of physical processors, respectively; and fine tuning, by the processor, the mapping within each of the groups. | 10-04-2012 |
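The two-level scheme in 20120254879 — group tasks by communication characteristics, map groups to processor groups, then tune within each group — might look like the following sketch; the grouping key, the traffic-sorting "fine tuning", and all names are assumptions.

```python
from collections import defaultdict

# Minimal sketch of the two-level mapping: tasks are grouped by an assumed
# communication signature, groups map to processor groups, and placement is
# then tuned inside each group (here: heaviest traffic first).
def hierarchical_map(tasks, processor_groups):
    groups = defaultdict(list)
    for t in tasks:
        groups[t["pattern"]].append(t)           # group by comm pattern
    mapping = {}
    for group, procs in zip(groups.values(), processor_groups):
        group.sort(key=lambda t: -t["traffic"])  # fine-tune within group
        for task, proc in zip(group, procs):
            mapping[task["id"]] = proc
    return mapping

tasks = [{"id": 1, "pattern": "ring", "traffic": 5},
         {"id": 2, "pattern": "ring", "traffic": 9},
         {"id": 3, "pattern": "all-to-all", "traffic": 2}]
print(hierarchical_map(tasks, [["p0", "p1"], ["p2"]]))
```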
20120254880 | THREAD FOLDING TOOL - A computer-implemented method of performing runtime analysis on and control of a multithreaded computer program. One embodiment of the present invention can include identifying threads of a computer program to be analyzed. Under control of a supervisor thread, a plurality of the identified threads can be folded together to be executed as a folded thread. The execution of the folded thread can be monitored to determine a status of the identified threads. An indicator corresponding to the determined status of the identified threads can be presented in a user interface that is presented on a display. | 10-04-2012 |
20120254881 | PARALLEL COMPUTER SYSTEM AND PROGRAM - There is provided a parallel computer system for performing barrier synchronization using a master node and a plurality of worker nodes based on the time to allow for an adaptive setting of the synchronization time. When a task process in a certain worker node has not been completed by a worker determination time, the particular worker node performs a communication to indicate that the process has not been completed, to a master node. When the communication has been received by a master determination time, the master node performs a communication to indicate that the process time is extended by a correction process time, in order to adjust and extend the synchronization time. In this way, it is possible to reduce the synchronization overhead associated with the execution of an application with a relatively large variation in the process time from a synchronization point to the next synchronization point. | 10-04-2012 |
20120260252 | SCHEDULING SOFTWARE THREAD EXECUTION - A computer-implemented method, system, and/or computer program product schedules execution of software threads. A first software thread is executed together with a second software thread as a first software thread pair. A first content of at least one performance counter, resulting from executing the first software thread pair together, is stored. The first software thread is then executed with a third software thread as a second software thread pair, and the resulting second content of the performance counter(s) is stored. An identification is made of a most efficient software thread pair from the first and second software thread pairs. Upon receiving a request to re-execute the first software thread, the first software thread is selectively matched with either the second software thread or the third software thread for execution based on whether the first software thread pair or the second software thread pair has been identified as the most efficient software thread pair. | 10-11-2012 |
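The pair-selection bookkeeping in 20120260252 amounts to remembering, per thread, the partner that produced the best counter reading. A minimal sketch, assuming a lower counter value (e.g. cache misses) marks the more efficient pairing:

```python
# Minimal sketch of the pair bookkeeping, assuming a lower performance-counter
# reading (e.g. cache misses) marks the more efficient pairing.
class PairScheduler:
    def __init__(self):
        self.best = {}  # thread -> (partner, counter_value)

    def record(self, thread, partner, counter_value):
        cur = self.best.get(thread)
        if cur is None or counter_value < cur[1]:
            self.best[thread] = (partner, counter_value)

    def partner_for(self, thread):
        entry = self.best.get(thread)
        return entry[0] if entry else None

sched = PairScheduler()
sched.record("t1", "t2", counter_value=9000)  # t1 paired with t2
sched.record("t1", "t3", counter_value=4500)  # t1 paired with t3: fewer misses
print(sched.partner_for("t1"))  # -> t3, chosen on re-execution of t1
```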
20120260253 | MODELING AND CONSUMING BUSINESS POLICY RULES - Concepts and technologies are described herein for modeling and consuming business policy rules. A policy server executes a policy application for modeling and storing the business policy rules. The business policy rules are modeled and stored in a data storage device according to an extensible policy framework architecture that can be tailored by administrators or other entities to support business-specific needs and/or operations. The modeled business policy rules can be used to support enforcement of business policy rules against various business operations, as well as allowing histories and/or other audits of business policy rules to be completed based upon information stored as the business policy rules. | 10-11-2012 |
20120260254 | VISUAL SCRIPTING OF WEB SERVICES FOR TASK AUTOMATION - Tasks are automated using assemblies of services. An interface component allows a user to collect services and to place selected services corresponding to a task to be automated onto a workspace. An analysis component performs an analysis of available data with regard to the selected services provided on the workspace and a configuration component automatically configures inputs of the selected services based upon the analysis of available data without intervention of the user. A dialog component is also provided to allow the user to contribute information to configure one or more of the inputs of the selected services. When processing is complete, an output component outputs a script that is executable to implement the task to be automated. | 10-11-2012 |
20120260255 | Dynamic Test Scheduling - According to one embodiment of the present invention, a system dynamically schedules performance of tasks, and comprises a computer system including at least one processor. The system determines resources required or utilized by each task for performance of that task on a target system, and compares the determined resources of the tasks to identify tasks with similar resource requirements. The identified tasks with similar resource requirements are scheduled to be successively performed on the target system. Embodiments of the present invention further include a method and computer program product for dynamically scheduling performance of tasks in substantially the same manner described above. | 10-11-2012 |
20120266174 | METHODS AND APPARATUS FOR ACHIEVING THERMAL MANAGEMENT USING PROCESSING TASK SCHEDULING - The present invention provides apparatus and methods to perform thermal management in a computing environment. In one embodiment, thermal attributes are associated with operations and/or processing components, and the operations are scheduled for processing by the components so that a thermal threshold is not exceeded. In another embodiment, hot and cool queues are provided for selected operations, and the processing components can select operations from the appropriate queue so that the thermal threshold is not exceeded. | 10-18-2012 |
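The hot/cool queue policy of 20120266174 can be sketched in a few lines; the queue contents and the temperature threshold below are invented for illustration.

```python
from collections import deque

# Minimal sketch of the two-queue policy: a component picks from the hot or
# cool queue so its modeled temperature stays under a threshold. The queue
# contents and the 70-degree threshold are invented for illustration.
hot_q = deque(["fft", "matmul"])       # thermally expensive operations
cool_q = deque(["io_wait", "memcpy"])  # thermally cheap operations

def next_op(core_temp, threshold=70.0):
    if core_temp < threshold and hot_q:
        return hot_q.popleft()   # headroom available: take hot work
    if cool_q:
        return cool_q.popleft()  # near the limit: take cool work
    return hot_q.popleft() if hot_q else None

print(next_op(core_temp=55.0))  # -> fft
print(next_op(core_temp=72.0))  # -> io_wait
```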
20120278809 | LOCK BASED MOVING OF THREADS IN A SHARED PROCESSOR PARTITIONING ENVIRONMENT - The present invention provides a computer implemented method and apparatus to assign software threads to a common virtual processor of a data processing system having multiple virtual processors. A data processing system detects cooperation between a first thread and a second thread with respect to a lock associated with a resource of the data processing system. Responsive to detecting cooperation, the data processing system assigns the first thread to the common virtual processor. The data processing system moves the second thread to the common virtual processor, whereby a sleep time associated with the lock experienced by the first thread and the second thread is reduced below a sleep time experienced prior to the detecting cooperation step. | 11-01-2012 |
20120278810 | Scheduling Cool Air Jobs In A Data Center - Scheduling cool air jobs in a data center comprising computers whose operations produce heat and require cooling, cooling resources that provide cooling for the data center, a workload controller that schedules and allocates data processing jobs among the computers, a cooling controller that schedules and allocates cooling jobs among cooling resources, including assigning data processing jobs for execution by computers in the data center; providing, to the cooling controller, information describing data processing jobs scheduled for allocation among the computers in the data center; specifying, by the cooling controller in dependence upon the physical location of the computer to which each job is allocated and the quantity of data processing represented by each job, cooling jobs to be executed by cooling resources; and assigning, by the cooling controller in accordance with the workload allocation schedule to cooling resources in the data center, cooling jobs for execution. | 11-01-2012 |
20120284724 | SYNCHRONIZATION OF WORKFLOWS IN A VIDEO FILE WORKFLOW SYSTEM - A system and method for synchronization of workflows in a video file workflow system. A workflow is created that splits execution of the workflow tasks (in a single, video file workflow) across multiple Content Management Systems (CMSs). When a single workflow is split across two CMSs, which jointly perform the overall workflow, the two resulting workflows are created to essentially mirror each other so that each CMS can track the tasks being executed on the other CMS using synchronization messages. Hence, both CMSs have the same representation of the processing status of the video content at all times. This allows for dual tracking of the workflow process and for independent operations, at different CMSs, when the CMS systems require load balancing. The split-processing based synchronization can be implemented in the workflows themselves or with simple modifications to workflow templates, without requiring any modification of the software of the workflow systems. | 11-08-2012 |
20120284725 | Apparatus and Method for Processing Events in a Telecommunications Network - A processing platform, for example a Java Enterprise Edition (JEE) platform comprises a JEE cluster ( | 11-08-2012 |
20120284726 | PERFORMING PARALLEL PROCESSING OF DISTRIBUTED ARRAYS - One or more computer-readable media store executable instructions that, when executed by processing logic, perform parallel processing. The media store one or more instructions for initiating a single programming language, and identifying, via the single programming language, one or more data distribution schemes for executing a program. The media also store one or more instructions for transforming, via the single programming language, the program into a parallel program with an optimum data distribution scheme selected from the one or more identified data distribution schemes, and allocating the parallel program to two or more labs for parallel execution. The media further store one or more instructions for receiving one or more results associated with the parallel execution of the parallel program from the two or more labs, and providing the one or more results to the program. | 11-08-2012 |
20120291033 | THREAD-RELATED ACTIONS BASED ON HISTORICAL THREAD BEHAVIORS - Various embodiments provide techniques for managing threads based on a thread history. In at least some embodiments, a behavior associated with currently existing threads is observed and a thread-related action is performed. A result of the thread-related action with respect to the currently existing threads, resources associated with the currently existing threads (e.g., hardware and/or data resources), and/or other threads, is then observed. A thread history is recorded (e.g., as part of a thread history database) that includes the behavior associated with the currently existing threads, the thread related action that was performed, and the result of the thread-related action. The thread history can include information about multiple different thread behaviors and can be referenced to determine whether to perform thread-related actions in response to other observed thread behaviors. | 11-15-2012 |
20120291034 | TECHNIQUES FOR EXECUTING THREADS IN A COMPUTING ENVIRONMENT - A technique for executing normally interruptible threads of a process in a non-preemptive manner includes in response to a first entry associated with a first message for a first thread reaching a head of a run queue, receiving, by the first thread, a first wake-up signal. In response to receiving the wake-up signal, the first thread waits for a global lock. In response to the first thread receiving the global lock, the first thread retrieves the first message from an associated message queue and processes the retrieved first message. In response to completing the processing of the first message, the first thread transmits a second wake-up signal to a second thread whose associated entry is next in the run queue. Finally, following the transmitting of the second wake-up signal, the first thread releases the global lock. | 11-15-2012 |
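The hand-off protocol in 20120291034 (an entry reaches the head of the run queue, the thread wakes, takes the global lock, processes one message, wakes the next entry, releases the lock) maps naturally onto a condition variable. A minimal sketch using Python's threading module, not the claimed implementation:

```python
import threading
from collections import deque

# Minimal sketch of the hand-off: a run queue plus one condition variable
# standing in for the global lock and the wake-up signals.
run_queue = deque()
cond = threading.Condition()

def worker(name, messages):
    for msg in messages:
        with cond:
            run_queue.append(name)
            # wait until this thread's entry reaches the head of the run queue
            cond.wait_for(lambda: run_queue[0] == name)
            print(name, "processing", msg)  # holds the lock: no preemption
            run_queue.popleft()
            cond.notify_all()               # wake the next queued thread

threads = [threading.Thread(target=worker, args=(f"t{i}", [f"m{i}"]))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```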
20120291035 | PARALLELIZED PROGRAM CONTROL - A processor comprises a plurality of processing units operating in parallel. Each processing unit is associated with a time signal generator; upon its expiry, the corresponding processing unit can reset the expired time signal generator to a predefined duration of time. In case the end of the predefined duration of time deviates by less than a predefined amount from the scheduled expiry of a time signal generator assigned to a different processing unit, the predefined duration of time is modified. | 11-15-2012 |
20120291036 | SAFETY CONTROLLER AND SAFETY CONTROL METHOD - Upon occurrence of an abnormality, a safety control can be executed more rapidly. An OS partially includes a partition scheduler that selects and decides a time partition to be subsequently scheduled according to a scheduling pattern including TP | 11-15-2012 |
20120297389 | SYSTEMS AND METHODS ASSOCIATED WITH A PARALLEL SCRIPT EXECUTER - According to some embodiments, a script written in a scripting programming language may be received (e.g., by a script executer). It may be determined that a first line in the script comprises a first comment, and the first comment may be interpreted as an embedded parallel part control statement. Parallel execution of a portion of the script may then be automatically arranged in accordance with the parallel part control statement. | 11-22-2012 |
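In 20120297389 a comment doubles as a parallel-part control statement. The directive syntax below ("# parallel") is invented purely for illustration; the sketch only shows the mechanism of interpreting a comment and arranging parallel execution of the marked lines.

```python
import threading

# Minimal sketch: a comment of the (invented) form "# parallel" marks a
# statement to be dispatched to a worker thread; everything else runs
# sequentially, with a join barrier before the script returns.
def run_script(script, env):
    threads = []
    for line in script.splitlines():
        code, _, comment = line.partition("#")
        if not code.strip():
            continue
        if "parallel" in comment:
            t = threading.Thread(target=exec, args=(code, env))
            t.start()
            threads.append(t)  # arranged for parallel execution
        else:
            exec(code, env)    # ordinary sequential execution
    for t in threads:
        t.join()

env = {"results": []}
run_script("""
results.append('setup')
results.append('work-a')   # parallel
results.append('work-b')   # parallel
""", env)
print(sorted(env["results"]))  # -> ['setup', 'work-a', 'work-b']
```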
20120297390 | CREATION OF FLEXIBLE WORKFLOWS USING ARTIFACTS - Execution of flexible workflows using artifacts is described. A workflow execution engine is configured to instantiate a process execution (PE) artifact. The PE artifact includes one or more transitions. The workflow execution engine is further configured to execute the one or more transitions and determine if any of the one or more transitions are new or modified. The workflow execution engine is additionally configured to load and execute new or modified transitions, without reinstantiating the PE artifact, responsive to determining that at least one new or modified transition exists. | 11-22-2012 |
20120297391 | APPLICATION RESOURCE MODEL COMPOSITION FROM CONSTITUENT COMPONENTS - Techniques for composing an application resource model are disclosed. The techniques include obtaining operator-level metrics from an execution of a data stream processing application according to a first configuration, wherein the application is executed by nodes of the data stream processing system and the application includes processing elements comprised of multiple operators, wherein two or more of the operators are combined in a first combination to form a processing element according to the first configuration, generating operator-level resource functions from the first combination of operators based on the obtained operator-level metrics, and generating a processing element-level resource function using the generated operator-level resource functions to predict a model for the processing element formed by a second combination of operators, the processing element-level resource function representing an application resource model usable for predicting characteristics of the application executed according to a second configuration. | 11-22-2012 |
20120297392 | INFORMATION PROCESSING APPARATUS, COMMUNICATION METHOD, AND STORAGE MEDIUM - The invention relates to an information processing apparatus, which comprises a plurality of communication units connected to a bus in a ring shape. At least one of the plurality of communication units extends a transmission interval when it is determined that the processing unit that is to execute the next process for received data is the processing unit that executes its process after the processing unit corresponding to that communication unit, and when it is detected that the process for the received data is suspended. | 11-22-2012 |
20120297393 | Data Collecting Method, Data Collecting Apparatus and Network Management Device - The present invention provides a data collection method and apparatus and a network management device. The method includes: a network management device collecting data files to be processed reported by a network element device; assigning the data files to be processed as a plurality of tasks; adding the assigned tasks into a task queue and extracting tasks from the task queue one by one for processing. According to the present invention, the task work load can be automatically adjusted according to the computer configuration and parameter configuration, and the maximum efficiency of data processing can be achieved under different scenarios. | 11-22-2012 |
20120304178 | CONCURRENT REDUCTION OPTIMIZATIONS FOR THIEVING SCHEDULERS - Concurrent reduction optimizations for thieving schedulers may include a thieving worker thread operable to take a task from a first worker thread's task dequeue, the thieving worker thread and the first worker thread having the same synchronization point in a program at which the thieving worker thread and the first worker thread can resume their operations. The thieving worker thread may be further operable to create a local copy of memory locations associated with the task in local memory of the thieving worker thread, and store the result of the thieving worker thread executing the task in the local copy. The thieving worker thread may be further operable to atomically perform a reduction operation to a master location that both the thieving worker thread and the first worker thread can access, in response to the thieving worker thread completing the task. | 11-29-2012 |
20120304179 | WORKLOAD-TO-CLOUD MIGRATION ANALYSIS BASED ON CLOUD ASPECTS - Methods and systems for evaluating compatibility of a cloud of computers to perform one or more workload tasks. One or more computing solution aspects are determined that correspond to one or more sets of workload factors, where the workload factors characterize one or more workloads, to characterize one or more computing solutions. The workload factors are compared to the computing solution aspects in a rule-based system to exclude computing solutions that cannot satisfy the workload factors. A computing solution is selected that has aspects that accommodate all of the workload factors to find a solution that accommodates the one or more individual workloads. | 11-29-2012 |
20120304180 | PROCESS ALLOCATION APPARATUS AND PROCESS ALLOCATION METHOD - A process allocation apparatus includes an evaluation value calculating unit, an internode total communication traffic calculating unit, and a correction evaluation value calculating unit. The evaluation value calculating unit calculates an evaluation value of process allocation in accordance with a hop count and inter-process communication traffic from a communication source node to which a process used as a communication source is allocated to a communication destination node to which a process used as a communication destination is allocated. The internode total communication traffic calculating unit specifies a communication route from the communication source node to the communication destination node and calculates internode total communication traffic indicating the communication traffic between nodes on the specified communication route. The correction evaluation value calculating unit calculates a correction evaluation value used for the correction in accordance with the calculated evaluation value of the process allocation and the calculated internode total communication traffic. | 11-29-2012 |
20120304181 | SCHEDULING COMPUTER JOBS FOR EXECUTION - A method, system, and apparatus to divide a computing job into micro-jobs and allocate the execution of the micro-jobs to times when needed resources comply with one or more idleness criteria is provided. The micro-jobs are executed on an ongoing basis, but only when the resources needed by the micro-jobs are not needed by other jobs. A software program utilizing this methodology may be run at all times while the computer is powered up without impacting the performance of other software programs running on the same computer system. | 11-29-2012 |
20120304182 | CONTINUOUS OPTIMIZATION OF ARCHIVE MANAGEMENT SCHEDULING BY USE OF INTEGRATED CONTENT-RESOURCE ANALYTIC MODEL - A system and associated method for continuously optimizing data archive management scheduling. A job scheduler receives, from an archive management system, inputs of task information, replica placement data, infrastructure topology data, and resource performance data. The job scheduler models a flow network that represents data content, software programs, physical devices, and communication capacity of the archive management system in various levels of vertices according to the received inputs. An optimal path in the modeled flow network is computed as an initial schedule, and the archive management system performs tasks according to the initial schedule. The operations of scheduled tasks are monitored and the job scheduler produces a new schedule based on feedbacks of the monitored operations and predefined heuristics. | 11-29-2012 |
20120304183 | MULTI-CORE PROCESSOR SYSTEM, THREAD CONTROL METHOD, AND COMPUTER PRODUCT - A multi-core processor system includes multiple cores and memory accessible from the cores, where a given core is configured to detect among the cores, first cores having a highest execution priority level; identify among the detected first cores, a second core that caused access conflict of the memory; and control a third core that is among the cores, excluding the first cores and the identified second core, the third core being controlled to execute for a given interval during an interval when the access conflict occurs, a thread that does not access the memory. | 11-29-2012 |
20120304184 | MULTI-CORE PROCESSOR SYSTEM, COMPUTER PRODUCT, AND CONTROL METHOD - A multi-core processor system includes a multi-core processor and a storage apparatus storing for each application, a reliability level related to operation, where a given core accesses the storage apparatus and is configured to extract from the storage apparatus, the reliability level for a given application that invokes a given thread; judge based on the extracted reliability level and a specified threshold, whether the given application is an application of high reliability; identify, in the multi-core processor, a core that has not been allocated a thread of an application of low reliability, when judging that the given application is an application of high reliability, and identify in the multi-core processor, a core that has not been allocated a thread of an application of high reliability, when judging that the given application is an application of low reliability; and give to the identified core, an invocation instruction for the given thread. | 11-29-2012 |
20120304185 | INFORMATION PROCESSING SYSTEM, EXCLUSIVE CONTROL METHOD AND EXCLUSIVE CONTROL PROGRAM - Features of an information processing system include a stand-by thread count information updating means that updates stand-by thread count information showing a number of threads which wait for release of lock according to a spinlock method, according to state transition of a thread which requests acquisition of predetermined lock; and a stand-by method determining means that determines a stand-by method of a thread which requests the acquisition of the lock based on the stand-by thread count information updated by the stand-by thread count information updating means and an upper limit value of the number of threads which wait according to the predetermined spinlock method. | 11-29-2012 |
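The policy in 20120304185 chooses a waiter's stand-by method from the current spinlock-waiter count. A minimal sketch, assuming a fixed upper limit on spinners and ignoring fairness:

```python
import threading

# Minimal sketch of the policy: a waiter spins only while the number of
# spinning waiters is under the upper limit; otherwise it sleeps on the lock.
class AdaptiveLock:
    def __init__(self, max_spinners=2):
        self._lock = threading.Lock()
        self._meta = threading.Lock()   # protects the spinner count
        self._spinners = 0
        self._max = max_spinners

    def acquire(self):
        with self._meta:
            spin = self._spinners < self._max
            if spin:
                self._spinners += 1     # register as a spinlock waiter
        if spin:
            while not self._lock.acquire(blocking=False):
                pass                    # busy-wait: cheap for short holds
            with self._meta:
                self._spinners -= 1
        else:
            self._lock.acquire()        # too many spinners: block instead

    def release(self):
        self._lock.release()
```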
20120311589 | SYSTEMS AND METHODS FOR PROCESSING HIERARCHICAL DATA IN A MAP-REDUCE FRAMEWORK - Methods and arrangements for processing hierarchical data in a map-reduce framework. Hierarchical data is accepted, and a map-reduce job is performed on the hierarchical data. This performing of a map-reduce job includes determining a cost of partitioning the data, determining a cost of redefining the job and thereupon selectively performing at least one step taken from the group consisting of: partitioning the data and redefining the job. | 12-06-2012 |
20120311590 | RESCHEDULING ACTIVE DISPLAY TASKS TO MINIMIZE OVERLAPPING WITH ACTIVE PLATFORM TASKS - In general, in one aspect, a mobile device display includes panel electronics, a backlight driver and a rescheduler. The panel electronics is to generate images on an optical stack of the display based on input from a processing platform of the mobile device. The backlight driver is to control operation of a backlight used to illuminate the optical stack so that the user can see the images generated on the display. The rescheduler is to determine when a timing critical task of the processing platform overlaps with a non-timing critical task of the panel electronics or the backlight driver and reschedule the non-timing critical task until the timing critical task is inactive or a visual tolerance limit has been reached. The rescheduling minimizes overlap between the timing critical tasks and non-timing critical tasks and accordingly reduces power consumption without affecting performance or impacting a user's visual experience. | 12-06-2012 |
20120311591 | LICENSE MANAGEMENT IN A CLUSTER ENVIRONMENT - Embodiments are directed to managing and verifying licenses in a cluster computer system environment. In an embodiment, a license management application running on a computer system cluster manager receives a job that has multiple job tasks as well as portions of job information. The license management application determines from the job information how many licenses and computer nodes are to be assigned to the job. The license management application checks out the determined number of licenses from a license distributing application on behalf of the received job. The license management application indicates to a scheduler of the computer system cluster manager that one job task is to be run per checked out license. | 12-06-2012 |
20120311592 | MOBILE TERMINAL AND CONTROLLING METHOD THEREOF - A mobile terminal and controlling method thereof are disclosed, by which a scheduling function of giving a processing order to each of a plurality of tasks is supported. The present invention includes a memory including an operating system having a scheduler configured to perform a second scheduling function on a plurality of tasks, each having a processing order first-scheduled in accordance with a first reference, and a processor performing an operation related to the operating system, the processor processing a plurality of the tasks. Moreover, if a first task among a plurality of the first-scheduled tasks meets a second reference, the scheduler performs the second scheduling function by changing the processing orders to enable the first task to be preferentially processed. | 12-06-2012 |
20120311593 | ASYNCHRONOUS CHECKPOINT ACQUISITION AND RECOVERY FROM THE CHECKPOINT IN PARALLEL COMPUTER CALCULATION IN ITERATION METHOD - A method and system to acquire checkpoints in making iteration-method computer calculations in parallel and to effectively utilize the acquired data for recovery. At the time of acquiring a checkpoint in parallel calculation that repeats an iteration method, each node independently acquires the checkpoint in parallel with the calculation without stopping the calculation. Thereby, it is possible to perform both of the calculation and the checkpoint acquisition in parallel. In the case where the calculation does not impose an I/O bottleneck, checkpoint acquisition time is overlapped, and execution time is reduced. In this method, checkpoint data including values at different points of time during the acquisition process is acquired. By limiting the use purpose to iteration-method convergence calculations, mixture of the values at the different points of time in the checkpoint data is accepted in the problem that a convergence destination does not depend on an initial value. | 12-06-2012 |
20120311594 | PROGRAM, DEVICE, AND METHOD FOR BUILDING AND MANAGING WEB SERVICES | 12-06-2012 |
20120311595 | Optimizing Workflow Engines - Techniques for implementing a workflow are provided. The techniques include merging a workflow to create a virtual graph, wherein the workflow comprises two or more directed acyclic graphs (DAGs), mapping each of one or more nodes of the virtual graph to one or more physical nodes, and using a message passing scheme to implement a computation via the one or more physical nodes. | 12-06-2012 |
20120317575 | APPORTIONING SUMMARIZED METRICS BASED ON UNSUMMARIZED METRICS IN A COMPUTING SYSTEM - A computer program product includes a computer readable storage medium containing computer code that, when executed by a computer, implements a method including receiving, by a memory device of the computing system, a log file, the log file comprising unsummarized metrics, the unsummarized metrics being related to a plurality of transactions performed by a program in the computing system, and a summarized metric, the summarized metric being related to the program, wherein the summarized metric comprises accumulated data from the plurality of transactions; selecting an unsummarized metric that reflects a distribution of the summarized metric among the plurality of transactions by a processing device of the computing system; and determining an amount of the summarized metric that belongs to a transaction of the plurality of transactions based on the selected unsummarized metric by the processing device of the computing system. | 12-13-2012 |
20120317576 | method for operating an arithmetic unit - A method for operating an arithmetic unit having at least two computation cores. One signature register which has multiple inputs is assigned in each case to at least two of the at least two computation cores. At least one task is executed by the at least two of the at least two computation cores, an algorithm is computed in each task, results computed by each computation core are written into the assigned signature register, and the results written into the signature registers are compared. | 12-13-2012 |
20120317577 | Pattern Matching Process Scheduler with Upstream Optimization - Processes in a message passing system may be launched when messages having data patterns match a function on a receiving process. The function may be identified by an execution pointer within the process. When the match occurs, the process may be added to a runnable queue, and in some embodiments, may be raised to the top of a runnable queue. When a match does not occur, the process may remain in a blocked or non-executing state. In some embodiments, a blocked process may be placed in an idle queue and may not be executed until a process scheduler determines that a message has been received that fulfills a function waiting for input. When the message fulfills the function, the process may be moved to a runnable queue. | 12-13-2012 |
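The scheduler in 20120317577 keeps a blocked process idle until a message matching its awaited pattern arrives, then promotes it, possibly to the top of the runnable queue. A minimal sketch with invented structures (pattern predicates, an inbox field):

```python
from collections import deque

# Minimal sketch: blocked processes sit in an idle queue with a pattern
# predicate; a matching message promotes one to the front of the runnable
# queue. The pattern/inbox structures are invented for illustration.
runnable, idle = deque(), []

def deliver(message):
    for proc in list(idle):
        if proc["pattern"](message):   # message fulfills the waiting function
            idle.remove(proc)
            proc["inbox"] = message
            runnable.appendleft(proc)  # raise to the top of the runnable queue
            return

idle.append({"name": "p1", "pattern": lambda m: m.get("type") == "job"})
deliver({"type": "job", "payload": 42})
print(runnable[0]["name"], runnable[0]["inbox"])  # -> p1 {'type': 'job', ...}
```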
20120324455 | MONAD BASED CLOUD COMPUTING - Systems and methods are provided for using monads to facilitate complex computation tasks in a cloud computing environment. In particular, monads can be employed to facilitate creation and execution of data mining jobs for large data sets. Monads can allow for improved error handling for complex computation tasks. Monads can also facilitate identification of opportunities for improving the efficiency of complex computations. | 12-20-2012 |
20120324456 | MANAGING NODES IN A HIGH-PERFORMANCE COMPUTING SYSTEM USING A NODE REGISTRAR - A method of managing nodes in a high-performance computing (HPC) system, which includes a management subsystem and a job scheduler subsystem, includes providing a node registrar subsystem. Logical node management functions are performed with the node registrar subsystem. Other management functions are performed with the management subsystem using the node registrar subsystem. Job scheduling functions are performed with the job scheduler subsystem using the node registrar subsystem. | 12-20-2012 |
20120324457 | USING COMPILER-GENERATED TASKS TO REPRESENT PROGRAMMING ELEMENTS - The present invention extends to methods, systems, and computer program products for representing various programming elements with compiler-generated tasks. Embodiments of the invention enable access to the future state of a method through a handle to a single and composable task object. For example, an asynchronous method is rewritten to generate and return a handle to an instance of a builder object, which represents one or more future states of the asynchronous method. Information about operation of the asynchronous method is then passed through the handle. Accordingly, state of the asynchronous method is trackable prior to and after completing. | 12-20-2012 |
20120324458 | SCHEDULING HETEROGENOUS COMPUTATION ON MULTITHREADED PROCESSORS - Aspects include computation systems that can identify computation instances that are not capable of being reentrant, or are not reentrant capable on a target architecture, or are non-reentrant as a result of having a memory conflict in a particular execution situation. A system can have a plurality of computation units, each with an independently schedulable SIMD vector. Computation instances can be defined by a program module, and a data element(s) that may be stored in a local cache for a particular computation unit. Each local cache does not maintain coherency controls for such data elements. During scheduling, a scheduler can maintain a list of running (or runnable) instances, and attempt to schedule new computation instances by determining whether any new computation instance conflicts with a running instance and responsively defer scheduling. Memory conflict checks can be conditioned on a flag or other indication of the potential for non-reentrancy. | 12-20-2012 |
20120324459 | PROCESSING HIERARCHICAL DATA IN A MAP-REDUCE FRAMEWORK - Methods and arrangements for processing hierarchical data in a map-reduce framework. Hierarchical data is accepted, and a map-reduce job is performed on the hierarchical data. This performing of a map-reduce job includes determining a cost of partitioning the data, determining a cost of redefining the job and thereupon selectively performing at least one step taken from the group consisting of: partitioning the data and redefining the job. | 12-20-2012 |
20120324460 | Thread Execution in a Computing Environment - A technique for executing normally interruptible threads of a process in a non-preemptive manner includes in response to a first entry associated with a first message for a first thread reaching a head of a run queue, receiving, by the first thread, a first wake-up signal. In response to receiving the wake-up signal, the first thread waits for a global lock. In response to the first thread receiving the global lock, the first thread retrieves the first message from an associated message queue and processes the retrieved first message. In response to completing the processing of the first message, the first thread transmits a second wake-up signal to a second thread whose associated entry is next in the run queue. Finally, following the transmitting of the second wake-up signal, the first thread releases the global lock. | 12-20-2012 |
20120331470 | EMITTING COHERENT OUTPUT FROM MULTIPLE THREADS FOR PRINTF - One embodiment of the present invention sets forth a technique for emitting coherent output from multiple threads for the printf() function. Additionally, parallel (not divergent) execution of the threads for the printf() function is maintained when possible to improve run-time performance. Processing of the printf() function is separated into two tasks, gathering of the per thread data and formatting the gathered data according to the formatting codes for display. The threads emit a coherent stream of contiguous segments, where each segment includes the format string for the printf() function and the gathered data for a thread. The coherent stream is written by the threads and read by a display processor. The display processor executes a single thread to format the gathered data according to the format string for display. | 12-27-2012 |
20120331471 | EXECUTING MOLECULAR TRANSACTIONS - The claimed subject matter provides a method for executing molecular transactions on a distributed platform. The method includes generating a first unique identifier for executing a molecular transaction. The molecular transaction includes a first atomic action. The method further includes persisting a first work list record. The first work list record includes the first unique identifier and a step number for the first atomic action. Additionally, the method includes retrieving, by a first worker process of a runtime, the first work list record. The method also includes executing, by the first worker process, the first atomic action in response to determining that a first successful completion record for the first atomic action does not exist. Further, the method includes persisting, by the first worker process, the first successful completion record for the first atomic action in response to a successful execution of the first atomic action. | 12-27-2012 |
20120331472 | AD HOC GENERATION OF WORK ITEM ENTITY FOR GEOSPATIAL ENTITY BASED ON SYMBOL MANIPULATION LANGUAGE-BASED WORKFLOW ITEM - In one embodiment, a method comprises receiving from a user interface, by a computing device, a request for execution of at least one lambda function in an operation of a geospatial application, the geospatial application having lambda functions for operating on at least one of a workflow item or one or more entities of an ad hoc geospatial directory, the workflow item including at least one of the lambda functions for a workflow in the geospatial application; and executing by the computing device the at least one lambda function to form, in the geospatial application, a work entity that associates the workflow item with one of the entities, the work entity defining execution of the workflow on the one entity. | 12-27-2012 |
20130007751 | METHOD AND SYSTEM FOR SAFE ENQUEUING OF EVENTS - A method and system to facilitate a user level application executing in a first processing unit to enqueue work or task(s) safely for a second processing unit without performing any ring transition. For example, in one embodiment of the invention, the first processing unit executes one or more user level applications, where each user level application has a task to be offloaded to a second processing unit. The first processing unit signals the second processing unit to handle the task from each user level application without performing any ring transition in one embodiment of the invention. | 01-03-2013 |
20130007752 | MIGRATION OF PROCESS INSTANCES - For migrating process instances, first input information describing changes between a first process template and a second process template is received. Second input information describing grouping of said changes is also received. A set of combinations of the first process template and the second process template is determined by applying the changes to the first process template in complete groups as defined by the second input information. | 01-03-2013 |
20130014114 | INFORMATION PROCESSING APPARATUS AND METHOD FOR CARRYING OUT MULTI-THREAD PROCESSING - For a thread that is to pop data off of queue storage, it is first checked whether there is data in the accessed queue storage that can be popped, and the data, if any, is popped. When there is no such data, the thread pushes thread information, including the identification information of its own thread, on the same queue and then releases a processor and shifts to a standby state. For a thread that is to push the data, when there is the thread information in the queue, it is determined that there is a thread waiting for the data, and then the data is sent after the thread information has been popped, which in turn resumes the processing. | 01-10-2013 |
20130014115 | HIERARCHICAL TASK MAPPING - Mapping tasks to physical processors in parallel computing system may include partitioning tasks in the parallel computing system into groups of tasks, the tasks being grouped according to their communication characteristics (e.g., pattern and frequency); mapping, by a processor, the groups of tasks to groups of physical processors, respectively; and fine tuning, by the processor, the mapping within each of the groups. | 01-10-2013 |
20130014116 | UPDATING A WORKFLOW WHEN A USER REACHES AN IMPASSE IN THE WORKFLOW - Provided are a method, system, and article of manufacture for updating a workflow when a user reaches an impasse in the workflow. A workflow program processes user input at a current node in a workflow and provides user input to traverse through at least one workflow path to reach the current node. The workflow program processes user input at the current node to determine whether there is a next node in the workflow for the processed user input. The workflow program transmits information on the current node to an analyzer in response to determining that there is no next node in the workflow. If there are modifications to the current node, then the analyzer transmits to the workflow program an update including the determined modifications to the current node in response to determining the modification. | 01-10-2013 |
20130019245 | SPECIFYING ON THE FLY SEQUENTIAL ASSEMBLY IN SOA ENVIRONMENTS - A method and system for defining an interface of a service in a service-oriented architecture environment. Definitions of atomic tasks of a request or response operation included in a service are received. Unique identifiers corresponding to the atomic tasks are assigned. A sequence map required to implement the service is received. The sequence map is populated with a sequence of the assigned unique identifiers corresponding to a sequence of the atomic tasks of the operation. At runtime, an interface of the service is automatically and dynamically generated to define the service by reading the sequence of unique identifiers in the populated sequence map and assembling the sequence of the atomic tasks based on the read sequence of unique identifiers. | 01-17-2013 |
20130019246 | Managing A Collection Of Assemblies In An Enterprise Intelligence ('EI') Framework - Managing a collection of assemblies in an Enterprise Intelligence (‘EI’) framework, including: identifying, by an assembly collection tool, one or more processes for inclusion in a specification of an assembly, the assembly configured to carry out a business capability upon execution in the EI framework; identifying for each process, by the assembly collection tool, one or more tasks that comprise the process; identifying for each task, by the assembly collection tool, one or more steps that comprise the task; identifying, by the assembly collection tool, a sequence for executing the steps, tasks, and processes in the assembly; generating, in dependence upon the identified processes, tasks, steps, and sequence, the specification of the assembly; and storing the specification in an EI assembly repository. | 01-17-2013 |
20130024864 | Scalable Hardware Mechanism to Implement Time Outs for Pending POP Requests to Blocking Work Queues - Methods and apparatus for minimizing resources for handling time-outs of read requests to a work queue in a work queue memory are described. According to one embodiment of the invention, a work queue execution engine receives a first read request when the work queue is configured in a blocking mode and is empty. A time-out timer is started in response to receiving the first read request. The work queue execution engine receives a second read request while the first read request is still pending, and the work queue is still empty. When the time-out timer expires for the first read request, the work queue execution engine sends an error response for the first read request and restarts the time-out timer for the second read request taking into account an amount of time the second read request has already been pending. | 01-24-2013 |
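The timer-sharing trick in 20130024864 can be pictured with a small sketch: when the oldest pending pop request expires, an error response is sent and the timer is restarted for the next request, credited with the time that request has already spent pending. The queue class, the TIMEOUT constant, and the print calls below are illustrative assumptions.

```python
import time
from collections import deque

TIMEOUT = 5.0  # seconds; illustrative value

class BlockingWorkQueue:
    def __init__(self):
        self.pending = deque()  # (request_id, arrival_time), oldest first

    def add_read_request(self, request_id):
        self.pending.append((request_id, time.monotonic()))

    def remaining_for_head(self):
        """Time left for the oldest pending request, counting time already spent."""
        _, arrived = self.pending[0]
        return max(0.0, TIMEOUT - (time.monotonic() - arrived))

    def expire_head(self):
        """Send an error for the oldest request and restart the timer for the
        next one, taking into account how long it has already been pending."""
        request_id, _ = self.pending.popleft()
        print(f"error response for request {request_id}")
        if self.pending:
            print(f"timer restarted: {self.remaining_for_head():.2f}s remain")
```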
20130024865 | MULTI-CORE PROCESSOR SYSTEM, COMPUTER PRODUCT, AND CONTROL METHOD - A multi-core processor system includes a core configured to determine whether a task to be synchronized with a given task is present; identify among cores making up the multi-core processor and upon determining that a task to be synchronized with the given task is present, a core to which no non-synchronous task that is not synchronized with another task has been assigned, and identify among cores making up the multi-core processor and upon determining that a task to be synchronized with the given task is not present, a core to which no synchronous task to be synchronized with another task has been assigned; and send to the identified core, an instruction to start the given task. | 01-24-2013 |
20130031555 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR CONDITIONALLY EXECUTING RELATED REPORTS IN PARALLEL BASED ON AN ESTIMATED EXECUTION TIME - In accordance with embodiments, there are provided mechanisms and methods for conditionally executing related reports in parallel based on an estimated execution time. These mechanisms and methods for conditionally executing related reports in parallel based on an estimated execution time can provide parallel execution of related reports when predetermined time-based criteria are met. The ability to conditionally provide parallel execution of related reports can reduce the overhead caused by such parallel execution when the time-based criteria are met. | 01-31-2013 |
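The conditional-parallelism test in 20130031555 reduces to a threshold check on an estimate. Below is a hedged sketch; the threshold value, the function names, and the use of a thread pool are assumptions, not the claimed mechanism.

```python
from concurrent.futures import ThreadPoolExecutor

PARALLEL_THRESHOLD_S = 30.0  # illustrative time-based criterion

def run_related_reports(reports, estimate_seconds):
    """Run related reports in parallel only when the estimated execution
    time makes the parallel overhead worthwhile."""
    total = sum(estimate_seconds(r) for r in reports)
    if total < PARALLEL_THRESHOLD_S:
        return [r() for r in reports]          # serial execution
    with ThreadPoolExecutor() as pool:         # criteria met: run in parallel
        return list(pool.map(lambda r: r(), reports))
```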
20130036421 | CONSTRAINED RATE MONOTONIC ANALYSIS AND SCHEDULING - A method for scheduling schedulable entities onto an execution timeline for a processing entity in a constrained environment includes determining available capacity on the execution timeline for the processing entity based on constraints on the execution timeline over a plurality of time periods, wherein schedulable entities can only be scheduled onto the execution timeline during schedulable windows of time that are not precluded by constraints. The method further includes determining whether enough available capacity exists to schedule a schedulable entity with a budget at a rate. The method further includes when enough available capacity exists to schedule the schedulable entity with the budget at the rate, scheduling the schedulable entity onto the execution timeline for the processing entity during a schedulable window of time. The method further includes when the schedulable entity is scheduled onto the execution timeline, updating available capacity to reflect the capacity utilized by the schedulable entity. | 02-07-2013 |
20130036422 | OPTIMIZED DATACENTER MANAGEMENT BY CENTRALIZED TASK EXECUTION THROUGH DEPENDENCY INVERSION - A Datacenter Management Service (DMS) is provided as a platform designed to automate datacenter management tasks that are performed across multiple technology silos and datacenter servers or collections of servers. The infrastructure to perform the automation is provided by integrating heterogeneous task providers and implementations into a set of standardized adapters through dependency inversion. A platform automating datacenter management tasks may include three main components: integration of adapters into an interface allowing a common interface for datacenter task execution, an execution platform that works against the adapters, and implementation of the adapters for a given type of datacenter management task. | 02-07-2013 |
20130042245 | Performing A Global Barrier Operation In A Parallel Computer - Performing a global barrier operation in a parallel computer that includes compute nodes coupled for data communications, where each compute node executes tasks, with one task on each compute node designated as a master task, including: for each task on each compute node until all master tasks have joined a global barrier: determining whether the task is a master task; if the task is not a master task, joining a single local barrier; if the task is a master task, joining the global barrier and the single local barrier only after all other tasks on the compute node have joined the single local barrier. | 02-14-2013 |
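The two-level barrier of 20130042245 can be mimicked with standard barriers: every task on a node joins the local barrier, so the master clears it only after all other tasks on the node have arrived, and only masters touch the global barrier. The class below is an assumed simulation with Python threads standing in for tasks.

```python
import threading

class NodeBarrier:
    def __init__(self, tasks_on_node, global_barrier):
        self.local = threading.Barrier(tasks_on_node)  # one per compute node
        self.global_barrier = global_barrier           # shared by master tasks

    def join(self, is_master):
        # The master passes the local barrier only once every other task
        # on the node has joined it.
        self.local.wait()
        if is_master:
            self.global_barrier.wait()   # only masters join the global barrier

# In a threaded simulation: global_barrier = threading.Barrier(number_of_nodes)
```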
20130042246 | SUSPENSION AND/OR THROTTLING OF PROCESSES FOR CONNECTED STANDBY - One or more techniques and/or systems are provided for assigning power management classifications to a process, transitioning a computing environment into a connected standby state based upon power management classifications assigned to processes, and transitioning the computing environment from the connected standby state to an execution state. That is, power management classifications, such as exempt, throttle, and/or suspend, may be assigned to processes based upon various factors, such as whether a process provides desired functionality and/or whether the process provides functionality relied upon for basic operation of the computing environment. In this way, the computing environment may be transitioned into a low power connected standby state that may continue executing desired functionality, while reducing power consumption by suspending and/or throttling other functionality. Because some functionality may still execute, the computing environment may transition into the execution state in a responsive manner to quickly provide a user with up-to-date information. | 02-14-2013 |
20130042247 | Starvationless Kernel-Aware Distributed Scheduling of Software Licenses - Methods, systems, and apparatuses for implementing shared-license management are provided. Shared-license management may be performed by receiving from a remote client a license request to run a process of a shared-license application; adding the process to a queue maintained for processes waiting for license grants; and reserving at least one license instance for the received license request, the at least one license instance comprising a quantum of CPU time for running the process. | 02-14-2013 |
20130042248 | SYSTEM AND METHOD FOR SUPPORTING PARALLEL THREADS IN A MULTIPROCESSOR ENVIRONMENT - A method and system for supporting parallel processing of threads includes receiving a read request for a container from one or more read threads. Next, parallel read access to the container for each read thread may be controlled with a manager module that is coupled to the container. The manager module may receive a mutating request for the container from one or more mutating threads. While other read threads may be accessing the container, the manager module may provide single mutating access to the container in a series. The manager may monitor a reference count in the collection barrier for tracking a number of threads (whether read and/or mutating threads) which are accessing the collection barrier. The manager module may provide a mutex to a mutating thread for locking the container from any other mutating requests while permitting parallel read requests of the same container during the mutating operation. | 02-14-2013 |
20130047162 | EFFICIENT CACHE REUSE THROUGH APPLICATION DETERMINED SCHEDULING - A method of determining a thread from a plurality of threads to execute a task in a multi-processor computer system. The plurality of threads is grouped into at least one subset associated with a cache memory of the computer system. The task has a type determined by a set of instructions. The method obtains an execution history of the subset of the plurality of threads and determines a weighting for each of the set of instructions and the set of data, the weightings depending on the type of the task. A suitability of the subset of the threads to execute the task, based on the execution history and the determined weightings, is then determined. Subject to the determined suitability of the subset of threads, the method determines a thread from the subset of threads to execute the task using content of the cache memory associated with the subset of threads. | 02-21-2013 |
20130055270 | PERFORMANCE OF MULTI-PROCESSOR COMPUTER SYSTEMS - Embodiments of the invention may improve the performance of multi-processor systems in processing information received via a network. For example, some embodiments may enable configuration of a system such that information received is distributed among multiple processors for efficient processing. A user may select from among multiple configuration options, each configuration option being associated with a particular mode of processing information received. By selecting a configuration option, the user may specify how information received is processed to capitalize on the system's characteristics, such as by aligning processors on the system with certain NICs. As such, the processor(s) aligned with a NIC may perform networking-related tasks associated with information received by that NIC. If initial alignment causes one or more processors to become over-burdened, processing tasks may be dynamically re-distributed to other processors. | 02-28-2013 |
20130055271 | APPARATUS AND METHOD FOR CONTROLLING POLLING - A polling control apparatus includes a scheduler to identify polling applications in an electronic terminal and group together the polling events of the polling applications based on the polling times of the polling applications. A polling application may be in more than one group based on the interval between polling events of the application. A traffic manager of the polling control apparatus manages a data connection between the polling control apparatus and a network to connect the electronic terminal to the network at a polling event start time of the group and maintain the data connection until all polling events in the group are completed. The polling control apparatus may adjust the polling time of applications to be during the data connection or based on an importance parameter of the application. | 02-28-2013 |
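The grouping step in 20130055271 amounts to clustering polling events whose fire times fall within one data-connection window, so a single connection serves the whole group. The window length and function shape below are assumptions for illustration.

```python
WINDOW_S = 30.0  # assumed length of one data connection window

def group_polls(next_poll_times):
    """Group polling events so a single data connection, held open from the
    group's start time, covers every event in the group."""
    if not next_poll_times:
        return []
    events = sorted(next_poll_times)
    groups, current = [], [events[0]]
    for t in events[1:]:
        if t - current[0] <= WINDOW_S:
            current.append(t)        # fits inside the open connection window
        else:
            groups.append(current)   # close the window, start a new group
            current = [t]
    groups.append(current)
    return groups
```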
20130055272 | PARALLEL RUNTIME EXECUTION ON MULTIPLE PROCESSORS - A method and an apparatus that schedule a plurality of executables in a schedule queue for execution in one or more physical compute devices such as CPUs or GPUs concurrently are described. One or more executables are compiled online from a source having an existing executable for a type of physical compute devices different from the one or more physical compute devices. Dependency relations among elements corresponding to scheduled executables are determined to select an executable to be executed by a plurality of threads concurrently in more than one of the physical compute devices. A thread initialized for executing an executable in a GPU of the physical compute devices is initialized for execution in another CPU of the physical compute devices if the GPU is busy with graphics processing threads. | 02-28-2013 |
20130055273 | TERMINAL AND APPLICATION MANAGEMENT METHOD THEREOF - A terminal and application synchronization method is provided for simultaneously updating multiple applications. The application synchronization method includes acquiring a synchronization timing of a previously registered synchronization target application or a common synchronization timing of previously registered synchronization target applications, when an application to be synchronized is added; and adjusting the synchronization timing of the added application in consideration of the previous synchronization timing or the common synchronization timing. | 02-28-2013 |
20130055274 | PROCESS EXECUTION COMPONENTS - Automating processes in an automation platform by specifying a program that, when executed by the platform, implements the process. The program includes a process description and process components. The description includes component initialization instructions having input parameter(s) and initialized-component execution instructions having an execution state. Components have an initialization interface, an execution interface, and at least one of simulation instructions and operation instructions. Components are characterized by output parameters, and are operative upon receiving input parameters via the initialization interface to initialize the component. Initialized components are operative upon receiving an execution state via the execution interface to execute the initialized component in accordance with the execution state and, in the absence of operation instructions, to return a simulated output in the format of the output parameters in accordance with the simulation instructions. The instructions are executed by initializing each component in accordance with the component initialization instructions and executing each initialized component in accordance with the instructions. | 02-28-2013 |
20130061230 | SYSTEMS AND METHODS FOR GENERATING REFERENCE RESULTS USING PARALLEL-PROCESSING COMPUTER SYSTEM - A method for debugging an application includes obtaining first and second fusible operation requests; if there is a break point between the first and the second operation request, generating a first set of compute kernels including programs corresponding to the first operation request, but not to the second operation request, and generating a second set of compute kernels including programs corresponding to the second operation request, but not to the first operation request; if there is no break point between the first and the second operation request, generating a third set of compute kernels which include programs corresponding to a merge of the first and second operation requests; and arranging for execution of either the first and second, or the third set of compute kernels, further including debugging the first or second set of compute kernels when there is a break point set between the first and second operation requests. | 03-07-2013 |
20130061231 | CONFIGURABLE COMPUTING ARCHITECTURE - A configurable computing system for parallel processing of software applications includes an environment abstraction layer (EAL) for abstracting low-level functions to the software applications; a space layer including a distributed data structure; and a kernel layer including a job scheduler for executing parallel processing programs constructing the software applications according to a configurable mode. | 03-07-2013 |
20130067479 | Establishing A Group Of Endpoints In A Parallel Computer - A parallel computer executes a number of tasks, each task includes a number of endpoints and the endpoints are configured to support collective operations. In such a parallel computer, establishing a group of endpoints includes: receiving a user specification of a set of endpoints included in a global collection of endpoints, where the user specification defines the set in accordance with a predefined virtual representation of the endpoints, the predefined virtual representation is a data structure setting forth an organization of tasks and endpoints included in the global collection of endpoints, and the user specification defines the set of endpoints without a user specification of a particular endpoint; and defining a group of endpoints in dependence upon the predefined virtual representation of the endpoints and the user specification. | 03-14-2013 |
20130067480 | PROGRAMMABLE WALL STATION FOR AUTOMATED WINDOW AND DOOR COVERINGS - A programmable wall station system for controlling automated coverings includes at least one automated covering adapted to receive command signals, and a computer which includes a processor and a computer connection port. The processor is programmed to receive location input, position input for the automated coverings, schedule input, and generate scheduled events based on any of the received input. A wall station includes a controller and a station connection port that is linkable to the computer connection port. The controller is programmed to receive scheduled events from the processor when the station connection port and computer connection port are linked to one another and generate command signals based on the scheduled events for receipt by the automated covering to control its operation. | 03-14-2013 |
20130067481 | AUDIO FEEDBACK FOR COMMAND LINE INTERFACE COMMANDS - Exemplary method, system, and computer program product embodiments for audio feedback for command line interface (CLI) commands in a computing environment are provided. In one embodiment, by way of example only, auditory notifications are generated for indicating a completion of CLI commands. The auditory notifications are configurable by user preferences. Additional system and computer program product embodiments are disclosed and provide related advantages. | 03-14-2013 |
20130067482 | METHOD FOR CONFIGURING AN IT SYSTEM, CORRESPONDING COMPUTER PROGRAM AND IT SYSTEM - A method designed to configure an IT system having at least one computing core for executing instruction threads, in which each computing core is capable of executing at least two instruction threads at a time in an interlaced manner, and an operating system, being executed on the IT system, capable of providing instruction threads to each computing core. The method includes a step of configuring the operating system being executed in a mode in which it provides each computing core with a maximum of one instruction thread at a time. | 03-14-2013 |
20130067483 | LOCALITY MAPPING IN A DISTRIBUTED PROCESSING SYSTEM - Topology mapping in a distributed processing system that includes a plurality of compute nodes, including: initiating a message passing operation; including in a message generated by the message passing operation, topological information for the sending task; mapping the topological information for the sending task; determining whether the sending task and the receiving task reside on the same topological unit; if the sending task and the receiving task reside on the same topological unit, using an optimal local network pattern for subsequent message passing operations between the sending task and the receiving task; otherwise, using a data communications network between the topological unit of the sending task and the topological unit of the receiving task for subsequent message passing operations between the sending task and the receiving task. | 03-14-2013 |
20130074080 | Timed Iterator - A computer implemented method for processing tasks is disclosed. The method includes invoking a timed iterator, during an event loop pass, without spawning a new thread, wherein the invoking includes passing a task list and a timeout constraint to the timed iterator. The method further includes executing one or more tasks in the task list for a period of time as specified in the timeout constraint, and relinquishing program control to a caller after the period of time. | 03-21-2013 |
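The timed iterator of 20130074080 is easy to picture in a few lines: run tasks from the list within the calling thread until the timeout budget is spent, then hand the remainder back to the caller. The names are illustrative assumptions.

```python
import time

def timed_iterate(tasks, timeout_s):
    """Execute tasks during one event-loop pass, without spawning a thread,
    and relinquish control to the caller when the budget is exhausted."""
    deadline = time.monotonic() + timeout_s
    remaining = list(tasks)
    while remaining and time.monotonic() < deadline:
        task = remaining.pop(0)
        task()                 # run one task from the task list
    return remaining           # leftover tasks for the next pass
```

For example, `leftover = timed_iterate([job1, job2], 0.005)` (with `job1` and `job2` as hypothetical callables) runs jobs for at most 5 ms and returns whatever did not fit.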
20130074081 | MULTI-THREADED QUEUING SYSTEM FOR PATTERN MATCHING - A multi-threaded processor may support efficient pattern matching techniques. An input data buffer may be provided, which may be shared between a fast path and a slow path. The processor may retire the data units in the input data buffer that are not required, and thus avoids copying the data units used by the slow path. The data management and the execution efficiency may be enhanced as multiple threads may be created to verify potential pattern matches in the input data stream. Also, threads that stall may exit the execution units, allowing other threads to run. Further, the problem of state explosion may be avoided by allowing the creation of parallel threads, using the fork instruction, in the slow path. | 03-21-2013 |
20130074082 | CONTROL METHOD AND CONTROL DEVICE FOR RELEASING MEMORY - A control method and a control device for releasing memory are provided by the embodiments of the present invention. The present invention relates to the technical field of terminal device program management and is used for solving the problem of wasted memory resources on terminal devices. The present invention comprises: obtaining information on the currently running programs in a terminal device; checking, according to the obtained information, which of the currently running programs are in an idle running state; and closing the idle programs and releasing the corresponding memory. According to the present invention, idle programs can be quickly found and closed, thereby saving memory and improving the user experience. | 03-21-2013 |
20130074083 | SYSTEM AND METHOD FOR HANDLING STORAGE EVENTS IN A DISTRIBUTED DATA GRID - A system and method can handle storage events in a distributed data grid. The distributed data grid cluster includes a plurality of cluster nodes storing data partitions distributed throughout the cluster, each cluster node being responsible for a set of partitions. A service thread, executing on at least one of said cluster nodes in the distributed data grid, is responsible for handling one or more storage events. The service thread can use a worker thread to accomplish synchronous event handling without blocking the service thread. | 03-21-2013 |
20130074084 | DYNAMIC OPERATING SYSTEM OPTIMIZATION IN PARALLEL COMPUTING - A method for dynamic optimization of thread assignments for application workloads in a simultaneous multi-threading (SMT) computing environment includes monitoring and periodically recording an operational status of different processor cores each supporting a number of threads of the thread pool of the SMT computing environment and also operational characteristics of different workloads of a computing application executing in the SMT computing environment. The method further can include identifying by way of the recorded operational characteristics a particular one of the workloads demonstrating a threshold level of activity. Finally, the method can include matching a recorded operational characteristic of the particular one of the workloads to a recorded status of a processor core best able amongst the different processor cores to host execution in one or more threads of the particular one of the workloads and directing the matched processor core to host execution of the particular one of the workloads. | 03-21-2013 |
20130074085 | SYSTEM AND METHOD FOR CONTROLLING CENTRAL PROCESSING UNIT POWER WITH GUARANTEED TRANSIENT DEADLINES - Methods, systems and devices that include a dynamic clock and voltage scaling (DCVS) solution configured to compute and enforce performance guarantees to ensure that a processor does not remain in a busy state (e.g., due to transient workloads) for more than a predetermined amount of time above that which is required for that processor to complete its pre-computed steady state workload. The DCVS may adjust the frequency and/or voltage of a processor based on a variable delay to ensure that the processing core only falls behind its steady state workload by, at most, a predefined maximum amount of work, irrespective of the operating frequency or voltage of the processor. | 03-21-2013 |
20130074086 | PIPELINING PROTOCOLS IN MISALIGNED BUFFER CASES - Systems, methods and articles of manufacture are disclosed for effecting a desired collective operation on a parallel computing system that includes multiple compute nodes. The compute nodes may pipeline multiple collective operations to effect the desired collective operation. To select protocols suitable for the multiple collective operations, the compute nodes may also perform additional collective operations. The compute nodes may pipeline the multiple collective operations and/or the additional collective operations to effect the desired collective operation more efficiently. | 03-21-2013 |
20130081026 | PRECONFIGURED SHORT SCHEDULING REQUEST CYCLE - In communication systems, for example Long Term Evolution (LTE) of the 3rd Generation Partnership Project (3GPP), using two cycles (long and short) to configure uplink (UL) scheduling request (SR) resources, together with various ways of configuring a short scheduling request cycle, may add flexibility for a network (NW) to configure scheduling request cycles, allowing a balance between latency and resource reservation. A method, according to certain embodiments, can include detecting that there is data activity associated with a user equipment and activating a short scheduling request cycle upon detecting the data activity. | 03-28-2013 |
20130081027 | Acquiring, presenting and transmitting tasks and subtasks to interface devices - Computationally implemented methods and systems include acquiring one or more subtasks that correspond to portions of one or more tasks configured to be carried out by two or more discrete interface devices, presenting one or more representations corresponding to the one or more subtasks, wherein the one or more representations correspond to the one or more subtasks, and transmitting subtask data corresponding to one or more subtasks in response to selection of one of the one or more corresponding representations. In addition to the foregoing, other method aspects are described in the claims, drawings, and text. | 03-28-2013 |
20130081028 | RECEIVING DISCRETE INTERFACE DEVICE SUBTASK RESULT DATA AND ACQUIRING TASK RESULT DATA - Computationally implemented methods and systems include transmitting one or more subtasks corresponding to at least a portion of one or more tasks of acquiring data requested by a task requestor to a plurality of discrete interface devices, obtaining subtask result data corresponding to a result of the one or more subtasks carried out by two or more discrete interface devices of the plurality of discrete interface devices in an absence of information regarding the task of acquiring data and/or the task requestor, and acquiring task result data corresponding to a result of the task of acquiring data using the obtained subtask result data and information regarding the two or more discrete interface devices from which the subtask result data is obtained. In addition to the foregoing, other aspects are described in the claims, drawings, and text. | 03-28-2013 |
20130081029 | Methods and devices for receiving and executing subtasks - Computationally implemented methods and systems include receiving subtask data including one or more subtasks that correspond to at least one portion of at least one task requested by a task requestor, wherein the one or more subtasks are configured to be carried out by two or more discrete interface devices, carrying out the one or more subtasks in an absence of information regarding the at least one task and/or the task requestor, and transmitting result data comprising a result of carrying out the one or more subtasks. In addition to the foregoing, other aspects are described in the claims, drawings, and text. | 03-28-2013 |
20130081030 | Methods and devices for receiving and executing subtasks - Computationally implemented methods and systems include receiving subtask data including one or more subtasks that correspond to at least one portion of at least one task requested by a task requestor, wherein the one or more subtasks are configured to be carried out by two or more discrete interface devices, carrying out the one or more subtasks in an absence of information regarding the at least one task and/or the task requestor, and transmitting result data comprising a result of carrying out the one or more subtasks. In addition to the foregoing, other aspects are described in the claims, drawings, and text. | 03-28-2013 |
20130081031 | Receiving subtask representations, and obtaining and communicating subtask result data - Computationally implemented methods and systems include receiving one or more representations of one or more subtasks that correspond to at least one portion of at least one task of acquiring data requested by a task requestor, wherein the one or more subtasks are configured to be carried out by at least two discrete interface devices, obtaining subtask result data in an absence of information regarding the at least one task and/or the task requestor, and communicating the result data comprising a result of carrying out the one or more subtasks. In addition to the foregoing, other aspects are described in the claims, drawings, and text. | 03-28-2013 |
20130081032 | ACQUIRING AND TRANSMITTING EVENT RELATED TASKS AND SUBTASKS TO INTERFACE DEVICES - Computationally implemented methods and systems include detecting an occurrence of an event, acquiring one or more subtasks configured to be carried out by two or more discrete interface devices, the subtasks corresponding to portions of one or more tasks of acquiring information related to the event, facilitating transmission of the one or more subtasks to the two or more discrete interface devices, and receiving data corresponding to a result of the one or more subtasks executed by two or more of the two or more discrete interface devices. In addition to the foregoing, other aspects are described in the claims, drawings, and text. | 03-28-2013 |
20130081033 | CONFIGURING INTERFACE DEVICES WITH RESPECT TO TASKS AND SUBTASKS - Computationally implemented methods and systems include configuring a device to acquire one or more subtasks configured to be carried out by at least two discrete interface devices, said one or more subtasks corresponding to portions of one or more tasks of acquiring data requested by a task requestor, facilitating execution of the received one or more subtasks, and controlling access to at least one feature of the device unrelated to the execution of the one or more subtasks, based on successful execution of the one or more subtasks. In addition to the foregoing, other aspects are described in the claims, drawings, and text. | 03-28-2013 |
20130081034 | METHOD FOR DETERMINING ASSIGNMENT OF LOADS OF DATA CENTER AND INFORMATION PROCESSING SYSTEM - A load management system for a data center determines assignment of task loads to information processing devices. The data center includes a plurality of servers cooled by heat radiation, in a room isolated from an outdoor space, that allows air to be taken into and discharged from the room. The proportionality coefficient (Aij) indicates the ratio of the temperature of air taken into a server (j) arranged in the room to a load on a server (i) arranged in the room; each server (i) is compared with the respective servers (j) over the proportionality coefficients (Aij) to obtain its maximum proportionality coefficient (Ai-max). The plurality of processes are assigned to the plurality of servers in order, starting with the server having the smallest of the maximum proportionality coefficients (Ai-max). | 03-28-2013 |
20130081035 | Adaptively Determining Response Time Distribution of Transactional Workloads - An adaptive mechanism is provided that learns the response time characteristics of a workload by measuring the response times of end user transactions, classifies response times into buckets, and dynamically adjusts the response time distribution as response time characteristics of the workload change. The adaptive mechanism maintains the actual distribution across changes and, thus, helps the end user to understand changes of workload behavior that take place over a longer period of time. The mechanism is stable enough to suppress spikes and returns a constant view of workload behavior, which is required for long-term performance analysis and capacity planning. The mechanism distinguishes between an initial learning phase of establishing the distribution and one or multiple reaction periods. The reaction periods can be, for example, a fast reaction period for strong fluctuations of the workload behavior and a slow reaction period for small deviations. | 03-28-2013 |
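The bucketed distribution of 20130081035 can be sketched as a histogram whose boundaries drift slowly toward new workload behavior, so a short spike barely moves them. The bucket boundaries, the blend factor alpha, and the class shape are assumptions of this sketch.

```python
import bisect

class ResponseDistribution:
    def __init__(self, boundaries):
        self.boundaries = list(boundaries)         # bucket upper bounds, seconds
        self.counts = [0] * (len(boundaries) + 1)  # last bucket: above all bounds

    def observe(self, response_time_s):
        """Classify one end-user transaction response time into a bucket."""
        self.counts[bisect.bisect_left(self.boundaries, response_time_s)] += 1

    def react(self, target_boundaries, alpha=0.05):
        """Slow reaction period: blend a small step toward new boundaries,
        suppressing spikes while tracking long-term workload change."""
        self.boundaries = [(1 - alpha) * b + alpha * t
                           for b, t in zip(self.boundaries, target_boundaries)]
```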
20130081036 | PROVIDING AN ELECTRONIC MARKETPLACE TO FACILITATE HUMAN PERFORMANCE OF PROGRAMMATICALLY SUBMITTED TASKS - A method, system, and computer-readable medium is described for facilitating interactions between task requesters who have tasks that are available to be performed and task performers who are available to perform tasks. In some situations, the tasks to be performed are human performance tasks that use cognitive and other mental skills of human task performers, such as to employ judgment, perception and/or reasoning skills of the human task performers. In addition, in some situations the available tasks are submitted by human task requesters via application programs that programmatically invoke one or more application program interfaces of an electronic marketplace in order to request that the tasks be performed and to receive corresponding results of task performance in a programmatic manner, so that an ensemble of unrelated human agents can interact with the electronic marketplace to collectively perform a wide variety and large number of tasks. | 03-28-2013 |
20130081037 | PERFORMING COLLECTIVE OPERATIONS IN A DISTRIBUTED PROCESSING SYSTEM - Methods, apparatuses, and computer program products for performing collective operations on a hybrid distributed processing system including: determining by at least one task that a parent of the task has failed to send the task data through the tree topology; and determining whether to request the data from a grandparent of the task or a peer of the task in the same tier in the tree topology; and if the task requests the data from the grandparent, requesting the data and receiving the data from the grandparent of the task through the second networking topology; and if the task requests the data from a peer of the task in the same tier in the tree, requesting the data and receiving the data from a peer of the task through the second networking topology. | 03-28-2013 |
20130081038 | MULTIPROCESSOR COMPUTING DEVICE - A computing device includes a first processor configured to operate at a first speed and consume a first amount power and a second processor configured to operate at a second speed and consume a second amount of power. The first speed is greater than the second speed and the first amount of power is greater than the second amount of power. The computing device also includes a scheduler configured to assign processes to the first processor only if the processes utilize their entire timeslice. | 03-28-2013 |
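The placement rule of 20130081038 is a one-line decision once timeslice usage is known; this sketch assumes a dictionary per process and illustrative core names.

```python
def pick_processor(process, timeslice_s):
    """Assign a process to the fast core only if it used its entire
    previous timeslice; otherwise the slow core saves power."""
    if process["last_run_s"] >= timeslice_s:
        return "fast_core"   # CPU-bound work benefits from the higher speed
    return "slow_core"       # blocked early; the low-power core suffices
```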
20130086589 | Acquiring and transmitting tasks and subtasks to interface - A system includes a task request receiving module configured to receive task data related to a request to acquire data, a related subtask acquisition module configured to acquire subtasks related to the task data received by the task request receiving module, a two-or-more discrete interface devices selection module configured to select discrete interface devices by analyzing at least one of a status and a characteristic of discrete interface devices, a two-or-more discrete interface devices subtask transmission module configured to transmit one or more subtasks acquired by the related subtask acquisition module to two or more discrete interface devices selected by the two-or-more discrete interface device selection module, and an executed subtask result data receiving module configured to receive result data from at least one of the two-or-more discrete interface devices to which the two-or-more discrete interface devices subtask transmission module transmitted one or more subtasks. | 04-04-2013 |
20130086590 | MANAGING CAPACITY OF COMPUTING ENVIRONMENTS AND SYSTEMS THAT INCLUDE A DATABASE - Capacity of a computing environment that includes a database can be maintained at a target capacity by regulating the usage of one or more of the resources by one or more tasks or activities (e.g., database work). Moreover, the usage of the resource(s) can be regulated based on the extent of use of the resource(s) by one or more other activities not being regulated (e.g., non-database activities that cannot be regulated by a database system). In other words, a target capacity can be maintained by effectively adjusting the extent by which one or more tasks can access one more resources in consideration of the extent by which one or more of the resources are used by one or more other tasks or activities that are not being regulated with respect to their access of the resource(s). | 04-04-2013 |
20130086591 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR CONTROLLING A PROCESS USING A PROCESS MAP - In accordance with embodiments, there are provided mechanisms and methods for controlling a process using a process map. These mechanisms and methods for controlling a process using a process map can enable process operations to execute in order without necessarily having knowledge of one another. The ability to provide the process map can avoid a requirement that the operations themselves be programmed to follow a particular sequence, which can further improve the ease with which the sequence of operations may be changed. | 04-04-2013 |
20130091504 | DATA FLOWS AND THEIR INTERACTION WITH CONTROL FLOWS - A method and apparatus for processing data by a computer and a method of determining data storage requirements of a computer for carrying out a data processing task. | 04-11-2013 |
20130097606 | Dynamic Scheduling for Frames Representing Views of a Geographic Information Environment - An exemplary method for scheduling jobs in frames representing views of a geographic information environment is disclosed. An exemplary method includes determining a remaining frame period in a frame representing a view of a geographic information environment. The exemplary method also includes identifying a dynamic job in a scheduling queue. The dynamic job has a non-preemptive section that is between a start of the job and a preemption point of the job. The exemplary method further includes determining an estimated execution time for executing the job. When the estimated execution time is not greater than the remaining frame period, the exemplary method includes executing the non-preemptive section of the job in the frame. When the estimated execution time is greater than the remaining frame period, the exemplary method includes postponing the execution of the job in the frame. | 04-18-2013 |
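The frame-budget test of 20130097606 compares a job's estimated execution time against the remaining frame period before running its non-preemptive section. A hedged sketch, with assumed names:

```python
def schedule_frame(queue, remaining_frame_s, estimate_seconds):
    """Run each dynamic job's non-preemptive section in the current frame
    only if its estimate fits; otherwise postpone the job to a later frame."""
    postponed = []
    while queue:
        job = queue.pop(0)
        cost = estimate_seconds(job)
        if cost <= remaining_frame_s:
            job()                        # execute the non-preemptive section
            remaining_frame_s -= cost
        else:
            postponed.append(job)        # postpone execution in this frame
    queue.extend(postponed)
```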
20130097607 | METHOD, APPARATUS, AND SYSTEM FOR ADAPTIVE THREAD SCHEDULING IN TRANSACTIONAL MEMORY SYSTEMS - An apparatus and method is described herein for adaptive thread scheduling in a transactional memory environment. A number of conflicts in a thread over time are tracked. And if the conflicts exceed a threshold, the thread may be delayed (adaptively scheduled) to avoid conflicts between competing threads. Moreover, a more complex version may track a number of transaction aborts within a first thread that are caused by a second thread over a period, as well as a total number of transactions executed by the first thread over the period. From the tracking, a conflict ratio is determined for the first thread with regard to the second thread. And when the first thread is to be scheduled, it may be delayed if the second thread is running and the conflict ratio is over a conflict ratio threshold. | 04-18-2013 |
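The conflict-ratio check of 20130097607 can be written as a small predicate over per-thread counters; the dataclass, the counter names, and the threshold value are assumptions.

```python
from dataclasses import dataclass, field

CONFLICT_RATIO_THRESHOLD = 0.3  # illustrative threshold

@dataclass
class ThreadStats:
    total_transactions: int = 0
    aborts_caused_by: dict = field(default_factory=dict)  # thread id -> aborts

def should_delay(first: ThreadStats, second_id, running_ids) -> bool:
    """Delay scheduling the first thread if the second is running and the
    first's conflict ratio with regard to the second is over the threshold."""
    if second_id not in running_ids or first.total_transactions == 0:
        return False
    aborts = first.aborts_caused_by.get(second_id, 0)
    return aborts / first.total_transactions > CONFLICT_RATIO_THRESHOLD
```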
20130104132 | COMPOSING ANALYTIC SOLUTIONS - An approach for composing an analytic solution is provided. After associating descriptive schemas with web services and web-based applets, a set of input data sources is enumerated for selection. A desired output type is received. Based on the descriptive schemas that specify required inputs and outputs of the web services and web-based applets, combinations of web services and web-based applets are generated. The generated combinations achieve a result of the desired output type from one of the enumerated input data sources. Each combination is derived from available web services and web-based applets. The combinations include one or more workflows that provide an analytic solution. A workflow whose result satisfies the business objective may be saved. Steps in a workflow may be iteratively refined to generate a workflow whose result satisfies the business objective. | 04-25-2013 |
20130104133 | CONSTRUCTING CHANGE PLANS FROM COMPONENT INTERACTIONS - Techniques for constructing change plans from one or more component interactions are provided. For example, a computer-implemented technique includes observing at least one interaction between two or more components of at least one distributed computing system, consolidating the at least one interaction into at least one interaction pattern, and using the at least one interaction pattern to construct at least one change plan useable for managing the at least one distributed computing system. In another computer-implemented technique, a partial order of two or more changes is determined from at least one component interaction, and is automatically transformed into at least one ordered task, wherein the at least one ordered task is linked by at least one temporal ordering constraint, and is used to generate at least one change plan useable for managing the distributed computing system is generated, wherein the change plan is based on at least one requested change. | 04-25-2013 |
20130104134 | COMPOSING ANALYTIC SOLUTIONS - An approach for composing an analytic solution is provided. After associating descriptive schemas with web services and web-based applets, a set of input data sources is enumerated for selection. A desired output type is received. Based on the descriptive schemas that specify required inputs and outputs of the web services and web-based applets, combinations of web services and web-based applets are generated. The generated combinations achieve a result of the desired output type from one of the enumerated input data sources. Each combination is derived from available web services and web-based applets. The combinations include one or more workflows that provide an analytic solution. A workflow whose result satisfies the business objective may be saved. Steps in a workflow may be iteratively refined to generate a workflow whose result satisfies the business objective. | 04-25-2013 |
20130104135 | DATA CENTER OPERATION - In response to a map task distributed by a job tracker, a map task tracker executes the map task to generate a map output including version information. The map task tracker stores the generated map outputs. The map task tracker informs the job tracker of related information of the map output. In response to a reduce task distributed by the job tracker, the reduce task tracker acquires the map outputs for key names including given version information from the map task trackers, wherein the acquired map outputs include the map outputs with the given version information and historical map outputs with the version information prior to the given version information. The reduce task tracker executes the reduce task on the acquired map outputs. | 04-25-2013 |
20130104136 | OPTIMIZING ENERGY USE IN A DATA CENTER BY WORKLOAD SCHEDULING AND MANAGEMENT - Techniques are described for scheduling received tasks in a data center in a manner that accounts for operating costs of the data center. Embodiments of the invention generally include comparing cost-saving methods of scheduling a task to the operating parameters of completing a task—e.g., a maximum amount of time allotted to complete a task. If the task can be scheduled to reduce operating costs (e.g., rescheduled to a time when power is cheaper) and still be performed within the operating parameters, then that cost-saving method is used to create a workload plan to implement the task. In another embodiment, several cost-saving methods are compared to determine the most profitable. | 04-25-2013 |
20130104137 | MULTIPROCESSOR SYSTEM - A multiprocessor system including a plurality of processors, each including a task scheduler that determines a task execution order of the tasks in a task set to be executed by the processors within a task period which is defined as a period in repeated execution of the task sets, and processors that execute the respective tasks; and a scheduler management device having a command unit configured to issue a command for at least one of the task schedulers to change the task execution order, wherein each of the task schedulers, when receiving the command from the command unit, changes the task execution order of the processors. | 04-25-2013 |
20130111483 | AUTHORING AND USING PERSONALIZED WORKFLOWS | 05-02-2013 |
20130111484 | Identifying and Correcting Hanging Scheduled Tasks | 05-02-2013 |
20130111485 | Network architecture and protocol for cluster of lithography machines | 05-02-2013 |
20130111486 | APPARATUS AND METHOD FOR EXCLUSIVE CONTROL | 05-02-2013 |
20130111487 | Service Orchestration for Intelligent Automated Assistant | 05-02-2013 |
20130117749 | Provisioning and Managing an Application Platform - Platform management may be provided. First, a package may be received. The received package may then be separated into a plurality of deployment groups. Next, a plurality of tasks may be created for deploying the plurality of deployment groups. Then the plurality of tasks may be executed. | 05-09-2013 |
20130117750 | Method and System for Workitem Synchronization - Method, system, and computer program product embodiments for synchronizing workitems on one or more processors are disclosed. The embodiments include executing a barrier skip instruction by a first workitem from the group, and responsive to the executed barrier skip instruction, reconfiguring a barrier to synchronize other workitems from the group in a plurality of points in a sequence without requiring the first workitem to reach the barrier in any of the plurality of points. | 05-09-2013 |
20130117751 | COMPUTE TASK STATE ENCAPSULATION - One embodiment of the present invention sets forth a technique for encapsulating compute task state that enables out-of-order scheduling and execution of the compute tasks. The scheduling circuitry organizes the compute tasks into groups based on priority levels. The compute tasks may then be selected for execution using different scheduling schemes. Each group is maintained as a linked list of pointers to compute tasks that are encoded as task metadata (TMD) stored in memory. A TMD encapsulates the state and parameters needed to initialize, schedule, and execute a compute task. | 05-09-2013 |
20130117752 | HEURISTICS-BASED SCHEDULING FOR DATA ANALYTICS - A scheduler may receive a plurality of jobs for scheduling of execution thereof on a plurality of computing nodes. An evaluation module may provide a common interface for each of a plurality of scheduling algorithms. An algorithm selector may utilize the evaluation module in conjunction with benchmark data for a plurality of jobs of varying types to associate one of the plurality of scheduling algorithms with each job type. A job comparator may compare a current job for scheduling against the benchmark data to determine a current job type of the current job. The evaluation module may further schedule the current job for execution on the plurality of computing nodes, based on the current job type and the associated scheduling algorithm. | 05-09-2013 |
20130117753 | Many-core Process Scheduling to Maximize Cache Usage - A process scheduler for multi-core and many-core processors may place related executable elements that share common data on the same cores. When executed on a common core, sequential elements may store data in memory caches that are very quickly accessed, as opposed to main memory which may take many clock cycles to access the data. The sequential elements may be identified from messages passed between elements or other relationships that may link the elements. In one embodiment, a scheduling graph may be constructed that contains the executable elements and relationships between those elements. The scheduling graph may be traversed to identify related executable elements and a process scheduler may attempt to place consecutive or related executable elements on the same core so that commonly shared data may be retrieved from a memory cache rather than main memory. | 05-09-2013 |
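The cache-affinity idea of 20130117753, walking the scheduling graph and pinning chains of related elements to one core, might look like the following. The edge representation, the acyclic-chain assumption, and the round-robin core choice are all illustrative.

```python
def assign_cores(elements, edges, num_cores):
    """Pin each chain of related executable elements (linked, e.g., by
    message passing) to a single core so shared data stays in that core's
    cache. Assumes each element has at most one successor and no cycles."""
    next_of = dict(edges)             # element -> related successor
    has_pred = set(next_of.values())
    assignment, core = {}, 0
    for element in elements:
        if element in has_pred:
            continue                  # not a chain head; reached via its chain
        node = element
        while node is not None:       # place the whole chain on one core
            assignment[node] = core
            node = next_of.get(node)
        core = (core + 1) % num_cores
    return assignment
```

For example, `assign_cores(["a", "b", "c"], [("a", "b")], 2)` places the related elements a and b on core 0 and the unrelated element c on core 1.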
20130117754 | MULTI-CORE SYSTEM AND SCHEDULING METHOD - A multi-core system includes multiple processor cores; a bus connected to the processor cores; multiple peripheral devices accessed by the processor cores via the bus; profile information including information concerning access of the peripheral devices by each task assigned to the processor cores; a monitor that based on the profile information, monitors access requests to the peripheral devices from tasks under execution at the processor cores and prohibits an access request that causes contention at the bus; and a scheduler that when the monitor prohibits an access request that causes contention at the bus, switches to a different task. | 05-09-2013 |
20130125126 | INFORMATION PROCESSING APPARATUS AND METHOD FOR CONTROLLING INFORMATION PROCESSING APPARATUS - An information processing apparatus includes a user interface, a switching unit, and a computer. The user interface is for a user that operates a first processing unit that runs a first operating system or a second processing unit that runs a second operating system. The switching unit selectively switches between the first processing unit and the second processing unit to be associated with the user interface. The computer functions as the first processing unit. The computer functions as the second processing unit. The computer runs a first application program on the first operating system. The computer activates, on the second operating system, a second application program related to the first application program, in a state in which the first processing unit is associated with the user interface. The computer controls the switching unit upon completion of the activation of the second application program. | 05-16-2013 |
20130125127 | Task Backpressure and Deletion in a Multi-Flow Network Processor Architecture - Described embodiments generate tasks corresponding to packets received by a network processor. A source processing module sends task messages including a task identifier and a task size to a destination processing module. The destination module receives the task message and determines a queue in which to store the task. Based on a used cache counter of the queue and a number of cache lines for the received task, the destination module determines whether the queue has reached a usage threshold. If the queue has reached the threshold, the destination module sends a backpressure message to the source module. Otherwise, if the queue has not reached the threshold, the destination module accepts the received task, stores data of the received task in the queue, increments the used cache counter for the queue corresponding to the number of cache lines for the received task, and processes the received task. | 05-16-2013 |
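The admission test of 20130125127 reduces to a counter comparison on the destination side; the threshold, the cache-line size, and the class layout below are assumptions.

```python
QUEUE_THRESHOLD_LINES = 1024   # assumed usage threshold, in cache lines
CACHE_LINE_BYTES = 64

class TaskQueue:
    def __init__(self):
        self.used_cache_lines = 0  # the queue's used cache counter
        self.tasks = []

    def on_task_message(self, task_id, task_size_bytes):
        """Accept a task if its cache lines fit under the threshold;
        otherwise answer the source module with backpressure."""
        lines = -(-task_size_bytes // CACHE_LINE_BYTES)  # ceiling division
        if self.used_cache_lines + lines > QUEUE_THRESHOLD_LINES:
            return "backpressure"      # source must hold further tasks
        self.tasks.append(task_id)
        self.used_cache_lines += lines
        return "accepted"
```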
20130125128 | REALIZING JUMPS IN AN EXECUTING PROCESS INSTANCE - A method for realizing jumps in an executing process instance can be provided. The method can include suspending an executing process instance, determining a current wavefront for the process instance and computing both a positive wavefront difference for a jump target relative to the current wavefront and also a negative wavefront difference for the jump target relative to the current wavefront. The method also can include removing activities from consideration in the process instance and also adding activities for consideration in the process instance both according to the computed positive wavefront difference and the negative wavefront difference, creating missing links for the added activities, and resuming executing of the process instance at the jump target. | 05-16-2013 |
20130132961 | MAPPING TASKS TO EXECUTION THREADS - Tasks from a task list are mapped to execution threads of a parallel processing device that are free. The parallel processing device is allowed to perform the tasks mapped to its execution threads for a predetermined number of execution cycles. When the parallel processing device has performed the mapped tasks for the predetermined number of execution cycles, it is suspended from further performing the tasks to allow the parallel processing device to determine which execution threads have completed performance of mapped tasks and are therefore free. | 05-23-2013 |
20130132962 | SCHEDULER COMBINATORS - Scheduler combinators facilitate scheduling. One or more combinators, or operators, can be applied to an existing scheduler to compose a new scheduler or decompose an existing scheduler into multiple facets. | 05-23-2013 |
20130139164 | Business Process Optimization - The present disclosure involves systems, software, and computer implemented methods for optimizing business processes. One process includes identifying a process model to be compiled, the process model including a plurality of process steps for performing a process associated with the process model, identifying at least two sequential process steps within the process model for inclusion within a single transactional boundary, combining the identified at least two sequential process steps within the single transactional boundary, and compiling the identified process model with the identified at least two sequential process steps combined within the single transactional boundary. In some instances, the process model may be represented in a business process modeling notation (BPMN). Combining the identified sequential process steps within the single transactional boundary can include modifying the process model to enclose the sequential process steps into the single transactional boundary. The transactional boundary may be a transactional sub-process in BPMN. | 05-30-2013 |
20130139165 | SYSTEM AND METHOD FOR DISTRIBUTING PROCESSING OF COMPUTER SECURITY TASKS - In a computer system, processing of security-related tasks is delegated to various agent computers. According to various embodiments, a distributed computing service obtains task requests to be performed for the benefit of beneficiary computers, and delegates those tasks to one or more remote agent computers for processing. The delegation is based on a suitability determination as to whether each of the remote agent computers is suitable to perform the processing. Suitability can be based on an evaluation of such parameters as computing capacity and current availability of the remote agent computers against the various tasks to be performed and their corresponding computing resource requirements. This evaluation can be performed according to various embodiments by the agent computers, the distributed computing service, or by a combination thereof. | 05-30-2013 |
20130139166 | DISTRIBUTED DATA STREAM PROCESSING METHOD AND SYSTEM - Embodiments of the present application relate to a distributed data stream processing method, a distributed data stream processing device, a computer program product for processing a raw data stream and a distributed data stream processing system. A distributed data stream processing method is provided. The method includes dividing a raw data stream into a real-time data stream and historical data streams, processing the real-time data stream and the historical data streams in parallel, separately generating respective results of the processing of the real-time data stream and the historical data streams, and integrating the generated processing results. | 05-30-2013 |
20130139167 | Identification of Thread Progress Information - Embodiments relate to a method, apparatus and program product for capturing thread-specific state timing information. The method includes associating a time field and a time valid field with a thread data structure and setting a current time state by determining a previous time state and updating it according to a previously identified method for setting time states. The method further includes determining the status of a time valid bit to see if it is set to valid or invalid. When the status is valid, the information is made available for reporting. | 05-30-2013 |
20130139168 | Scaleable Status Tracking Of Multiple Assist Hardware Threads - A processor includes an initiating hardware thread, which initiates a first assist hardware thread to execute a first code segment. Next, the initiating hardware thread sets an assist thread executing indicator in response to initiating the first assist hardware thread. The set assist thread executing indicator indicates whether assist hardware threads are executing. A second assist hardware thread initiates and begins executing a second code segment. In turn, the initiating hardware thread detects a change in the assist thread executing indicator, which signifies that both the first assist hardware thread and the second assist hardware thread terminated. As such, the initiating hardware thread evaluates assist hardware thread results in response to both of the assist hardware threads terminating. | 05-30-2013 |
20130145372 | EMBEDDED SYSTEMS AND METHODS FOR THREADS AND BUFFER MANAGEMENT THEREOF - Embedded systems are provided, which include a processing unit and a memory. The processing unit simultaneously executes a first thread, having a flag, for performing a data acquisition operation and a second thread for performing a data process and output operation for the data acquired in the data acquisition operation. The flag indicates whether the state of the first thread is an execution state or a sleep state. The memory, which is coupled to the processing unit, provides a shared buffer for the first and second threads. Before the second thread is executed, the flag is checked to determine whether to execute it: the second thread is executed when the flag indicates the sleep state, while execution of the second thread is suspended when the flag indicates the execution state. | 06-06-2013 |
20130145373 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - There is provided an information processing apparatus for controlling execution of a plurality of threads which run on a plurality of calculation cores connected to a memory including a plurality of banks. A first selection unit is configured to select, out of threads which process a data group of interest, a thread as a continuing thread which receives data from another thread, wherein the number of accesses for the bank associated with the selected thread is less than a predetermined count. A second selection unit is configured to select, out of the threads which process the data group of interest, a thread as a transmitting thread which transmits data to the continuing thread. | 06-06-2013 |
20130152090 | Resolving Resource Contentions - A computer-implemented method for managing access to a shared resource of a process may include identifying a plurality of process steps, each of which, when executed, accesses the shared resource at the same time. The method may also include rearranging at least one of the process steps of the plurality of process steps to access the shared resource at a different time. | 06-13-2013 |
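The entry above describes rearranging process steps so that no two of them touch the shared resource at the same moment. A minimal sketch of one such rearrangement in Python follows; the one-tick-shift policy, the step names, and the integer time model are illustrative assumptions, not the patented method.

def rearrange_steps(steps):
    """steps: list of (name, access_time); returns steps with conflicts
    shifted to later, unoccupied ticks so no two share an access time."""
    taken = set()
    result = []
    for name, t in sorted(steps, key=lambda s: s[1]):
        while t in taken:          # conflict: another step owns this tick
            t += 1                 # move this step to the next free tick
        taken.add(t)
        result.append((name, t))
    return result

if __name__ == "__main__":
    plan = [("validate", 3), ("persist", 3), ("audit", 3)]
    print(rearrange_steps(plan))   # [('validate', 3), ('persist', 4), ('audit', 5)]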
20130152091 | Optimized Judge Assignment under Constraints - Described is a technology by which an assignment model is computed to distribute labeling tasks among judging entities (judges). The assignment model is optimized by obtaining accuracy-related data of the judges, e.g., by probing the judges with labeling tasks having a gold standard label and evaluating the judges' labels against the gold standard labels, and optimizing for accuracy. Optimization may be based upon one or more other constraints, such as per-judge cost and/or quota. | 06-13-2013 |
20130152092 | GENERIC VIRTUAL PERSONAL ASSISTANT PLATFORM - A method for assisting a user with one or more desired tasks is disclosed. For example, an executable, generic language understanding module and an executable, generic task reasoning module are provided for execution in the computer processing system. A set of run-time specifications is provided to the generic language understanding module and the generic task reasoning module, comprising one or more models specific to a domain. A language input is then received from a user, an intention of the user is determined with respect to one or more desired tasks, and the user is assisted with the one or more desired tasks, in accordance with the intention of the user. | 06-13-2013 |
20130152093 | Multi-Channel Time Slice Groups - A time slice group (TSG) is a grouping of different streams of work (referred to herein as “channels”) that share the same context information. The set of channels belonging to a TSG are processed in a pre-determined order. However, when a channel stalls while processing, the next channel with independent work can be switched to fully load the parallel processing unit. Importantly, because each channel in the TSG shares the same context information, a context switch operation is not needed when the processing of a particular channel in the TSG stops and the processing of a next channel in the TSG begins. Therefore, multiple independent streams of work are allowed to run concurrently within a single context increasing utilization of parallel processing units. | 06-13-2013 |
20130152094 | ERROR CHECKING IN OUT-OF-ORDER TASK SCHEDULING - One embodiment of the present invention sets forth a technique for error-checking a compute task. The technique involves receiving a pointer to a compute task, storing the pointer in a scheduling queue, determining that the compute task should be executed, retrieving the pointer from the scheduling queue, determining via an error-check procedure that the compute task is eligible for execution, and executing the compute task. | 06-13-2013 |
20130152095 | Expedited Module Unloading For Kernel Modules That Execute Read-Copy Update Callback Processing Code - A technique for expediting the unloading of an operating system kernel module that executes read-copy update (RCU) callback processing code in a computing system having one or more processors. According to embodiments of the disclosed technique, an RCU callback is enqueued so that it can be processed by the kernel module's callback processing code following completion of a grace period in which each of the one or more processors has passed through a quiescent state. An expediting operation is performed to expedite processing of the RCU callback. The RCU callback is then processed and the kernel module is unloaded. | 06-13-2013 |
20130152096 | APPARATUS AND METHOD FOR DYNAMICALLY CONTROLLING PREEMPTION SECTION IN OPERATING SYSTEM - An apparatus for dynamically controlling a preemption section includes a preemption manager configured to monitor whether a system context has changed, and if the system context has changed, set a current preemptive mode according to the changed system context to dynamically control a preemption section of a kernel. Therefore, even when an application requiring real-time processing, such as a health-care application, co-exists with a normal application, optimal performance may be ensured. | 06-13-2013 |
20130160016 | Allocating Compute Kernels to Processors in a Heterogeneous System - System and method embodiments for optimally allocating compute kernels to different types of processors, such as CPUs and GPUs, in a heterogeneous computer system are disclosed. These include comparing a kernel profile of a compute kernel to respective processor profiles of a plurality of processors in a heterogeneous computer system, selecting at least one processor from the plurality of processors based upon the comparing, and scheduling the compute kernel for execution on the selected at least one processor. | 06-20-2013 |
20130167151 | JOB SCHEDULING BASED ON MAP STAGE AND REDUCE STAGE DURATION - A plurality of job profiles is received. Each job profile describes a job to be executed, and each job includes map tasks and reduce tasks. An execution duration for a map stage including the map tasks and an execution duration for a reduce stage including the reduce tasks of each job is estimated. The jobs are scheduled for execution based on the estimated execution duration of the map stage and the estimated execution duration of the reduce stage of each job. | 06-27-2013 |
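The entry above orders jobs by their estimated map-stage and reduce-stage durations but does not name a concrete ordering rule. One classical choice for sequencing jobs through two consecutive stages is Johnson's rule, sketched below; the job names and duration estimates are hypothetical.

def johnsons_rule(jobs):
    """jobs: dict name -> (map_duration, reduce_duration).
    Returns an order minimizing makespan for a two-stage flow shop."""
    front, back = [], []
    for name, (m, r) in sorted(jobs.items(), key=lambda kv: min(kv[1])):
        if m <= r:
            front.append(name)    # short map stage: run early
        else:
            back.insert(0, name)  # short reduce stage: run late
    return front + back

if __name__ == "__main__":
    estimates = {"jobA": (4, 9), "jobB": (8, 3), "jobC": (5, 5)}
    print(johnsons_rule(estimates))  # ['jobA', 'jobC', 'jobB']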
20130167152 | MULTI-CORE-BASED COMPUTING APPARATUS HAVING HIERARCHICAL SCHEDULER AND HIERARCHICAL SCHEDULING METHOD - A computing apparatus includes a global scheduler configured to schedule a job group on a first layer, and a local scheduler configured to schedule jobs belonging to the job group according to a set guide on a second layer. The computing apparatus also includes a load monitor configured to collect resource state information associated with states of physical resources and set a guide with reference to the collected resource state information and set policy. | 06-27-2013 |
20130174164 | METHOD AND SYSTEM FOR MANAGING ONE OR MORE RECURRENCIES - The present disclosure discloses methods and systems for managing one or more recurrencies. The method includes defining one or more recurrency tasks, each task having associated recurrency parameters. The method further includes identifying a recurrency period wherein the one or more recurrency tasks are disaggregated into individual scheduled events over the span of the recurrency period. Thereafter, a user-defined exclusionary schedule is applied to the disaggregated set of events. Subsequently, the edited recurrent tasks are output in a pre-defined file format. | 07-04-2013 |
20130174165 | FAULT TOLERANT DISTRIBUTED LOCK MANAGER - A lock manager running on a machine may write a first entry for a first process to a queue associated with a resource. If the first entry is not at a front of the queue, the lock manager identifies a second entry that is at the front of the queue, and determines whether a second process associated with the second entry is operational. If the second process is not operational, the lock manager removes the second entry from the queue. Additionally, if the queue becomes unavailable, the lock manager may initiate failover to a backup copy of the queue. | 07-04-2013 |
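A minimal sketch of the queue-based lock flow described above, assuming a liveness probe (is_alive) is available from the cluster; the failover to a backup queue is omitted, and the process ids are illustrative.

from collections import deque

class LockQueue:
    def __init__(self, is_alive):
        self.queue = deque()       # entries: process ids, front = lock owner
        self.is_alive = is_alive   # callable: pid -> bool

    def request(self, pid):
        """Enqueue pid; return True once pid reaches the front (owns lock)."""
        if pid not in self.queue:
            self.queue.append(pid)
        while self.queue[0] != pid:
            front = self.queue[0]
            if not self.is_alive(front):   # dead owner: drop its entry
                self.queue.popleft()
            else:
                return False               # live owner ahead; keep waiting
        return True

alive = {"p1": False, "p2": True}
q = LockQueue(lambda pid: alive[pid])
q.request("p1")
print(q.request("p2"))   # True: p1 found non-operational, removed from queue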
20130174166 | Efficient Sequencer - Techniques are disclosed for efficiently sequencing operations performed in multiple threads of execution in a computer system. In one set of embodiments, sequencing is performed by receiving an instruction to advance a designated next ticket value, incrementing the designated next ticket value in response to receiving the instruction, searching a waiters list of tickets for an element having the designated next ticket value, wherein searching does not require searching the entire waiters list, and the waiters list is in a sorted order based on the values of the tickets, and removing the element having the designated next ticket value from the list using a single atomic operation. The element may be removed by setting a waiters list head element, in a single atomic operation, to refer to an element in the list having a value based upon the designated next ticket value. | 07-04-2013 |
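A minimal sketch of the ticket/waiters-list idea in Python: advancing the designated next ticket value, binary-searching the sorted waiters list (so the whole list is not scanned), and removing the matching element. The single-atomic-operation removal of the claim is modeled here by a plain pop; the names are illustrative.

import bisect

class Sequencer:
    def __init__(self):
        self.next_ticket = 0
        self.waiters = []          # sorted ticket values of blocked threads

    def wait(self, ticket):
        bisect.insort(self.waiters, ticket)   # keep list in sorted order

    def advance(self):
        """Advance the designated next ticket and release its waiter."""
        self.next_ticket += 1
        i = bisect.bisect_left(self.waiters, self.next_ticket)
        if i < len(self.waiters) and self.waiters[i] == self.next_ticket:
            return self.waiters.pop(i)        # wake exactly this waiter
        return None

s = Sequencer()
s.wait(2)
s.wait(1)
print(s.advance(), s.advance())   # 1 2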
20130174167 | INTELLIGENT INCLUSION/EXCLUSION AUTOMATION - Computer systems and computer program products for automating tasks in a computing environment are provided. In one such embodiment, by way of example only, if an instant task is not found in either a list of included tasks or a list of excluded tasks, at least one of the following is performed: the instant task is compared with previous instances of the task, if any; the instant task is analyzed, including an input/output (I/O) sequence for the instant task, to determine if the instant task is similar to an existing task; and the instant task is considered as a possible candidate for automation. If the instant task is determined to be an automation candidate, it is added to the list of included tasks; otherwise it is added to the list of excluded tasks. | 07-04-2013 |
20130174168 | POLICY-BASED SCALING OF COMPUTING RESOURCES IN A NETWORKED COMPUTING ENVIRONMENT - Embodiments of the present invention provide an approach for policy-driven (e.g., price-sensitive) scaling of computing resources in a networked computing environment (e.g., a cloud computing environment). In a typical embodiment, a workload request for a customer will be received and a set of computing resources available to process the workload request will be identified. It will then be determined whether the set of computing resources are sufficient to process the workload request. If the set of computing resources are under-allocated (or are over-allocated), a resource scaling policy may be accessed. The set of computing resources may then be scaled based on the resource scaling policy, so that the workload request can be efficiently processed while maintaining compliance with the resource scaling policy. | 07-04-2013 |
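A minimal sketch of price-sensitive scaling under a policy, assuming a policy with a spending ceiling and a per-adjustment step; all field names (unit_price, max_spend, step) are hypothetical rather than taken from the patent.

def scale(allocated, required, policy):
    """Return a new allocation that moves toward `required` without
    exceeding the policy's spending ceiling."""
    target = min(required, policy["max_spend"] // policy["unit_price"])
    step = policy["step"]
    if allocated < target:
        return min(allocated + step, target)   # under-allocated: scale up
    if allocated > target:
        return max(allocated - step, target)   # over-allocated: release surplus
    return allocated

policy = {"unit_price": 4, "max_spend": 40, "step": 2}
print(scale(allocated=3, required=12, policy=policy))  # 5: one step toward the policy-capped target of 10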
20130174169 | UPDATING WORKFLOW NODES IN A WORKFLOW - Provided are a method, system, and article of manufacture for updating workflow nodes in a workflow. A workflow program processes user input at one node in a workflow comprised of nodes and workflow paths connecting the nodes, wherein the user provides user input to traverse at least one workflow path to reach the current node. The workflow program transmits information on the current node to an analyzer. The analyzer processes the information on the current node to determine whether there are modifications to at least one subsequent node following the current node over at least one workflow path from the current node. The analyzer transmits to the workflow program an update including modifications to the at least one subsequent node in response to determining the modifications. | 07-04-2013 |
20130174170 | PARALLEL COMPUTER, AND JOB INFORMATION ACQUISITION METHOD FOR PARALLEL COMPUTER - A parallel computer includes a plurality of calculation nodes and a management node. A calculation node includes a retention control unit that retains job information in a retention unit in association with an identification number. The management node includes a retention control unit that retains the job information in a retention unit and that retains the job information as a snapshot in a case where job information of the same identification number for a calculation node is detected in the retention unit. The retention unit of the calculation node includes a retention region enabling retention of job information corresponding to a plurality of periods, and the retention unit of the management node includes a retention region enabling retention of the job information corresponding to the plurality of periods with respect to each of the calculation nodes. | 07-04-2013 |
20130174171 | INTELLIGENT INCLUSION/EXCLUSION AUTOMATION - Methods, computer systems, and computer program products for automating tasks in a computing environment are provided. In one such embodiment, by way of example only, if an instant task is not found in either a list of included tasks or a list of excluded tasks, at least one of the following is performed: the instant task is compared with previous instances of the task, if any; the instant task is analyzed, including an input/output (I/O) sequence for the instant task, to determine if the instant task is similar to an existing task; and the instant task is considered as a possible candidate for automation. If the instant task is determined to be an automation candidate, it is added to the list of included tasks; otherwise it is added to the list of excluded tasks. | 07-04-2013 |
20130179890 | LOGICAL DEVICE DISTRIBUTION IN A STORAGE SYSTEM - Utilization of processor modules in a storage system is monitored. A varying load pattern including at least one of a bursty behavior or an oscillatory behavior of the processor modules is identified. Logical devices are then distributed between the processor modules accordingly. | 07-11-2013 |
20130185725 | SCHEDULING AND EXECUTION OF COMPUTE TASKS - One embodiment of the present invention sets forth a technique for selecting a first processor included in a plurality of processors to receive work related to a compute task. The technique involves analyzing state data of each processor in the plurality of processors to identify one or more processors that have already been assigned one compute task and are eligible to receive work related to the one compute task, receiving, from each of the one or more processors identified as eligible, an availability value that indicates the capacity of the processor to receive new work, selecting a first processor to receive work related to the one compute task based on the availability values received from the one or more processors, and issuing, to the first processor via a cooperative thread array (CTA), the work related to the one compute task. | 07-18-2013 |
20130185726 | Method for Synchronous Execution of Programs in a Redundant Automation System - A method for synchronous execution of programs in a redundant automation system comprising at least two subsystems, wherein at least one request for execution of one of the programs is taken as a basis for starting a scheduling pass, and during this scheduling pass a decision is taken as to whether this one program is executed on each of the subsystems. Suitable measures are proposed which allow all programs a fair and deterministic share of the program execution based on their priorities. | 07-18-2013 |
20130185727 | METHOD FOR MANAGING TASKS IN A MICROPROCESSOR OR IN A MICROPROCESSOR ASSEMBLY - This method includes steps for the parallel management of a first list and of a second list. The first list corresponds to a list of tasks to be carried out. The second list corresponds to a list of variables indicating the presence or absence of tasks to be carried out. The list of tasks is managed in a “FIFO” manner, that is to say that the first task inputted into the list is the first task to be executed. A task interruption is managed using a “Test And Set” function executed on the elements of the second list, the “Test And Set” function being a function which cannot be interrupted and including the following steps: reading the value of the element in question, storing the read value in a local memory, and assigning a predetermined value to the element which has just been read. | 07-18-2013 |
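Python has no uninterruptible Test-And-Set instruction, so the sketch below simulates the claimed read/store/assign sequence under a guard lock standing in for hardware atomicity; the task payload and single-flag setup are illustrative assumptions.

import threading
from collections import deque

class TestAndSet:
    """Simulated uninterruptible Test-And-Set: read the element's value,
    store the read value, and assign a predetermined value, all under a
    guard lock so the sequence cannot be interrupted."""
    def __init__(self):
        self._value = 0
        self._guard = threading.Lock()

    def test_and_set(self):
        with self._guard:
            old = self._value        # 1) read the value of the element
            self._value = 1          # 3) assign the predetermined value
            return old               # 2) return the locally stored read value

    def clear(self):
        with self._guard:
            self._value = 0

tasks = deque()                      # first list: FIFO, oldest task runs first
flag = TestAndSet()                  # second list would hold one flag per slot

tasks.append("sample-adc")
flag.clear()
if flag.test_and_set() == 0:         # flag was clear: claim the pending task
    print(tasks.popleft())           # -> sample-adc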
20130191832 | MANAGEMENT OF THREADS WITHIN A COMPUTING ENVIRONMENT - Threads of a computing environment are managed to improve system performance. Threads are migrated between processors to take advantage of single thread processing mode, when possible. As an example, inactive threads are migrated from one or more processors, potentially freeing up one or more processors to execute an active thread. Active threads are migrated from one processor to another to transform multiple threading mode processors into single thread mode processors. | 07-25-2013 |
20130191833 | SYSTEM AND METHOD FOR ASSURING PERFORMANCE OF DATA SCRUBBING OPERATIONS - A method may include determining, based on at least one data scrubbing parameter associated with at least one storage resource, that the at least one storage resource is scheduled for a data scrubbing operation. The method may also include causing the at least one storage resource to transition from a low-power mode to a normal-power mode in order to perform a data scrubbing operation in response to a determination that the at least one storage resource is scheduled for a data scrubbing operation. The method may additionally include determining, based on the at least one data scrubbing parameter, that the data scrubbing operation is scheduled to cease. The method may further comprise causing the at least one storage resource to transition from the normal-power mode to the low-power mode in response to a determination that the data scrubbing operation is scheduled to cease. | 07-25-2013 |
20130191834 | CONTROL METHOD OF INFORMATION PROCESSING DEVICE - A method of controlling an information processing device includes selectively switching a first processor for executing a first operating system or a second processor for executing a second operating system to a user interface; storing a data table in which a first application program operating on the first operating system is associated with a second application program operating on the second operating system; sending information pertinent to activation of the first or second application program to a server device; receiving a result of a process from the server device, the process being performed by the server device for associating application programs based on the received information; updating the data table based on the received result; and activating the second application program, which is associated with the first application program being activated in the data table, in a state where the first processor has been switched to the user interface. | 07-25-2013 |
20130191835 | DISTRIBUTED PROCESSING DEVICE AND DISTRIBUTED PROCESSING SYSTEM - A distributed processing device includes an object storage unit that stores a continuation object including at least one of plural processes constituting a task and containing data of the task that is being processed, a processing unit that executes the continuation object retrieved from the object storage unit, and a storage processing unit that stores, in an execution state file, data stored in the object storage unit. | 07-25-2013 |
20130198750 | WIZARD-BASED SYSTEM FOR BUSINESS PROCESS SPECIFICATION - Methods and systems assist non-programmer users in specifying business processes. Users submit high-level descriptions of simple, incomplete, or incorrect business processes in softcopy form illustrating the orchestration of services (or the control flow), and get prompted with suggestions to specify the services' data flow. The methods and systems herein assist in specifying data flowing between services, but also detect missing edges and services, for which they likewise provide data flow suggestions. The suggestions are computed and ranked using heuristics, and displayed through a wizard to the user. | 08-01-2013 |
20130198751 | INCREASED DESTAGING EFFICIENCY - For increased destaging efficiency by smoothing destaging tasks to reduce long input/output (I/O) read operations in a computing environment, destaging tasks are calculated according to one of a standard time interval and a variable recomputed destaging task interval. The destaging of storage tracks between a desired number of destaging tasks and a current number of destaging tasks is smoothed according to this calculation. | 08-01-2013 |
20130198752 | INCREASED DESTAGING EFFICIENCY - For increased destaging efficiency by smoothing destaging tasks to reduce long input/output (I/O) read operations in a computing environment, destaging tasks are calculated according to one of a standard time interval and a variable recomputed destaging task interval. The destaging of storage tracks between a desired number of destaging tasks and a current number of destaging tasks is smoothed according to this calculation. | 08-01-2013 |
20130205298 | APPARATUS AND METHOD FOR MEMORY OVERLAY - A memory overlay apparatus includes an internal memory that includes a dirty bit indicating a changed memory area, a memory management unit that controls an external memory to store only changed data so that only data actually being used by a task during overlay is stored and restored, and a direct memory access (DMA) management unit that confirms the dirty bit when the task is changed and that moves a data area of the task between the internal memory and the external memory. | 08-08-2013 |
20130212584 | METHOD FOR DISTRIBUTED CACHING AND SCHEDULING FOR SHARED NOTHING COMPUTER FRAMEWORKS - In a distributed caching and scheduling method for a shared nothing computing framework, the framework includes an aggregator node and multiple computing nodes with local processor, storage unit and memory. The method includes separating a dataset into multiple data segments; distributing the data segments across the local storage units; and for each computing node, copying the data segment from the storage unit to the memory; processing the data segment to compute a partial result; and sending the partial result to the aggregator node. The method includes determining the data segment stored in local memory of computing nodes; and coordinating additional computing jobs based on the determination of the data segment stored in local memory. Coordinating can include scheduling new computing jobs using the data segment already stored in local memory, or to maximize the use of the data segments already stored in local memories. | 08-15-2013 |
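A minimal sketch of the cache-aware coordination step described above: schedule a new job on a node whose local memory already holds the needed data segment, falling back to the least-loaded node. The node names, cache map, and load map are hypothetical.

def assign(job_segment, node_cache, node_load):
    """node_cache: node -> set of cached segment ids;
    node_load: node -> number of queued jobs.
    Prefer cache hits, then the lowest load."""
    hits = [n for n, segs in node_cache.items() if job_segment in segs]
    candidates = hits or list(node_cache)        # fall back to any node
    return min(candidates, key=lambda n: node_load[n])

cache = {"n1": {1, 2}, "n2": {3}, "n3": set()}
load = {"n1": 5, "n2": 0, "n3": 0}
print(assign(3, cache, load))   # n2: segment 3 is already in its local memory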
20130212585 | DATA PROCESSING SYSTEM OPERABLE IN SINGLE AND MULTI-THREAD MODES AND HAVING MULTIPLE CACHES AND METHOD OF OPERATION - In some embodiments, a data processing system includes a processing unit, a first load/store unit LSU and a second LSU configured to operate independently of the first LSU in single and multi-thread modes. A first store buffer is coupled to the first and second LSUs, and a second store buffer is coupled to the first and second LSUs. The first store buffer is used to execute a first thread in multi-thread mode. The second store buffer is used to execute a second thread in multi-thread mode. The first and second store buffers are used when executing a single thread in single thread mode. | 08-15-2013 |
20130212586 | SHARED RESOURCES IN A DOCKED MOBILE ENVIRONMENT - First and second data handling systems provide for shared resources in a docked mobile environment. The first data handling system maintains a set of execution tasks and has a system dock interface to physically couple to the second data handling system. The first data handling system assigns a task to be executed by the second data handling system while the two systems are physically coupled. | 08-15-2013 |
20130212587 | SHARED RESOURCES IN A DOCKED MOBILE ENVIRONMENT - Sharing resources in a docked mobile environment comprises maintaining a set of execution tasks within a first data handling system having a system dock interface to physically couple to a second data handling system and assigning a task to be executed by the second data handling system while the two systems are physically coupled. The described method further comprises detecting a physical decoupling of the first and second data handling systems and displaying an execution result of the task via a first display element of the first data handling system in response to such a detection. | 08-15-2013 |
20130212588 | PERSISTENT DATA STORAGE TECHNIQUES - A database is maintained that stores data persistently. Tasks are accepted from task sources. At least some of the tasks have competing requirements for use of regions of the database. Each of the regions includes data that is all either locked or not locked for writing at a given time. Each of the regions is associated with an available processor. For each of the tasks, jobs are defined each of which requires write access to regions that are to be accessed by no more than one of the processors. Jobs are distributed for concurrent execution by the associated processors. | 08-15-2013 |
20130212589 | Method and System for Controlling a Scheduling Order Per Category in a Music Scheduling System - A system and method for controlling a scheduling order per category is disclosed. A scheduling order can be designated for the delivery and playback of multimedia content (e.g., music, news, other audio, advertising, etc.) with respect to particular slots within the scheduling order. The scheduling order can be configured to include a forward order per category or a reverse order per category with respect to the playback of the multimedia content in order to control the scheduling order for the eventual airplay of the multimedia content over a radio station or network of associated radio stations. A reverse scheduling technique provides an ideal rotation of songs when a pre-programmed show interferes with a normal rotation. Any rotational compromises can be buried in off-peak audience listening hours of the programming day using the disclosed reverse scheduling technique. | 08-15-2013 |
20130212590 | LOCK RESOLUTION FOR DISTRIBUTED DURABLE INSTANCES - A command log selectively logs commands that have the potential to create conflicts based on instance locks. Lock times can be used to distinguish cases where the instance is locked by the application host at a previous logical time from cases where the instance is concurrently locked by the application host through a different name. A logical command clock is also maintained for commands issued by the application host to a state persistence system, with introspection to determine which issued commands may potentially take a lock. The command processor can resolve conflicts by pausing command execution until the effects of potentially conflicting locking commands become visible and examining the lock time to distinguish among copies of a persisted state storage location. | 08-15-2013 |
20130219397 | Methods and Apparatus for State Objects in Cluster Computing - Embodiments of a mobile state object for storing and transporting job metadata on a cluster computing system may use a database as an envelope for the metadata. A state object may include a database that stores the job metadata and wrapper methods. A small database engine may be employed. Since the entire database exists within a single file, complex, extensible applications may be created on the same base state object, and the state object can be sent across the network with the state intact, along with history of the object. An SQLite technology database engine, or alternatively other single file relational database engine technologies, may be used as the database engine. To support the database engine, compute nodes on the cluster may be configured with a runtime library for the database engine via which applications or other entities may access the state file database. | 08-22-2013 |
20130219398 | METHOD FOR EXECUTING A UTILITY PROGRAM, COMPUTER SYSTEM AND COMPUTER PROGRAM PRODUCT - A method of executing a utility program on a computer system having a system management chip includes activating a graphics memory in the computer system, downloading a memory map including the utility program to be executed by the system management chip, storing the memory map in the graphics memory by the system management chip, copying the memory map from the graphics memory to a main memory in the computer system, and executing the utility program with a processor in the computer system. | 08-22-2013 |
20130219399 | MECHANISM FOR INSTRUCTION SET BASED THREAD EXECUTION OF A PLURALITY OF INSTRUCTION SEQUENCERS - In an embodiment, a method is provided. The method includes managing user-level threads on a first instruction sequencer in response to executing user-level instructions on a second instruction sequencer that is under control of an application level program. A first user-level thread is run on the second instruction sequencer and contains one or more user-level instructions. A first user-level instruction has at least one of 1) a field that makes reference to one or more instruction sequencers or 2) an implicit reference, via a pointer, to code that specifically addresses one or more instruction sequencers when the code is executed. | 08-22-2013 |
20130219400 | ENERGY-AWARE COMPUTING ENVIRONMENT SCHEDULER - A method includes receiving a process request, identifying a current state of a device in which the process request is to be executed, calculating a power consumption associated with an execution of the process request, and assigning an urgency for the process request, where the urgency corresponds to a time-variant parameter to indicate a measure of necessity for the execution of the process request. The method further includes determining whether the execution of the process request can be delayed to a future time or not based on the current state, the power consumption, and the urgency, and causing the execution of the process request, or causing a delay of the execution of the process request to the future time, based on a result of the determining. | 08-22-2013 |
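A minimal sketch of the delay decision, assuming a simple rule that combines the three claimed inputs (current state, power consumption, urgency); the thresholds and the (deadline, weight) encoding of urgency are illustrative assumptions.

def should_delay(device_state, power_cost, urgency, now):
    """urgency: (deadline, weight) -- a time-variant measure of necessity.
    Delay when the device is on battery and power is dear, unless urgent."""
    deadline, weight = urgency
    if now >= deadline:
        return False                    # must run: urgency has peaked
    if device_state == "on-battery" and power_cost * weight > 1.0:
        return True                     # defer the expensive request
    return False

print(should_delay("on-battery", power_cost=0.8, urgency=(100, 2.0), now=10))  # True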
20130227575 | SCHEDULING FUNCTION IN A WIRELESS CONTROL DEVICE - A field device for use in a process control system includes a scheduling module configured to receive a time input which specifies a scheduled time for performing a scheduled action or a scheduled sequence of actions and to receive an action input which specifies the scheduled action or the scheduled sequence of actions. At the scheduled time, the scheduling module automatically initiates the scheduled action or the scheduled sequence of actions. After initiating the action or the sequence of actions, the scheduling module causes an initiation status indicating that the action or the sequence of actions has been initiated to be sent to a host and/or causes the initiation status to be stored in a local memory of the field device. | 08-29-2013 |
20130227576 | METHOD AND APPARATUS FOR CONTROLLING TASK EXECUTION - A method and an apparatus for controlling task execution are disclosed in the present invention which relates to the field of wireless communications technologies, addressing the problem that power consumption of a terminal in standby mode is wasted because tasks of the terminal in a standby state are fixed and corresponding time periods cannot be flexibly set for different tasks. The method includes: receiving standby state parameters sent by a terminal management module; configuring the terminal according to the standby parameters so that the terminal enters a sleeping state; enabling a timer to start timing; stopping timing when the time of the timer reaches the time point at which a current task will be executed; configuring the terminal according to working state parameters of the terminal so that the terminal enters a working state; and receiving paging information sent by the terminal management module. | 08-29-2013 |
20130227577 | Automated Administration Using Composites of Atomic Operations - Various techniques for automatically administering software systems using composites of atomic operations are disclosed. One method, which can be performed by an automation server, involves accessing information representing an activity that includes a first operation and a second operation. The information indicates that the second operation processes a value that is generated by the first operation. The method generates a sequence number as well as an output structure, which associates the sequence number with an output value generated by the first operation, and an input structure, which associates the sequence number with an input value consumed by the second operation. The method sends a message, via a network, to an automation agent implemented on a computing device. The computing device implements a software target of the first operation. The message includes information identifying the first operation as well as the output structure. | 08-29-2013 |
20130227578 | Method and System for Controlling a Scheduling Order Per Category in a Music Scheduling System - A system and method for controlling a scheduling order per category is disclosed. A scheduling order can be designated for the delivery and playback of multimedia content (e.g., music, news, other audio, advertising, etc.) with respect to particular slots within the scheduling order. The scheduling order can be configured to include a forward order per category or a reverse order per category with respect to the playback of the multimedia content in order to control the scheduling order for the eventual airplay of the multimedia content over a radio station or network of associated radio stations. A reverse scheduling technique provides an ideal rotation of songs when a pre-programmed show interferes with a normal rotation. Any rotational compromises can be buried in off-peak audience listening hours of the programming day using the disclosed reverse scheduling technique. | 08-29-2013 |
20130227579 | INFORMATION PROCESSING APPARATUS, COMPUTER PRODUCT, AND INFORMATION PROCESSING METHOD - An information processing apparatus includes a computer configured to: set, for each thread of a thread group having write requests for a common variable in a given process, a storage location for that thread's value of the common variable, so that writes are redirected from the specific storage location defined in the write requests to the storage locations respectively set for the threads; store, for each thread of the thread group, the value of the common variable to the storage location set for the thread; and, when all the threads in the thread group have ended, read out each stored value of the common variable in the order of execution of the threads defined in the given process and, in that order, overwrite the value in the specific storage location with each read value of the common variable. | 08-29-2013 |
20130227580 | Information Delivery Method and Device - The application discloses an information delivery method and device. The device includes: a receiving module configured to receive information to be delivered and a name of a target task which are sent from a source task; a searching module configured to search, according to the name of the target task, a preset global relationship table for a core number corresponding to the name of the target task; and a sending module configured to search a multi-core system for a core corresponding to the core number and to send the information to be delivered to the task corresponding to the name of the target task in the core. The information delivery method and device provided by the disclosure enable the delivery of information within a multi-core system or between multi-core systems with high reliability. | 08-29-2013 |
20130232495 | SCHEDULING ACCELERATOR TASKS ON ACCELERATORS USING GRAPHS - An application programming interface is provided that allows programmers to encapsulate snippets of executable code of a program into accelerator tasks. A graph is generated with a node corresponding to each of the accelerator tasks with edges that represent the data flow and data dependencies between the accelerator tasks. The generated graph is used by a scheduler to schedule the execution of the accelerator tasks across multiple accelerators. The application programming interface further provides an abstraction of the various memories of the accelerators called a datablock. The programmer can store and use data stored on the datablocks without knowing where on the accelerators the data is stored. The application programming interface can further schedule the execution of accelerator tasks to minimize the amount of data that is copied to and from the accelerators based on the datablocks and the generated graph. | 09-05-2013 |
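A minimal sketch of graph-ordered dispatch for encapsulated accelerator tasks, using Kahn's topological sort over the data-flow edges; the datablock placement and copy minimization described above are omitted, and the four-task pipeline is hypothetical.

from collections import deque

def schedule(tasks, edges):
    """tasks: iterable of task names; edges: (producer, consumer) data flows.
    Yields tasks in an order that respects every data dependency."""
    indeg = {t: 0 for t in tasks}
    out = {t: [] for t in tasks}
    for a, b in edges:
        out[a].append(b)
        indeg[b] += 1
    ready = deque(t for t in tasks if indeg[t] == 0)
    while ready:
        t = ready.popleft()
        yield t                          # dispatch to a free accelerator
        for nxt in out[t]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:          # all inputs produced: runnable
                ready.append(nxt)

print(list(schedule(["load", "fft", "scale", "store"],
                    [("load", "fft"), ("fft", "scale"), ("scale", "store")])))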
20130247051 | IMPLEMENTATION OF A PROCESS BASED ON A USER-DEFINED SUB-TASK SEQUENCE - Various embodiments of systems and methods for implementation of a process based on a user-defined sub-task sequence are described herein. The process includes a set of sub-tasks. A plan owner defines the sequence in which the one or more sub-tasks are to be processed. In one embodiment, the plan owner defines the sequence by setting a fore-task for each sub-task in the sequence. The plan owner also defines a tester who would be processing each sub-task in the set of sub-tasks. A workflow template is triggered for implementing the process. The workflow template loops on the steps defined in the workflow template for processing each sub-task in the set of sub-tasks according to the sequence defined by the plan owner. Each tester after processing the sub-task assigned to them submits the processed sub-task. The process is implemented after all the sub-tasks in the set of sub-tasks have been processed. | 09-19-2013 |
20130247052 | Simulating Stream Computing Systems - A method, an apparatus and an article of manufacture for generating a synthetic workload for stream computing. The method includes generating an undirected graph that meets a node degree distribution parameter, obtaining a user-defined parameter for at least one feature for at least one stream computing application, and manipulating the undirected graph to generate a synthetic workload for stream computing in compliance with the user-defined parameter for the at least one feature for the at least one stream computing application. | 09-19-2013 |
20130247053 | TRANSACTION-BASED SHARED MEMORY PROTECTION FOR HIGH AVAILABILITY ENVIRONMENTS - Various systems and methods for implementing transaction-based shared memory protection for high availability environments are described herein. A processing thread is executed, with the processing thread configured to access a multi-stage critical section, the multi-stage critical section having a first and a second stage, the first stage to store a staging area of a plurality of operations to be executed in the memory shared with at least one other processing thread, and the second stage to execute the operations from the staging area. The thread is further configured to determine whether the staging area includes an indication of successfully completing the first stage and to execute the operations when there is such an indication. | 09-19-2013 |
20130247054 | GPU Distributed Work-Item Queuing - Methods and systems are provided for graphics processing unit distributed work-item queuing. One or more work-items of a wavefront are queued into a first level queue of a compute unit. When one or more additional work-items exist, a queuing of the additional work-items into a second level queue of the compute unit is performed. The queuing of the work-items into the first and second level queue is performed based on an assignment technique. | 09-19-2013 |
20130247055 | Automatic Execution of Actionable Tasks - Provided is a method for automatic execution of actionable tasks, which facilitates the creation of a platform for one-point management of multiple activities and events by enabling automatic performance of various tasks associated with sending wishes and gifts, travel check-ins, travel planning, banking, dining out, making reservations, and other activities. The method may utilize data associated with events or activities from one or more input sources. The method may include identifying one or more actionable tasks, creating one or more automatically executable tasks based on the one or more actionable tasks, executing the created automatically executable actionable tasks, and presenting the results to the user. | 09-19-2013 |
20130247056 | VIRTUAL MACHINE CONTROL METHOD AND VIRTUAL MACHINE - A virtual machine control method and a virtual machine are provided with the dual objectives of utilizing the NIC of a virtual machine that creates sub-virtual machines, operated by a VMM on virtual machines generated by a hypervisor, so as to avoid software copying by the VMM and to prevent bandwidth deterioration during live migration or when adding sub-virtual machines. In a virtual machine operating plural virtualization software on a physical machine including a CPU, memory, and a multi-queue NIC, a virtual multi-queue NIC is loaded in the virtual machine; for virtual queues included in the virtual multi-queue NIC, physical queues configuring the multi-queue NIC are assigned to virtual queues whose usage has started, and the physical queues are allowed direct access to the virtual machine memory. | 09-19-2013 |
20130247057 | MULTI-TASK PROCESSING APPARATUS - A multi-task processing apparatus includes a sequencer for switching and processing multiple task data and a memory for storing the task data, wherein the memory stores/reads the task data between a volatile memory cell and a plurality of associated non-volatile memory cells when the task data is switched. | 09-19-2013 |
20130247058 | SYSTEM FOR SCHEDULING THE EXECUTION OF TASKS BASED ON LOGICAL TIME VECTORS - A method for scheduling interdependent tasks on a multi-task system includes: associating with each task a logical time vector indicative of the current occurrence of the task and the occurrences of other tasks on which the current occurrence depends; defining a partial order on the set of logical time vectors, such that a first vector is greater than a second vector if all first vector components are greater than or equal to the respective second vector components, and at least one component of the first vector is strictly greater than the respective component of the second vector; after an execution of a task, updating its logical time vector for a new occurrence by incrementing at least one component of the vector; comparing the logical time vectors according to the partial order relation; and executing each task having a logical time vector smaller than all other logical time vectors. | 09-19-2013 |
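A minimal sketch of the vector partial order and the selection rule, read here as running the minimal elements (tasks whose vector is not greater than any other task's); the two-component vectors and task names are illustrative.

def greater(a, b):
    """a > b iff every component >= and at least one strictly >."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def runnable(clocks):
    """Tasks whose logical time vector is not greater than any other's."""
    return [t for t, v in clocks.items()
            if not any(greater(v, w) for u, w in clocks.items() if u != t)]

clocks = {"t0": (1, 0), "t1": (1, 1), "t2": (0, 2)}
print(runnable(clocks))     # ['t0', 't2']: t1's vector dominates t0's, so t1 waits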
20130254772 | VERIFICATION OF COMPLEX WORKFLOWS THROUGH INTERNAL ASSESSMENT OR COMMUNITY BASED ASSESSMENT - A method of implementing verification of a complex workflow includes partitioning the workflow into modules, wherein the modules have inputs, processing steps and outputs; selecting, from the workflow, one of the partitioned modules for independent verification by challenge thereof; running, with a computing device, a challenge of the selected module, the challenge comprising comparing reference outputs to outputs of the selected module, wherein reference inputs are received by the selected module and the reference outputs are generated using the reference inputs and one of an ideal performing module or a well-established module; determining whether outputs of the selected module meet verification criteria with respect to the reference outputs, and based on the determining, implementing one of: declaring the selected module verified; subdividing the selected module into smaller modules and repeating the challenge on the smaller modules; or declaring the selected module not verified. | 09-26-2013 |
20130254773 | CONTROL APPARATUS, CONTROL METHOD, COMPUTER PROGRAM PRODUCT, AND SEMICONDUCTOR DEVICE - According to an embodiment, a control apparatus for controlling a target device includes an estimation unit and an issuing unit. The estimation unit is configured to estimate a second amount of energy required for the entire system including the target device and the control apparatus until the target device completes an execution of its function that is requested in accordance with an execution request for the target device. The issuing unit is configured to issue a control command for causing the target device to execute its function in accordance with the execution request, when the first amount of energy at a time point of receiving the execution request is greater than the second amount of energy. | 09-26-2013 |
20130254774 | METHOD AND SYSTEM FOR AUTONOMIC APPLICATION PROGRAM SPAWNING IN A COMPUTING ENVIRONMENT - A method and system for self-managing an application program in a computing environment is provided. One implementation involves spawning a primary application for execution in the computing environment; the primary application monitoring the status of the primary application and the computing environment resources while executing; and, upon detecting a first status threshold, the primary application spawning a secondary application in the computing environment, wherein the secondary application comprises a lower-functionality version of the primary application, and the primary application terminating. | 09-26-2013 |
20130263138 | Collectively Loading An Application In A Parallel Computer - Collectively loading an application in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: identifying, by a parallel computer control system, a subset of compute nodes in the parallel computer to execute a job; selecting, by the parallel computer control system, one of the subset of compute nodes in the parallel computer as a job leader compute node; retrieving, by the job leader compute node from computer memory, an application for executing the job; and broadcasting, by the job leader to the subset of compute nodes in the parallel computer, the application for executing the job. | 10-03-2013 |
20130263139 | MANAGING EXECUTION OF APPLICATIONS IN A RUNTIME ENVIRONMENT - Systems, methods and techniques relating to managing execution of applications in a runtime environment are described. A described technique includes identifying logic for executing an application code, identifying a first portion of the application code associated with the identified logic and executed by a first runtime container, identifying a second portion of the application code associated with the identified logic, determining, based on a policy or a characteristic associated with the application code, a second runtime container to execute the second portion of the application code, and dispatching a request and the identified logic to the second runtime container for executing the second portion of the application code. | 10-03-2013 |
20130263140 | WINDOW-BASED SCHEDULING USING A KEY-VALUE DATA STORE - A scheduling system for scheduling executions of tasks within a distributed computing system may include an entry generator configured to store, using at least one key-value data store, time windows for scheduled executions of tasks therein using a plurality of nodes of the distributed computing system. The entry generator may be further configured to generate scheduler entries for inclusion within a time window of the time windows, each scheduler entry identifying a task of the tasks and an associated schedule for execution thereof. The system may further include an execution engine configured to select the time window and execute corresponding tasks of the included scheduler entries in order. | 10-03-2013 |
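A minimal sketch of window bucketing and in-order execution, using a plain dict to stand in for the key-value data store; the 60-second window size and the (run_at, task) entry layout are illustrative assumptions.

WINDOW = 60  # seconds per time window

store = {}   # key: window start time -> value: list of scheduler entries

def add_entry(task, run_at):
    key = (run_at // WINDOW) * WINDOW          # bucket by time window
    store.setdefault(key, []).append((run_at, task))

def run_window(window_start):
    """Execute the selected window's entries in schedule order."""
    for run_at, task in sorted(store.pop(window_start, [])):
        print(f"t={run_at}: running {task}")

add_entry("backup", 130)
add_entry("compact", 95)
add_entry("report", 110)
run_window(60)    # compact (95) then report (110); backup waits in window 120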
20130263141 | Visibility Ordering in a Memory Model for a Unified Computing System - Provided is a method of permitting the reordering of a visibility order of operations in a computer arrangement configured to permit threads of a first processor and a second processor to access a shared memory. The method includes receiving, in program order, a first and a second operation in a first thread and permitting the reordering of the visibility order for the operations in the shared memory based on the class of each operation. The visibility order determines the visibility in the shared memory, by a second thread, of stored results from the execution of the first and second operations. | 10-03-2013 |
20130263142 | CONTROL DEVICE, CONTROL METHOD, COMPUTER READABLE RECORDING MEDIUM IN WHICH PROGRAM IS RECORDED, AND DISTRIBUTED PROCESSING SYSTEM - If there are a plurality of tasks to be performed for one piece of divided data among a plurality of pieces of divided data obtained by dividing data, an allocating controller is provided that allocates the plurality of tasks commonly to one of a plurality of processors, so that processing speed is improved. | 10-03-2013 |
20130263143 | INFORMATION PROCESSING METHOD AND SYSTEM - An information processing method automates operation tasks in an information processing system using a workflow which represents an operation procedure of the information processing system by connection of a plurality of nodes. The method includes: creating a new workflow including at least a first node which executes control processing of the information processing system and a second node to which is added a form including a screen component which performs at least one of input and output of information relating to the control processing; detecting a pattern of the new workflow to be created; deciding a screen component to be arranged on a form screen of the new workflow, based on the pattern of the new workflow and the content of the control processing of the first node in the new workflow; and arranging the screen component of the new workflow on the form screen. | 10-03-2013 |
20130263144 | System Call Queue Between Visible and Invisible Computing Devices - Embodiments described herein include a system, a computer-readable medium and a computer-implemented method for processing a system call (SYSCALL) request. The SYSCALL request from an invisible processing device is stored in a queueing mechanism that is accessible to a visible processing device, where the visible processing device is visible to an operating system and the invisible processing device is invisible to the operating system. The SYSCALL request is processed using the visible processing device, and the invisible processing device is notified using a notification mechanism that the SYSCALL request was processed. | 10-03-2013 |
20130263145 | METHOD AND APPARATUS FOR EFFICIENT INTER-THREAD SYNCHRONIZATION FOR HELPER THREADS - In a multiprocessing computer system having a plurality of hardware threads sharing a memory location, a monitor bit per hardware thread may be allocated in the memory location, each allocated monitor bit corresponding to one of the plurality of hardware threads. A condition bit may be allocated for each of the plurality of hardware threads, the condition bit being allocated in each context of the plurality of hardware threads. In response to detecting the memory location being accessed, it is determined whether a monitor bit corresponding to a hardware thread is set in the memory location. In response to determining that the monitor bit corresponding to a hardware thread is set in the memory location, a condition bit corresponding to the thread accessing the memory location is set in that hardware thread's context. | 10-03-2013 |
20130263146 | EVENT DRIVEN SENDFILE - An apparatus includes an application module to accept a file transfer request from a client application and a sendfile module, coupled to the application module, which is executable by a processor. The sendfile module assigns a first worker thread to transfer a requested file to the client application and detect an idle time of the first worker thread. In response to detecting the idle time, the sendfile module assigns the file transfer request to a shared poller thread shared by a plurality of file transfer requests and releases the first worker thread. | 10-03-2013 |
20130268936 | WORKFLOW MANAGEMENT SYSTEM AND METHOD - A workflow management system and a method for managing a procedure of delivery of a workflow object within an organizational framework are introduced. The workflow management system includes a metadata database, an input module and an authorizing module. Given the definition of route points in a workflow template, the configuration of nodes of each of the route points, and the configuration of group data within a workflow template, it is feasible to effectuate a pre-built model whereby conventional workflows can be flexibly corrected and assembled anew, dispensing with the hassle of redefining a workflow or the time-consuming process of amending workflow route points one by one, and thereby achieving advantages of smart workflow automated design, such as centralized control, dynamic interception, and quick extension of a secondary workflow route. | 10-10-2013 |
20130268937 | DISTRIBUTED PROCESSING SYSTEM, SCHEDULER NODE AND SCHEDULING METHOD OF DISTRIBUTED PROCESSING SYSTEM, AND PROGRAM GENERATION APPARATUS THEREOF - A distributed processing system includes a plurality of task nodes each configured to have a capability of processing a task using a reconfigurable processor, and having a capability of processing the task using a non-reconfigurable processor if the task is not processed using the reconfigurable processor, and a scheduler node configured to select a task node that is to process the task from the plurality of task nodes. | 10-10-2013 |
20130268938 | TRANSPARENT USER MODE SCHEDULING ON TRADITIONAL THREADING SYSTEMS - Embodiments for performing cooperative user mode scheduling between user mode schedulable (UMS) threads and primary threads are disclosed. In accordance with one embodiment, an asynchronous procedure call (APC) is received on a kernel portion of a user mode schedulable (UMS) thread. The status of the UMS thread as it is being processed in a multi-processor environment is determined. Based on the determined status, the APC is processed on the UMS thread. | 10-10-2013 |
20130268939 | SYSTEMS AND METHODS FOR TASK EXECUTION ON A MANAGED NODE - Systems and methods for executing tasks on a managed node remotely coupled to a management node are provided. A management controller of the management node may be configured to determine at least one execution policy for a task, schedule the task for execution, receive system information data from the managed node, determine, based at least on the received system information, whether the received system information complies with the at least one execution policy, and, if the received information complies with the at least one execution policy, forward the task from the management controller to the managed node for execution. | 10-10-2013 |
20130283277 | THREAD MIGRATION TO IMPROVE POWER EFFICIENCY IN A PARALLEL PROCESSING ENVIRONMENT - A method and system to selectively move one or more of a plurality of threads which are executing in parallel by a plurality of processing cores. In one embodiment, a thread may be moved from executing in one of the plurality of processing cores to executing in another of the plurality of processing cores, the moving based on a performance characteristic associated with the plurality of threads. In another embodiment of the invention, a power state of the plurality of processing cores may be changed to improve a power efficiency associated with the executing of the multiple threads. | 10-24-2013 |
20130283278 | Apparatus And Methods For Performing Computer System Maintenance And Notification Activities In An Opportunistic Manner - A computer-readable medium tangibly embodying a program of machine-readable instructions executable by a digital processor of a computer system to perform operations for controlling computer system activities. The operations include receiving a command entered with an input device of the computer system to begin opportunistic computer system activities, where the command specifies a time period available for opportunistic computer system activities, and then initiating at least one computer system activity during the time period available for opportunistic computer system activities. | 10-24-2013 |
20130283279 | INTEGRATION OF DISSIMILAR JOB TYPES INTO AN EARLIEST DEADLINE FIRST (EDF) SCHEDULE - A method for implementation within a scheduler for a processor is described. The method includes receiving a plurality of jobs from an earliest deadline first (EDF) schedule, wherein the scheduler implements an EDF scheduling model. The method also includes receiving a separate job from a source other than the EDF schedule. The separate job has a fixed scheduling requirement with a specific execution time. The method also includes determining an amount of available utilization capacity of the processor and inserting the separate job into an execution plan of the processor with the plurality of jobs from the EDF schedule in response to a determination that the available utilization capacity of the processor is sufficient to execute the separate job according to the fixed scheduling requirement associated with the separate job. | 10-24-2013 |
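For the EDF integration in 20130283279, the central check is whether the processor has enough spare capacity to honor a separate job's fixed scheduling requirement. The sketch below assumes a conservative utilization-based admission test (the abstract does not specify the exact test); all types and names are illustrative.

```cpp
#include <vector>
#include <cstdio>

struct PeriodicJob { double exec_time, period; };   // jobs from the EDF schedule
struct FixedJob    { double exec_time, start, deadline; };

// EDF keeps a uniprocessor feasible while total utilization <= 1.0.
double edf_utilization(const std::vector<PeriodicJob>& jobs) {
    double u = 0.0;
    for (const auto& j : jobs) u += j.exec_time / j.period;
    return u;
}

// Admit the fixed job only if the spare utilization over its window
// covers its execution time (a conservative sufficient condition).
bool can_admit(const std::vector<PeriodicJob>& jobs, const FixedJob& f) {
    double spare  = 1.0 - edf_utilization(jobs);
    double window = f.deadline - f.start;
    return spare > 0.0 && spare * window >= f.exec_time;
}

int main() {
    std::vector<PeriodicJob> plan = {{2.0, 10.0}, {3.0, 15.0}};  // U = 0.4
    FixedJob separate{1.0, 0.0, 5.0};   // needs 1.0 unit within a 5.0 window
    std::printf("admit: %s\n", can_admit(plan, separate) ? "yes" : "no");
}
```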
20130283280 | METHOD TO REDUCE MULTI-THREADED PROCESSOR POWER CONSUMPTION - Aspects of the disclosure generally relate to methods and apparatus for wireless communication. In an aspect, a method for dynamically processing data on interleaved multithreaded (MT) systems is provided. The method generally includes monitoring loading on one or more active processor threads, determining whether to remove a task or create an additional task based on the monitored loading of the one or more active processor threads and a number of tasks running on one or more of the one or more active processor threads, and if a determination is made to remove a task or create an additional task, distributing the resulting tasks among one or more available processor threads. | 10-24-2013 |
20130283281 | Deploying Trace Objectives using Cost Analyses - A tracing management system may use cost analyses and performance budgets to dispatch tracing objectives to instrumented systems that may collect trace data while running an application. The tracing management system may analyze individual tracing workloads for processing, storage, and network performance costs, and select workloads to deploy based on a resource budget that may be set for a particular device. In some cases, complementary tracing objectives may be selected that maximize consumption of resources within an allocated budget. The budgets may allocate certain resources for tracing, which may be a mechanism to limit any adverse effects from tracing when running an application. | 10-24-2013 |
20130283282 | COMPONENT-SPECIFIC DISCLAIMABLE LOCKS - Systems and methods of protecting a shared resource in a multi-threaded execution environment in which threads are permitted to transfer control between different software components, for any of which a disclaimable lock having a plurality of orderable locks can be identified. Back out activity can be tracked among a plurality of threads with respect to the disclaimable lock and the shared resource, and reclamation activity among the plurality of threads may be ordered with respect to the disclaimable lock and the shared resource. | 10-24-2013 |
20130283283 | PORTABLE ELECTRONIC DEVICE AND CONTROL METHOD THEREFOR - An application usage history of a plurality of applications installed on the portable electronic device is logged. The application usage history includes any combination of when the applications are launched, where the applications are launched, and application launch patterns. Any combination of current time information, current location information and a current application launch pattern is obtained. The applications are selected to provide the application list including at least one application that is likely to be launched according to the application usage history and any combination of the current time information, the current location information and the current application launch pattern. | 10-24-2013 |
20130290966 | OPERATOR GRAPH CHANGES IN RESPONSE TO DYNAMIC CONNECTIONS IN STREAM COMPUTING APPLICATIONS - A stream computing application may permit one job to connect to a data stream of a different job. As more and more jobs dynamically connect to the data stream, the connections may have a negative impact on the performance of the job that generates the data stream. Accordingly, a variety of metrics and statistics (e.g., CPU utilization or tuple rate) may be monitored to determine if the dynamic connections are harming performance. If so, the stream computing system may be optimized to mitigate the effects of the dynamic connections. For example, particular operators may be unfused from a processing element and moved to a compute node that has available computing resources. Additionally, the stream computing application may clone the data stream in order to distribute the workload of transmitting the data stream to the connected jobs. | 10-31-2013 |
20130290967 | System and Method for Implementing NUMA-Aware Reader-Writer Locks - NUMA-aware reader-writer locks may leverage lock cohorting techniques to band together writer requests from a single NUMA node. The locks may relax the order in which the lock schedules the execution of critical sections of code by reader threads and writer threads, allowing lock ownership to remain resident on a single NUMA node for long periods, while also taking advantage of parallelism between reader threads. Threads may contend on node-level structures to get permission to acquire a globally shared reader-writer lock. Writer threads may follow a lock cohorting strategy of passing ownership of the lock in write mode from one thread to a cohort writer thread without releasing the shared lock, while reader threads from multiple NUMA nodes may simultaneously acquire the shared lock in read mode. The reader-writer lock may follow a writer-preference policy, a reader-preference policy or a hybrid policy. | 10-31-2013 |
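The cohorting idea in 20130290967 can be approximated with a node-level gate in front of a global write flag, so write ownership tends to stay on one NUMA node while readers proceed in parallel. The following sketch is a deliberate simplification under that assumption, not the patented NUMA-aware lock; all names are illustrative.

```cpp
#include <atomic>
#include <mutex>
#include <thread>
#include <cstdio>

struct CohortRWLock {
    static constexpr int kNodes = 2;
    std::mutex node_lock[kNodes];      // node-level writer gate
    std::atomic<int>  readers{0};
    std::atomic<bool> writer{false};

    void read_lock() {
        for (;;) {
            readers.fetch_add(1, std::memory_order_acquire);
            if (!writer.load(std::memory_order_acquire)) return;
            readers.fetch_sub(1, std::memory_order_release);
            while (writer.load(std::memory_order_acquire))
                std::this_thread::yield();
        }
    }
    void read_unlock() { readers.fetch_sub(1, std::memory_order_release); }

    void write_lock(int node) {
        node_lock[node].lock();        // same-node writers queue here first
        bool expected = false;
        while (!writer.compare_exchange_weak(expected, true,
                                             std::memory_order_acquire)) {
            expected = false;
            std::this_thread::yield();
        }
        while (readers.load(std::memory_order_acquire) != 0)
            std::this_thread::yield(); // drain in-flight readers
    }
    void write_unlock(int node) {
        writer.store(false, std::memory_order_release);
        node_lock[node].unlock();      // a waiting same-node writer goes next
    }
};

int main() {
    CohortRWLock lk;
    int shared = 0;
    std::thread w([&] { lk.write_lock(0); shared = 7; lk.write_unlock(0); });
    w.join();
    lk.read_lock();
    std::printf("%d\n", shared);
    lk.read_unlock();
}
```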
20130290968 | ADJUSTMENT OF A TASK EXECUTION PLAN AT RUNTIME - Embodiments of adjustment of a task execution plan at runtime by a task execution engine configured to receive a plan compilation task, the plan compilation task comprising a task execution plan, are provided. An aspect includes receiving a first plan compilation task by the task execution engine through a plan compilation interface. Another aspect includes modifying a task execution plan of the first plan compilation task in response to receiving a second plan compilation task by the task execution engine, the second plan compilation task comprising a task execution plan for modifying the task execution plan of the first plan compilation task. Yet another aspect includes reading a next task in the task execution plan of the first plan compilation task and initiating the next task by the task execution engine. | 10-31-2013 |
20130290969 | OPERATOR GRAPH CHANGES IN RESPONSE TO DYNAMIC CONNECTIONS IN STREAM COMPUTING APPLICATIONS - A stream computing application may permit one job to connect to a data stream of a different job. As more and more jobs dynamically connect to the data stream, the connections may have a negative impact on the performance of the job that generates the data stream. Accordingly, a variety of metrics and statistics (e.g., CPU utilization or tuple rate) may be monitored to determine if the dynamic connections are harming performance. If so, the stream computing system may be optimized to mitigate the effects of the dynamic connections. For example, particular operators may be unfused from a processing element and moved to a compute node that has available computing resources. Additionally, the stream computing application may clone the data stream in order to distribute the workload of transmitting the data stream to the connected jobs. | 10-31-2013 |
20130290970 | UNIPROCESSOR SCHEDULABILITY TESTING FOR NON-PREEMPTIVE TASK SETS - A method of determining schedulability of tasks for uniprocessor execution includes defining a well-formed, non-preemptive task set having a plurality of tasks, each task having at least one subtask. A determination of whether the task set is schedulable is made, such that a near-optimal amount of temporal resources required to execute the task set is estimated. Further, a method of determining schedulability of a subtask for uniprocessor execution includes defining a well-formed, non-preemptive task set having a plurality of tasks, each task having at least one subtask. A determination of whether a subtask in the task set is schedulable at a specific time is made in polynomial time. Systems for implementing such methods are also provided. | 10-31-2013 |
20130290971 | Scheduling Thread Execution Based on Thread Affinity - In accordance with some embodiments, spatial and temporal locality between threads executing on graphics processing units may be analyzed and tracked in order to improve performance. In some applications where a large number of threads are executed and those threads use common resources such as common data, affinity tracking may be used to improve performance by reducing the cache miss rate and to more effectively use relatively small-sized caches. | 10-31-2013 |
20130298129 | CONTROLLING A SEQUENCE OF PARALLEL EXECUTIONS - An apparatus having a first circuit and a plurality of second circuits is disclosed. The first circuit may be configured to dispatch a plurality of sets in a sequence. Each set generally includes a plurality of instructions. The second circuits may be configured to (i) execute the sets during a plurality of execution cycles respectively and (ii) stop the execution in a particular one of the second circuits during one or more of the execution cycles in response to an expiration of a particular counter that corresponds to the particular second circuit. | 11-07-2013 |
20130298130 | AUTOMATIC PIPELINING FRAMEWORK FOR HETEROGENEOUS PARALLEL COMPUTING SYSTEMS - Systems and methods for automatic generation of software pipelines for heterogeneous parallel systems (AHP) include pipelining a program with one or more tasks on a parallel computing platform with one or more processing units and partitioning the program into pipeline stages, wherein each pipeline stage contains one or more tasks. The one or more tasks in the pipeline stages are scheduled onto the one or more processing units, and execution times of the one or more tasks in the pipeline stages are estimated. The above steps are repeated until a specified termination criterion is reached. | 11-07-2013 |
20130298131 | CONTINUOUS OPTIMIZATION OF ARCHIVE MANAGEMENT SCHEDULING BY USE OF INTEGRATED CONTENT-RESOURCE ANALYTIC MODEL - A method and associated system for continuously optimizing data archive management scheduling. A flow network is modeled. The flow network represents data content, software programs, physical devices, and communication capacity of the archive management system in various levels of vertices such that an optimal path in the flow network from a task of at least one archive management task to a worker program of the archive management system represents an optimal initial schedule for the worker program to perform the task. | 11-07-2013 |
20130298132 | MULTI-CORE PROCESSOR SYSTEM AND SCHEDULING METHOD - A multi-core processor system includes plural processors; and a scheduler that assigns applications to the processors. The scheduler, upon receiving a startup request for a given application and based on start times of the applications executed by the processors, selects a processor that is to execute the given application. | 11-07-2013 |
20130305250 | METHOD AND SYSTEM FOR MANAGING NESTED EXECUTION STREAMS - One embodiment of the present disclosure sets forth an enhanced way for GPUs to queue new computational tasks into a task metadata descriptor queue (TMDQ). Specifically, memory for context data is pre-allocated when a new TMDQ is created. A new TMDQ may be integrated with an existing TMDQ, where computational tasks within that TMDQ include tasks from each of the original TMDQs. A scheduling operation is executed on completion of each computational task in order to preserve sequential execution of tasks without the use of atomic locking operations. One advantage of the disclosed technique is that GPUs are enabled to queue computational tasks within TMDQs, and also create an arbitrary number of new TMDQs to any arbitrary nesting level, without intervention by the CPU. Processing efficiency is enhanced where the GPU does not wait while the CPU creates and queues tasks. | 11-14-2013 |
20130305251 | SCHEDULING METHOD AND SCHEDULING SYSTEM - A scheduling method is performed by a scheduler that manages plural processors including a first processor and a second processor. The scheduling method includes assigning an application to the first processor when the application is started; instructing the second processor to calculate the load of the processors; and maintaining assignment of the application or changing assignment of the application based on the load. | 11-14-2013 |
20130311995 | Resolving RCU-Scheduler Deadlocks - A technique for resolving deadlocks between an RCU subsystem and an operating system scheduler. An RCU reader manipulates a counter when entering and exiting an RCU read-side critical section. At the entry, the counter is incremented. At the exit, the counter is manipulated differently depending on the counter value. A first counter manipulation path is taken when the counter indicates a task-context RCU reader is exiting an outermost RCU read-side critical section. This path includes condition-based processing that may result in invocation of the operating system scheduler. The first path further includes a deadlock protection operation that manipulates the counter to prevent an intervening RCU reader from taking the same path. The second manipulation path is taken when the counter value indicates a task-context RCU reader is exiting a non-outermost RCU read-side critical section, or an RCU reader is nested within the first path. This path bypasses the condition-based processing. | 11-21-2013 |
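The counter discipline in 20130311995 resembles biasing a thread-local nesting counter so that a reader which interrupts the outermost exit path is routed to the fast, non-outermost path. A minimal sketch under that reading; the guard constant and function names are illustrative, not the patent's implementation.

```cpp
#include <cstdio>

thread_local int rcu_nesting = 0;
constexpr int kExitGuard = 1 << 16;   // large bias marks "in exit processing"

void rcu_read_lock() { ++rcu_nesting; }

void invoke_scheduler_if_needed() { /* condition-based processing here */ }

void rcu_read_unlock() {
    if (rcu_nesting == 1) {
        // Outermost exit: bias the counter first so an intervening reader
        // that enters and exits here takes the fast (non-outermost) path,
        // preventing recursion into the scheduler-calling slow path.
        rcu_nesting += kExitGuard;
        invoke_scheduler_if_needed();    // may call into the OS scheduler
        rcu_nesting -= kExitGuard + 1;   // drop guard and the outermost level
    } else {
        --rcu_nesting;                   // nested exit: bypass the slow path
    }
}

int main() {
    rcu_read_lock();
    rcu_read_lock();        // nested critical section
    rcu_read_unlock();      // fast path
    rcu_read_unlock();      // outermost: guarded slow path
    std::printf("nesting=%d\n", rcu_nesting);
}
```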
20130311996 | MECHANISM FOR WAKING COMMON RESOURCE REQUESTS WITHIN A RESOURCE MANAGEMENT SUBSYSTEM - One embodiment of the present disclosure sets forth an effective way to maintain fairness and order in the scheduling of common resource access requests related to replay operations. Specifically, a streaming multiprocessor (SM) includes a total order queue (TOQ) configured to schedule the access requests over one or more execution cycles. Access requests are allowed to make forward progress when needed common resources have been allocated to the request. Where multiple access requests require the same common resource, priority is given to the older access request. Access requests may be placed in a sleep state pending availability of certain common resources. Deadlock may be avoided by allowing an older access request to steal resources from a younger resource request. One advantage of the disclosed technique is that older common resource access requests are not repeatedly blocked from making forward progress by newer access requests. | 11-21-2013 |
20130311997 | Systems and Methods for Integrating Third Party Services with a Digital Assistant - The electronic device with one or more processors and memory receives an input of a user. The electronic device, in accordance with the input, identifies a respective task type from a plurality of predefined task types associated with a plurality of third party service providers. The respective task type is associated with at least one third party service provider for which the user is authorized and at least one third party service provider for which the user is not authorized. In response to identifying the respective task type, the electronic device sends a request to perform at least a portion of a task to a third party service provider of the plurality of third party service providers that is associated with the respective task type. | 11-21-2013 |
20130318530 | DEADLOCK/LIVELOCK RESOLUTION USING SERVICE PROCESSOR - A microprocessor includes a main processor and a service processor. The service processor is configured to detect and break a deadlock/livelock condition in the main processor. The service processor detects the deadlock/livelock condition by detecting the main processor has not retired an instruction or completed a processor bus transaction for a predetermined number of clock cycles. In response to detecting the deadlock/livelock condition in the main processor, the service processor causes arbitration requests to a cache memory to be captured in a buffer, analyzes the captured requests to detect a pattern that may indicate a bug causing the condition and performs actions associated with the pattern to break the deadlock/livelock. The actions include suppression of arbitration requests to the cache, suppression of comparisons of cache request addresses, and killing of requests to access the cache. | 11-28-2013 |
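The detection half of 20130318530 amounts to a watchdog that samples a retirement counter and fires when it stops advancing. A minimal sketch, assuming periodic sampling in software; the real mechanism is a hardware service processor, and all names here are illustrative.

```cpp
#include <atomic>
#include <cstdint>
#include <cstdio>

std::atomic<uint64_t> retired_instructions{0};   // bumped by the "main" core

// Returns true when the counter has been flat for `threshold` samples.
bool watchdog_step(uint64_t& last_seen, int& stalled, int threshold) {
    uint64_t now = retired_instructions.load(std::memory_order_relaxed);
    stalled = (now == last_seen) ? stalled + 1 : 0;
    last_seen = now;
    return stalled >= threshold;   // true => treat as deadlock/livelock
}

int main() {
    uint64_t last = 0;
    int stalled = 0;
    for (int sample = 0; sample < 5; ++sample) {   // counter never moves
        if (watchdog_step(last, stalled, 3)) {
            std::printf("hang detected at sample %d: capture arbitration "
                        "requests and apply breaking actions\n", sample);
            break;
        }
    }
}
```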
20130318531 | Domain Bounding For Symmetric Multiprocessing Systems - Methods and apparatuses for bounding the processing domain in a symmetric multiprocessing system are provided. In various implementations, a particular computational task is “affined” to a particular processing unit. Subsequently, when the particular task is executed, the symmetric multiprocessing operating system ensures that the affined processing unit processes the instruction. When the affined processing unit is not processing the particular computational task, the symmetric multiprocessing operating system may cause the processing unit to process alternate instructions. With some implementations, a particular computational task is “linked” to a particular processing unit. Subsequently, when the particular task is executed, the symmetric multiprocessing operating system ensures that the bound processing unit processes the instruction. When the bound processing unit is not processing the particular computational instruction, the bound processing unit may enter a low power or idle state. | 11-28-2013 |
20130318532 | COMPUTER PRODUCT, EXECUTION CONTROL DEVICE, AND EXECUTION CONTROL METHOD - A computer-readable recording medium stores an execution control program that causes a computer to execute a process that includes receiving an execution request for a given operation for a system; detecting the number of operations that are of a type identical to that of the given operation and are under execution by a computing device that is in the system and involved in the execution of the given operation for which the execution request is received; comparing the number of operations detected at the detecting with the number of operations that are of the type and simultaneously executable by the computing device such that the execution of the given operation is completed within a given period by the computing device; and assigning the given operation to the computing device, based on a result of the comparison at the comparing. | 11-28-2013 |
20130326523 | Resource Sharing Aware Task Partitioning for Multiprocessors - A multiprocessor task allocation method is described that considers task dependencies while performing task allocation in order to avoid blocking of a task's execution while waiting for the resolution of the dependency. While allocating the tasks to the processors, the potential blocking time is considered, and the best allocation, the one with the least amount of blocking time, is found. | 12-05-2013 |
20130326524 | Method and System for Synchronization of Workitems with Divergent Control Flow - Disclosed method, system, and computer program product embodiments include synchronizing a group of workitems on a processor by storing a respective program counter associated with each of the workitems, selecting at least one first workitem from the group for execution, and executing the selected at least one first workitem on the processor. The selecting is based upon the respective stored program counter associated with the at least one first workitem. | 12-05-2013 |
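The reconvergence scheme in 20130326524 stores a program counter per workitem and selects which workitems to run from those stored counters. A minimal sketch, assuming a smallest-PC-first selection rule (one common policy; the abstract does not fix the rule), with illustrative step and exit logic.

```cpp
#include <vector>
#include <algorithm>
#include <cstdio>

struct WorkItem { int id; int pc; bool done; };

int main() {
    std::vector<WorkItem> items = {{0, 4, false}, {1, 9, false}, {2, 4, false}};
    while (std::any_of(items.begin(), items.end(),
                       [](const WorkItem& w) { return !w.done; })) {
        // Select the workitems whose stored PC is minimal among live items,
        // so divergent workitems catch up and reconverge.
        int min_pc = 1 << 30;
        for (const auto& w : items)
            if (!w.done) min_pc = std::min(min_pc, w.pc);
        for (auto& w : items) {
            if (w.done || w.pc != min_pc) continue;
            std::printf("executing workitem %d at pc %d\n", w.id, w.pc);
            w.pc += 1;                     // simulate one step of execution
            if (w.pc > 10) w.done = true;  // hypothetical exit point
        }
    }
}
```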
20130326525 | CONTROL DEVICE - A control device for a function execution apparatus includes: a determination unit which, when a target function is selected from the plurality of functions, determines whether the function execution apparatus can execute the target function by using first data, based on the target function; and a processing execution unit which, when the function execution apparatus can execute the target function by using the first data, executes first processing for enabling the function execution apparatus to execute the target function by using the first data, and when the function execution apparatus is unable to execute the target function by using the first data, executes second processing for supplying second data to the function execution apparatus for enabling the function execution apparatus to execute the target function by using the second data converted from the first data. | 12-05-2013 |
20130326526 | INFORMATION PROCESSING APPARATUS, WORKFLOW GENERATING SYSTEM, AND WORKFLOW GENERATING METHOD - An information processing apparatus for generating a workflow including one or more steps each indicating a process to be executed, includes a workflow display unit configured to display, on a display, one or more graphical representations corresponding to one or more steps of the workflow; a step management unit configured to obtain attribute data associated with a step to be added in response to an instruction for adding the step to the workflow; and an auxiliary indication control unit configured to cause the display to display a graphical representation corresponding to the step to be added and a graphical representation that reflects the attribute data. | 12-05-2013 |
20130326527 | SCHEDULING METHOD, SYSTEM DESIGN SUPPORT METHOD, AND SYSTEM - A scheduling method is executed by a processor, and includes detecting a transition from a first process to a second process; acquiring, from memory, an operating frequency and a CPU count for executing the second process; suspending a CPU under operation or starting a suspended CPU, based on the CPU count; and assigning the operating frequency to a CPU that is to execute the second process. | 12-05-2013 |
20130332930 | INFORMATION PROCESSING SYSTEM, IMAGE FORMING APPARATUS, CONTROL METHOD, AND RECORDING MEDIUM - A flow service server group manages a job consisting of multiple tasks generated according to a user request, and a task server acquires a task included in the aforementioned managed job if a processing standby status exists, and carries out specific task processing. The task server notifies the flow service server group at a fixed interval that task processing is in progress. The flow service server group then issues a command to the task server that has not completed task processing within a prescribed time to suspend the task processing, and issues a command to a task server capable of task processing that is identical to the task processing to alternatively execute the task processing. | 12-12-2013 |
20130332931 | System and Method for Limiting the Impact of Stragglers in Large-Scale Parallel Data Processing - A large-scale data processing system and method including a plurality of processes, wherein a master process assigns input data blocks to respective map processes and partitions of intermediate data are assigned to respective reduce processes. In each of the plurality of map processes an application-independent map program retrieves a sequence of input data blocks assigned thereto by the master process and applies an application-specific map function to each input data block in the sequence to produce the intermediate data and stores the intermediate data in high speed memory of the interconnected processors. Each of the plurality of reduce processes receives a respective partition of the intermediate data from the high speed memory of the interconnected processors while the map processes continue to process input data blocks, and an application-specific reduce function is applied to the respective partition of the intermediate data to produce output values. | 12-12-2013 |
20130332932 | COMMAND CONTROL METHOD - A computer determines whether a first system call waiting for a response exists upon entry of a command to the computer, which first system call has been issued based on a different command entered into the computer prior to the command. In the case where the first system call exists, the computer carries out a process corresponding to the command with the use of a response result for the first system call. On the other hand, if the first system call does not exist, the computer issues a second system call. | 12-12-2013 |
20130332933 | PERFORMANCE MONITORING RESOURCES PROGRAMMED STATUS - A system and method for a performance monitoring hardware unit that may include logic to poll one or more performance monitoring shared resources and determine a status of each performance monitoring shared resource. The performance monitoring hardware unit may also include an interface to provide the status to allow programming of the one or more performance monitoring shared resources. The status may correspond to a usage and/or an errata condition. Thus, the performance monitoring hardware unit may prevent programming conflicts of the one or more performance monitoring shared resources. | 12-12-2013 |
20130332934 | Task Control in a Computing System - A computing system can include a sensor and a task. The sensor can generate sensor data. The computing system can delay the task based on the sensor data. | 12-12-2013 |
20130339963 | TRANSACTION ABORT PROCESSING - A transaction executing within a computing environment ends prior to completion; i.e., execution is aborted. Pursuant to aborting execution, a hardware transactional execution CPU mode is exited, and one or more of the following is performed: restoring selected registers; committing nontransactional stores on abort; branching to a transaction abort program status word specified location; setting a condition code and/or abort code; and/or preserving diagnostic information. | 12-19-2013 |
20130339964 | REPLAYING OF WORK ACROSS CLUSTER OF DATABASE SERVERS - The replaying of work across a cluster of database servers includes: receiving a global time by each of a plurality of replay dispatchers; calculating, for each given replay dispatcher, a time offset using a local time for the given replay dispatcher and the global time; receiving, for each given replay dispatcher, a replay workload comprising a plurality of replay records and a global replay start time, wherein each of the plurality of replay records comprises an expected wait time; calculating, for each given replay dispatcher, a wait time for each given replay record based on the expected wait time for the given replay record, the global replay start time, and the time offset for the given replay dispatcher; and submitting, for each given replay dispatcher, the replay records to a target database server for processing in an order according to the calculated wait times. | 12-19-2013 |
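The wait-time arithmetic in 20130339964 reduces to translating the global replay start into each dispatcher's local clock and adding each record's expected wait. A minimal sketch of that calculation, with illustrative field names and an assumed offset convention of local time minus global time.

```cpp
#include <cstdio>

struct ReplayRecord { double expected_wait; };   // seconds from replay start

// offset = local_time - global_time, sampled once when the global time
// is received by the dispatcher.
double local_submit_time(const ReplayRecord& r,
                         double global_replay_start, double offset) {
    return (global_replay_start + offset) + r.expected_wait;
}

int main() {
    double offset = 120.0 - 100.0;   // this dispatcher's clock runs 20 s ahead
    ReplayRecord rec{1.5};
    std::printf("submit at local t=%.1f s\n",
                local_submit_time(rec, 100.0, offset));   // prints 121.5
}
```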
20130339965 | SEQUENTIAL COOPERATION BETWEEN MAP AND REDUCE PHASES TO IMPROVE DATA LOCALITY - Methods and arrangements for task scheduling. At least one job is assimilated from at least one node, each job comprising at least a map phase and a reduce phase, each of the map and reduce phases comprising at least one task. Progress of a map phase of at least one job is compared with progress of a reduce phase of at least one job. Launching of a task of a reduce phase of at least one job is scheduled in response to progress of the reduce phase of at least one job being less than progress of the map phase of at least one job. | 12-19-2013 |
20130339966 | SEQUENTIAL COOPERATION BETWEEN MAP AND REDUCE PHASES TO IMPROVE DATA LOCALITY - Methods and arrangements for task scheduling. At least one job is assimilated from at least one node, each job comprising at least a map phase and a reduce phase, each of the map and reduce phases comprising at least one task. Progress of a map phase of at least one job is compared with progress of a reduce phase of at least one job. Launching of a task of a reduce phase of at least one job is scheduled in response to progress of the reduce phase of at least one job being less than progress of the map phase of at least one job. | 12-19-2013 |
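The two entries above (20130339965 and 20130339966) gate reduce-task launches on relative phase progress. A minimal sketch of that gating rule; the progress fractions are assumed inputs supplied by the scheduler.

```cpp
#include <cstdio>

// Launch reducers only while they lag the mappers, so reduce slots are not
// held while there is little map output available to fetch.
bool should_launch_reduce(double map_progress, double reduce_progress) {
    return reduce_progress < map_progress;
}

int main() {
    std::printf("%d\n", should_launch_reduce(0.60, 0.20));  // 1: launch
    std::printf("%d\n", should_launch_reduce(0.60, 0.60));  // 0: wait
}
```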
20130339967 | CONSTRAINED TRANSACTION EXECUTION - Constrained transactional processing is provided. A constrained transaction is initiated by execution of a Transaction Begin constrained instruction. The constrained transaction has a number of restrictions associated therewith. Absent violation of a restriction, the constrained transaction is to complete. If an abort condition is encountered, the transaction is re-executed starting at the Transaction Begin instruction. Violation of a restriction may cause an interrupt. | 12-19-2013 |
20130346984 | Sparse Threaded Deterministic Lock-Free Cholesky and LDLT Factorizations - Systems and methods are provided for implementing a sparse deterministic direct solver. The deterministic direct solver is configured to identify at least one task for each of a plurality of dense blocks, identify operations on which the tasks are dependent, store in a first data structure an entry for each of the dense blocks identifying whether a precondition must be satisfied before tasks associated with the dense blocks can be initiated, store in a second data structure a status value for each of the dense blocks that is changeable by multiple threads, and assign the tasks to a plurality of threads, wherein the threads execute their assigned task when the status of the dense block corresponding to their assigned task indicates that the assigned task is ready to be performed and the precondition associated with the dense block has been satisfied if the precondition exists. | 12-26-2013 |
20130346985 | MANAGING USE OF A FIELD PROGRAMMABLE GATE ARRAY BY MULTIPLE PROCESSES IN AN OPERATING SYSTEM - Field programmable gate arrays can be used as a shared programmable co-processor resource in a general purpose computing system. An FPGA can be programmed to perform functions, which in turn can be associated with one or more processes. With multiple processes, the FPGA can be shared, and a process is assigned to at least one portion of the FPGA during a time slot in which to access the FPGA. Programs written in a hardware description language for programming the FPGA are made available as a hardware library. The operating system manages allocating the FPGA resources to processes, programming the FPGA in accordance with the functions to be performed by the processes using the FPGA, and scheduling use of the FPGA by these processes. | 12-26-2013 |
20130346986 | JOB SCHEDULING PROGRESS BAR - Methods and apparatus, including computer program products, are provided for scheduling batch jobs. In one aspect there is provided a method. The method may include receiving, at a progress engine, status information provided by a job scheduler controlling an execution of a plurality of jobs, the status information representative of the plurality of jobs of the batch job; receiving, at the progress engine implemented on at least one processor, reference information representative of past executions of batch jobs; determining, by the progress engine, a completion time for the batch job based on the received status information and the received reference information; and generating, by the progress engine, a page including the determined completion time. Related systems, methods, and articles of manufacture are also disclosed. | 12-26-2013 |
20130346987 | SYSTEMS AND METHODS FOR DISTRIBUTING TASKS AND/OR PROCESSING RESOURCES IN A SYSTEM - A method is provided for managing the execution of tasks by a system having multiple processors, each having multiple types of resources. The method may include receiving from a user a task configuration specifying one or more performance parameters for a proposed task, automatically determining for each type of resource a quantity of that resource corresponding to the performance parameters for the proposed task, automatically determining for each processor a quantity of each type of resource available to that processor, automatically comparing, for each processor, (a) the quantity of each type of resource available to that processor with (b) the quantity of each type of resource corresponding to the performance parameters for the proposed task, automatically determining based on the comparisons whether any processor has capacity to perform the proposed task, and automatically determining whether to perform the proposed task based at least on whether any processor has capacity to perform the task. | 12-26-2013 |
20130346988 | PARALLEL DATA COMPUTING OPTIMIZATION - Statistics collected during the parallel distributed execution of the tasks of a job may be used to optimize the performance of the task or similar recurring tasks. An execution plan for a job is initially generated, in which the execution plan includes tasks. Statistics regarding operations performed in the tasks are collected while the tasks are executed via parallel distributed execution. Another execution plan is then generated for another recurring job, in which the additional execution plan has at least one task in common with the execution plan for the job. The additional execution plan is subsequently optimized based at least on the statistics to produce an optimized execution plan. | 12-26-2013 |
20130346989 | SYSTEMS AND METHODS FOR EVENT STREAM PROCESSING - Disclosed are systems and methods for processing events in an event stream using a map-update application. The events may be embodied as a key-attribute pair. An event is processed by one or more instances implementing either a map or an update function. A map function receives an input event from the event stream and publishes one or more events to the event stream. An update function receives an event and updates a corresponding slate and publishes zero or more events. Systems and methods are also disclosed herein for implementing a map-update application in a multithreaded architecture and for handling overloading of a particular thread or node. Systems and methods for providing access to slates updated according to update operations are also disclosed. | 12-26-2013 |
20130346990 | SYSTEMS AND METHODS FOR EVENT STREAM PROCESSING - Disclosed are systems and methods for processing events in an event stream using a map-update application. The events may be embodied as a key-attribute pair. An event is processed by one or more instances implementing either a map or an update function. A map function receives an input event from the event stream and publishes one or more events to the event stream. An update function receives an event and updates a corresponding slate and publishes zero or more events. Systems and methods are also disclosed herein for implementing a map-update application in a multithreaded architecture and for handling overloading of a particular thread or node. Systems and methods for providing access to slates updated according to update operations are also disclosed. | 12-26-2013 |
20130346991 | METHOD OF CONTROLLING INFORMATION PROCESSING APPARATUS, AND INFORMATION PROCESSING APPARATUS - A method of controlling an information processing apparatus includes detecting a first application program of which an execution result is displayed, obtaining a parameter correlating to the first application program, and determining, by a processor, the number of cores to be run in a CPU on the basis of the parameter. | 12-26-2013 |
20130346992 | COMPUTING SYSTEM, METHOD FOR CONTROLLING THEREOF, AND COMPUTER-READABLE RECORDING MEDIUM HAVING COMPUTER PROGRAM FOR CONTROLLING THEREOF - A computing system includes: a pointing object for constructing and managing pointing information, wherein the pointing information points to one or more executable objects, each providing a unique output by performing a unique operation; an informative object for constructing and managing reference information which serves as a reference in using one or more executable objects to deal with a user's request; a procedural object for selecting one or more executable objects to be executed based on the reference information, and constructing and managing an execution sequence related to the execution order of the selected one or more executable objects; and an execution control object for executing at least a part of each selected executable object according to the execution sequence and providing the output of the executable object resulting from the execution to a designated recipient selected from the user and at least one third party selected by analyzing the user's request. | 12-26-2013 |
20140007111 | SYSTEMS, METHODS, AND COMPUTER PROGRAM PRODUCTS FOR PREEMPTION OF THREADS AT A SYNCHRONIZATION BARRIER | 01-02-2014 |
20140007112 | SYSTEM AND METHOD FOR IDENTIFYING BUSINESS CRITICAL PROCESSES | 01-02-2014 |
20140007113 | METHOD AND APPARATUS FOR TASK BASED REMOTE SERVICES | 01-02-2014 |
20140007114 | MONITORING ACCESSES OF A THREAD TO MULTIPLE MEMORY CONTROLLERS AND SELECTING A THREAD PROCESSOR FOR THE THREAD BASED ON THE MONITORING | 01-02-2014 |
20140007115 | MULTI-MODAL BEHAVIOR AWARENESS FOR HUMAN NATURAL COMMAND CONTROL | 01-02-2014 |
20140007116 | IMPLEMENTING FUNCTIONAL KERNELS USING COMPILED CODE MODULES | 01-02-2014 |
20140007117 | METHODS AND APPARATUS FOR MODIFYING SOFTWARE APPLICATIONS | 01-02-2014 |
20140007118 | COMPARISON DEVICE, COMPARISON METHOD, NON-TRANSITORY RECORDING MEDIUM, AND SYSTEM | 01-02-2014 |
20140007119 | Dynamically Adjusting a Log Level of a Transaction | 01-02-2014 |
20140007120 | METHOD FOR OPERATING A MICROPROCESSOR UNIT, IN PARTICULAR IN A MOBILE TERMINAL | 01-02-2014 |
20140013329 | THREAD FOLDING TOOL - A computer-implemented method of performing runtime analysis on and control of a multithreaded computer program. One embodiment of the present invention can include identifying threads of a computer program to be analyzed. Under control of a supervisor thread, a plurality of the identified threads can be folded together to be executed as a folded thread. The execution of the folded thread can be monitored to determine a status of the identified threads. An indicator corresponding to the determined status of the identified threads can be presented in a user interface that is presented on a display. | 01-09-2014 |
20140019980 | Thread Scheduling and Control Framework - Embodiments relate to systems and methods for thread control and scheduling. According to a particular embodiment, a daemon framework provides a uniform approach for scheduling and execution of inter-related processes. The daemon framework may comprise a main daemon configured to manage lifecycle, to manage status, and to control child daemon(s) responsible for functions such as scanning of folders and Persistent Staging Areas (PSAs) for delivery of new data threads. Embodiments may allow visualization of process status, as well as controlling each of these processes. Embodiments may provide for programmatical and/or manual intervention, including error correction. Particular embodiments may have self-correction capability in the case of external or internal errors. | 01-16-2014 |
20140019981 | SCHEDULING USER JOBS ACROSS TENANTS - Jobs are scheduled per user across tenants in a multi-tenant environment. When a tenant is serviced, scheduling information is stored that indicates a servicing time of a user that remains unserviced after servicing the tenant. Each time a scheduler begins to schedule the servicing of tenants it obtains the list of the tenants. The scheduler sorts the tenants using the servicing information obtained from each of the tenants. The tenant that has the oldest unserviced user is serviced first. The scheduler starts servicing the first tenant in the sorted list and services as many jobs for as many users for that tenant based on the available processing resources. When the limit of servicing users for the tenant is reached and one or more users remain unserviced, the time for an unserviced user is stored in the servicing information. | 01-16-2014 |
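The sorting step in 20140019981 orders tenants by their oldest unserviced user so the tenant that has waited longest is serviced first. A minimal sketch of that step, with illustrative types and timestamps.

```cpp
#include <vector>
#include <algorithm>
#include <string>
#include <cstdio>

struct Tenant {
    std::string name;
    long oldest_unserviced;   // epoch seconds of this tenant's oldest unserviced user
};

int main() {
    std::vector<Tenant> tenants = {
        {"tenantA", 1700000300}, {"tenantB", 1700000100}, {"tenantC", 1700000200}};
    // Earliest (oldest) timestamp first: that tenant is serviced first.
    std::sort(tenants.begin(), tenants.end(),
              [](const Tenant& a, const Tenant& b) {
                  return a.oldest_unserviced < b.oldest_unserviced;
              });
    for (const auto& t : tenants)   // prints tenantB, tenantC, tenantA
        std::printf("%s\n", t.name.c_str());
}
```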
20140019982 | CORE-AFFINE PROCESSING ON SYMMETRIC MULTIPROCESSING SYSTEMS - Embodiments of a symmetric multi-processing (SMP) system can provide full affinity of a connection to a core processor when desired, even when ingress packet distribution, protocol processing layer and applications may autonomously process packets on different cores of the SMP system. In an illustrative embodiment, the SMP system can include a server application that is configured to create a plurality of tasks and bind the plurality of tasks to a plurality of core processors. One or more of the plurality of tasks are configured to create a corresponding listening endpoint socket, bind and listen on a protocol address that is common to the plurality of tasks. | 01-16-2014 |
20140019983 | INVERSION OF CONTROL FOR EXECUTABLE EXTENSIONS - A system, method, and non-transitory computer readable medium implemented as programming on a suitable computing device, the system for inversion of control of executable extensions including a run-time environment configured to push data to one or a plurality of extensions, wherein said one or a plurality of extensions are configured to comprise one or a plurality of signatures. Said one or a plurality of extensions are compilable, designable and testable outside of the run-time environment, and the run-time environment may be configured to accept an extension and to push data to that extension as per said one or a plurality of signatures. | 01-16-2014 |
20140019984 | FEEDBACK-DRIVEN TUNING FOR EFFICIENT PARALLEL EXECUTION - A parallel execution manager may determine a parallel execution platform configured to execute tasks in parallel using a plurality of available processing threads. The parallel execution manager may include a thread count manager configured to select, from the plurality of available processing threads and for a fixed task size, a selected thread count, and a task size manager configured to select, from a plurality of available task sizes and using the selected thread count, a selected task size. The parallel execution manager may further include an optimizer configured to execute an iterative loop in which the selected task size is used as an updated fixed task size to obtain an updated selected thread count, and the updated selected thread count is used to obtain an updated selected task size. Accordingly, a current thread count and current task size for executing the tasks in parallel may be determined. | 01-16-2014 |
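The iterative loop in 20140019984 alternates between tuning the thread count at a fixed task size and tuning the task size at the chosen thread count. A minimal sketch, assuming a toy cost model in place of real measurements; `measure` and the candidate sets are illustrative stand-ins.

```cpp
#include <vector>
#include <cstdio>

// Hypothetical cost model standing in for actually running and timing the
// workload: parallel work + per-thread overhead + per-task overhead.
double measure(int threads, int task_size) {
    return task_size / double(threads) + 0.1 * threads + 1000.0 / task_size;
}

template <typename F>
int argmin(const std::vector<int>& candidates, F cost) {
    int best = candidates[0];
    for (int c : candidates)
        if (cost(c) < cost(best)) best = c;
    return best;
}

int main() {
    std::vector<int> thread_opts = {1, 2, 4, 8, 16};
    std::vector<int> size_opts   = {64, 256, 1024, 4096};
    int threads = 1, task_size = 256;
    for (int iter = 0; iter < 3; ++iter) {   // iterate toward a fixed point
        threads   = argmin(thread_opts, [&](int t) { return measure(t, task_size); });
        task_size = argmin(size_opts,   [&](int s) { return measure(threads, s); });
    }
    std::printf("threads=%d task_size=%d\n", threads, task_size);
}
```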
20140019985 | Parallel Tracing for Performance and Detail - A parallel tracer may perform detailed or heavily instrumented analysis of an application in parallel with a performance or lightly instrumented version of the application. Both versions of the application may operate on the same input stream, but with the heavily instrumented version having different performance results than the lightly instrumented version. The tracing results may be used for various analyses, including optimization and debugging. | 01-16-2014 |
20140019986 | MANAGING MULTI-THREADED OPERATIONS IN A MULTIMEDIA AUTHORING ENVIRONMENT - Managing multi-threaded computer processing, including: processing a main thread for an object in the background of the multi-threaded computer processing without locking the object during its processing in the background, wherein processing a main thread includes: monitoring the state of the object, wherein the object is deemed ready for processing after it satisfies a set of rules to check for its completeness and has not been modified for a pre-determined period of time; creating and adding tasks to a queue for processing once the object is ready; and packaging required information for the tasks into a single data structure that is passed to a task thread and returned to the main thread upon completion. | 01-16-2014 |
20140026137 | PERFORMING SCHEDULING OPERATIONS FOR GRAPHICS HARDWARE - A computing device for performing scheduling operations for graphics hardware is described herein. The computing device includes a central processing unit (CPU) that is configured to execute an application. The computing device also includes a graphics scheduler configured to operate independently of the CPU. The graphics scheduler is configured to receive work queues relating to workloads from the application that are to execute on the CPU and perform scheduling operations for any of a number of graphics engines based on the work queues. | 01-23-2014 |
20140026138 | INFORMATION PROCESSING DEVICE AND BARRIER SYNCHRONIZATION METHOD - An information processing device includes a plurality of barrier banks, and one or more processors including at least one of the plurality of barrier banks. Each of the barrier banks includes one or more hardware threads and a barrier synchronization mechanism. The barrier synchronization mechanism includes a bottom unit having a barrier state and a bitmap indicating that each of the one or more hardware threads has arrived at a synchronization point, and a top unit having a non-arrival counter indicating the number of barrier banks yet to be synchronized. The bottom unit signals bottom unit synchronization completion when all the one or more hardware threads have arrived at a barrier synchronization point. The non-arrival counter decrements its value by 1 upon receipt of the bottom unit synchronization completion, and the top unit sets the barrier state to a value indicating synchronization completion when the non-arrival counter decrements to 0. | 01-23-2014 |
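The two-level barrier in 20140026138 combines a per-bank arrival bitmap with a top-level non-arrival counter. A minimal sketch of that structure in software, with illustrative bank and thread counts; a real implementation would be per-bank hardware rather than shared atomics.

```cpp
#include <atomic>
#include <cstdint>
#include <cstdio>

constexpr int kBanks = 2, kThreadsPerBank = 4;
constexpr uint32_t kAllArrived = (1u << kThreadsPerBank) - 1;

std::atomic<uint32_t> bank_bitmap[kBanks];   // bottom units: arrival bitmaps
std::atomic<int>      non_arrival{kBanks};   // top unit: banks not yet done
std::atomic<bool>     barrier_state{false};  // flips when all banks arrive

void arrive(int bank, int thread_in_bank) {
    uint32_t bit  = 1u << thread_in_bank;
    uint32_t prev = bank_bitmap[bank].fetch_or(bit, std::memory_order_acq_rel);
    if ((prev | bit) == kAllArrived) {
        // Bottom unit synchronization completion: decrement the top counter;
        // the last bank to complete sets the barrier state.
        if (non_arrival.fetch_sub(1, std::memory_order_acq_rel) == 1)
            barrier_state.store(true, std::memory_order_release);
    }
}

int main() {
    for (int b = 0; b < kBanks; ++b)
        for (int t = 0; t < kThreadsPerBank; ++t)
            arrive(b, t);
    std::printf("synchronized: %d\n", barrier_state.load());
}
```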
20140026139 | INFORMATION PROCESSING APPARATUS AND ANALYSIS METHOD - A determination unit determines which one of a first and a second method has shorter response time when the first method is to analyze, in real time, all of information items designated as analysis targets in an analysis request and the second method is to analyze, in non-real time, some of the information items and analyze the remaining information items in real time. If the first method has shorter response time, an analysis management unit causes a first process unit to analyze all the information items in real time. If the second method has shorter response time, the analysis management unit causes the first process unit to analyze, in real time, information items other than the information items to be analyzed in non-real time, and causes a second process unit to analyze, in non-real time, the information items to be analyzed in non-real time. | 01-23-2014 |
20140033211 | LAUNCHING WORKFLOW PROCESSES BASED ON ANNOTATIONS IN A DOCUMENT - Methods and apparatus, including computer program products, implementing and using techniques for launching a process based on annotations made to a document in an enterprise content management system. It is determined whether an annotation has been added to a document in the enterprise content management system, wherein the annotation is stored as a separate element and the separate element is associated with the document. It is determined whether the annotation is of a type indicating that a subsequent workflow process is to be performed. In response to determining that the annotation is of a type indicating that a subsequent workflow process is to be performed, the annotation is parsed to obtain information to be used in the subsequent workflow process. The subsequent workflow process is launched. The launch uses at least some of the information obtained from parsing the annotation as parameters in the subsequent workflow process. | 01-30-2014 |
20140033212 | Multi-Tenant Queue Controller - Novel tools and techniques for controlling workloads in a multi-tenant environment. Some such tools provide a queue controller that can control workflow processing across systems, work (provisioning) engines, computing clusters, and/or physical data centers. In an aspect, a queue controller can determine the status of each work request based on one or more attributes, such as the workflow type, the systems affected by (and/or involved with) the workflow, information about the tenant requesting the workflow, the job type, and/or the like. In another aspect, a queue controller can be policy-based, such that policies can be configured for one or more of these attributes, and the attribute(s) of an individual request can be analyzed against one or more applicable policies to determine the status of the request. Based on this status, the requested work can be scheduled. | 01-30-2014 |
20140033213 | SYSTEM AND METHOD FOR MEMORY MANAGEMENT - A system and method for automatic memory management of a shared memory during parallel processing of a web application. The system includes a computing system configured to allow parallel computing of a web application executed within a web browser. The computing system includes shared memory having a set of blocks distributed to at least a first thread and at least one spawned thread of a processing function of the web application. The memory is partitioned into a nursery heap, a mature heap and a database having a plurality of private nurseries, wherein the first thread has access to the nursery heap and mature heap and the at least one spawned thread has access to an associated one of the plurality of private nurseries. During parallel computing of the web application, management of the shared memory includes garbage collection of at least each of the plurality of private nurseries. | 01-30-2014 |
20140033214 | MANAGING ARRAY COMPUTATIONS DURING PROGRAMMATIC RUN-TIME IN A DISTRIBUTED COMPUTING ENVIRONMENT - A plurality of array partitions are defined for use by a set of tasks of the program run-time. The array partitions can be determined from one or more arrays that are utilized by the program at run-time. Each of the plurality of computing devices are assigned to perform one or more tasks in the set of tasks. By assigning each of the plurality of computing devices to perform one or more tasks, an objective to reduce data transfer amongst the plurality of computing devices can be implemented. | 01-30-2014 |
20140033215 | SCHEDULING METHOD AND SCHEDULING SYSTEM - A scheduling method that is executed by a first device includes acquiring, in response to a process request received by the first device, either a device count of the peripheral devices near the first device or a device count of the peripheral devices near the first device including the first device itself; and determining, by a CPU of the first device, a scheduling method for scheduling a process corresponding to the process request, based on the device count. | 01-30-2014 |
20140033216 | TASK PROCESSING METHOD AND DEVICE - The present invention relates to the field of computer technologies, and discloses a task processing method and an associated mobile terminal for performing the method. The method includes: scanning an application program, so as to obtain a list of predefined tasks corresponding to the application program; comparing the list of predefined tasks with a preset white list of tasks; removing a matched task from the list of predefined tasks, so as to obtain a new task list, so that a user selects a task according to need from the new task list for execution; detecting one or more user selections of members of the new task list; updating the new task list and the preset white list of tasks according to the user selections; and performing the updated new task list using the application program. | 01-30-2014 |
20140033217 | MULTI-CORE PROCESSORS - A method of operating a multi-core processor. In one embodiment, each processor core is provided with its own private cache and the device comprises or has access to a common memory, and the method comprises executing a processing thread on a selected first processor core, and implementing a normal access mode for executing an operation within a processing thread and comprising allocating sole responsibility for writing data to given blocks of said common memory, to respective processor cores. The method further comprises implementing a speculative execution mode switchable to override said normal access mode. This speculative execution mode comprises, upon identification of said operation within said processing thread, transferring responsibility for performing said operation to a plurality of second processor cores, and optionally performing said operation on the first processor core as well. | 01-30-2014 |
20140040899 | SYSTEMS AND METHODS FOR DISTRIBUTING A WORKLOAD IN A DATA CENTER - A data center workload distribution management system includes a cooling cost engine to determine a cooling cost or cooling capacity for each of a plurality of zones of a data center and a workload distribution engine. The workload distribution engine is to identify the zone that has a lowest cooling cost and sufficient cooling capacity and also has sufficient processing capacity for a workload, determine a local cooling efficiency index for at least one location within the identified zone, and distribute the workload to the location having a local cooling efficiency index that indicates the highest cooling efficiency. | 02-06-2014 |
20140040900 | STORAGE MANAGING DEVICE AND METHOD AND ELECTRONIC APPARATUS - A storage managing device and method and an electronic apparatus are provided. The storage managing device is applied to a storage device composed of a plurality of storage blocks, comprising: a thread collecting unit configured to collect threads to be executed in a predetermined time; a thread dividing unit configured to divide the collected threads into n thread groups based on a predetermined strategy; a thread holding unit configured to designate one or more storage blocks to each thread group to store the data necessary for the execution of each thread group into the one or more storage blocks; a thread executing unit configured to execute the threads; and a power consumption setting unit configured to set the one or more storage blocks designated to the threads being executed to an active status, while setting other storage blocks to a low power consumption status. With the storage managing device and method and the electronic apparatus according to the embodiments of this invention, the power consumption and temperature of the storage device can be lowered significantly while maintaining the capability of the storage device. | 02-06-2014 |
20140040901 | INTER-THREAD DATA COMMUNICATIONS IN A COMPUTER PROCESSOR - A first set of one or more hardware threads for receiving messages sent from hardware threads is registered. After receiving indications of a message location value and a number, the message location value is incremented and sent to a different hardware thread of the first set of one or more hardware threads until the message location value has been incremented the number of times or a criterion for interrupting the incrementing and sending is satisfied. An actual number of times the message location value was incremented is indicated to a hardware thread that sent the indications of the message location value and the number. | 02-06-2014 |
20140040902 | Process Instance Serialization - Method and system for serializing access to datasets, suitable for use in a workflow management system which executes multiple business processes, wherein a single process instance is enabled to invoke web services which may update datasets of different storages holding redundant information. Business Process Execution Language for Web Services allows defining business processes that make use of web services and business processes that externalize their functionality as web services. As the business process has no knowledge about data that is accessed by invoked web services, concurrent process instances may update the same pieces of information within a database. Unless access to the data is carried out as a transaction, parallel execution of the process instances may cause data inconsistencies, which may be avoided by serializing the execution of process instances based on correlation information associated with messages consumed by the process instances. | 02-06-2014 |
20140047447 | WORK SCHEDULING METHOD AND SYSTEM IMPLEMENTED VIA CLOUD PLATFORM - A work scheduling method implemented via a cloud platform is provided. The work scheduling method is used in a cloud platform work schedule system. The method includes: arranging, by a developing interface of a developing module, a work schedule; generating, by the developing module, a dynamic linking library (DLL) which corresponds to the work schedule and uploading the dynamic linking library to the cloud platform through the internet; transferring, by a disposing module, the dynamic linking library to an application service; computing, by a scheduling module, a scheduling time according to the work schedule; and executing, by an executing module, the application service according to the scheduling time. | 02-13-2014 |
20140053161 | Method for Adaptive Scheduling of Multimedia Jobs - Systems and methods described herein provide a method for managing task scheduling on an accelerated processing device. Duration characteristics for a plurality of offset values are determined based on execution of first and second processing tasks within an accelerated processing device. An offset value from the plurality of offset values is selected indicating a difference in an execution start time between the first processing task and the second processing task. Additional executions of the first and second processing tasks are scheduled based on the selected offset value. | 02-20-2014 |
20140059551 | DATA STORAGE I/O COMMUNICATION METHOD AND APPARATUS - A method of scheduling requests from various services to a data storage resource, includes receiving service requests, the service requests including metadata specifying a service ID and a data size of payload data associated with the request, at least some of the service IDs having service throughput metadata specifying a required service throughput associated therewith; arranging the requests into FIFO throttled queues based on the service ID; setting a deadline for processing of a request in a throttled queue, the deadline selected in dependence upon the size of the request and the required service throughput associated therewith; providing a time credit value for each throttled queue, the time credit value including an accumulated value of the time by which a deadline for that queue has been missed; comparing the time credit value of a throttled queue to the time required to service the next request in that throttled queue. | 02-27-2014 |
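A minimal sketch of the deadline and time-credit bookkeeping described above, assuming a per-service throughput table and a monotonic clock; the class layout and field names are illustrative, not the claimed design.

```python
import time
from collections import deque

SERVICE_THROUGHPUT = {"svc_a": 50e6, "svc_b": 10e6}   # required bytes/sec per service ID

class ThrottledQueue:
    """FIFO of requests for one service ID, with an accumulated time credit."""
    def __init__(self, service_id):
        self.service_id = service_id
        self.requests = deque()        # (payload_size, deadline) pairs
        self.time_credit = 0.0         # total time by which deadlines were missed

    def enqueue(self, payload_size):
        # Deadline depends on request size and the service's required throughput.
        deadline = time.monotonic() + payload_size / SERVICE_THROUGHPUT[self.service_id]
        self.requests.append((payload_size, deadline))

    def service_next(self):
        payload_size, deadline = self.requests.popleft()
        lateness = time.monotonic() - deadline
        if lateness > 0:
            self.time_credit += lateness   # credit builds when deadlines slip
        return payload_size
```

A scheduler could then compare `time_credit` against the estimated service time of each queue's head request when choosing which queue to serve next.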
20140059552 | TRANSPARENT EFFICIENCY FOR IN-MEMORY EXECUTION OF MAP REDUCE JOB SEQUENCES - Executing a map reduce sequence may comprise executing all jobs in the sequence by a collection of a plurality of processes with each process running zero or more mappers, combiners, partitioners and reducers for each job, and transparently sharing heap state between the jobs to improve metrics associated with the jobs. Processes may communicate among themselves to coordinate completion of map, shuffle and reduce phases, and completion of all of said jobs in the sequence. | 02-27-2014 |
20140059553 | HARDWARE ASSISTED REAL-TIME SCHEDULER USING MEMORY MONITORING - Apparatus and method for real-time scheduling. An apparatus includes first and second processing elements and a memory. The second processing element is configured to generate or modify a schedule of one or more tasks, thereby creating a new task schedule, and to write to a specified location in the memory to indicate that the new schedule has been created. The first processing element is configured to monitor for a write to the specified location in the memory and execute one or more tasks in accordance with the new schedule in response to detecting the write to the specified location. The first processing element may be configured to begin executing tasks based on detecting the write without invoking an interrupt service routine. The second processing element may store the new schedule in the memory. | 02-27-2014 |
20140059554 | PROCESS GROUPING FOR IMPROVED CACHE AND MEMORY AFFINITY - A computer program product for process allocation is configured to determine a set of two or more processes of a plurality of processes that share at least one resource in a multi-node system, wherein each of the set of two or more processes is running on different nodes of the multi-node system. The program code can be configured to calculate a value based on a weight of the resource and frequency of access of the resource by each process. The program code can be configured to determine a pair of processes of the set of processes having a greatest sum of calculated values by resource. The program code can be configured to allocate a first process of the pair of processes from a first node in the multi-node system to a second node in the multi-node system that hosts a second process of the pair of processes. | 02-27-2014 |
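The pair-selection step above lends itself to a small worked example: the value for a process pair is taken here as the resource weight times the summed access frequencies over resources both processes touch. The data layout and scoring details are assumptions for illustration only.

```python
from itertools import combinations

weights = {"pageA": 3.0, "lockB": 1.0}                 # per-resource weights
freq = {("p1", "pageA"): 40, ("p2", "pageA"): 35,      # (process, resource)
        ("p3", "lockB"): 5}                            # -> access frequency

def pair_value(a, b):
    """Assumed scoring: weight * combined frequency over shared resources."""
    total = 0.0
    for res, w in weights.items():
        fa, fb = freq.get((a, res), 0), freq.get((b, res), 0)
        if fa and fb:                                  # shared by both processes
            total += w * (fa + fb)
    return total

procs = ["p1", "p2", "p3"]
best = max(combinations(procs, 2), key=lambda p: pair_value(*p))
print(best)   # ('p1', 'p2'): candidates to co-locate on one node
```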
20140059555 | PROCESSING EXECUTION REQUESTS WITHIN DIFFERENT COMPUTING ENVIRONMENTS - A computerized method, computer system, and computer program product for processing an execution request within different computing environments. Execution requests and generated reference information are forwarded to the different computing environments, where the requests are executed using the reference information. Results of the processed execution requests are collected from the different computing environments. The results are compared to identify whether a discrepancy exists, giving an indication of a software or hardware error. | 02-27-2014 |
20140059556 | ENVIRONMENT BASED NODE SELECTION FOR WORK SCHEDULING IN A PARALLEL COMPUTING SYSTEM - A method, apparatus, and program product manage scheduling of a plurality of jobs in a parallel computing system of the type that includes a plurality of computing nodes and is disposed in a data center. The plurality of jobs are scheduled for execution on a group of computing nodes from the plurality of computing nodes based on the physical locations of the plurality of computing nodes in the data center. The group of computing nodes is further selected so as to distribute at least one of a heat load and an energy load within the data center. The plurality of jobs may be additionally scheduled based upon an estimated processing requirement for each job of the plurality of jobs. | 02-27-2014 |
20140068620 | TASK EXECUTION & MANAGEMENT IN A CLUSTERED COMPUTING ENVIRONMENT - Machines, systems and methods for task management in a computer implemented system. The method comprises registering a task with brokers residing on one or more nodes to manage the execution of a task to completion, wherein a first broker is accompanied by a first set of worker threads co-located on the node on which the first broker is executed, wherein the first broker assigns responsibility of execution for the task to the one or more worker threads in the first set of co-located worker threads, wherein in response to a failure associated with a first worker thread in the first set, the first broker reassigns the responsibility of execution for the task to a second worker thread in the first set, wherein in response to a failure associated with the first broker, a second broker assigns responsibility of execution for the task to one or more co-located worker threads. | 03-06-2014 |
20140068621 | DYNAMIC STORAGE-AWARE JOB SCHEDULING - Computer-implemented techniques for executing jobs on parallel processors using dynamic storage-aware job scheduling are disclosed. A networked storage system is accessed along with a scheduling queue of pending job processes. The networked storage system is polled to determine the status of members of the storage system. These members comprise storage devices and storage shares. A database is created of metrics describing the status of the members of the networked storage system. Job processes are then dispatched to the networked storage system based on this database of metrics. | 03-06-2014 |
20140068622 | PACKET PROCESSING ON A MULTI-CORE PROCESSOR - A method for packet processing on a multi-core processor. According to one embodiment of the invention, a first set of one or more processing cores are configured to include the capability to process packets belonging to a first set of one or more packet types, and a second set of one or more processing cores are configured to include the capability to process packets belonging to a second set of one or more packet types, where the second set of packet types is a subset of the first set of packet types. Packets belonging to the first set of packet types are processed at a processing core of either the first or second set of processing cores. Packets belonging to the second set of packet types are processed at a processing core of the first set of processing cores. | 03-06-2014 |
20140075443 | FRAMEWORK FOR CRITICAL-PATH RESOURCE-OPTIMIZED PARALLEL PROCESSING - The disclosure generally describes computer-implemented methods, computer-program products, and systems for critical path, resource-optimized, parallel processing. One computer-implemented method includes instantiating a resource consumption optimizer framework (RCOF) for a plurality of sub-processes associated with a process, loading the plurality of sub-processes into a memory in accordance with a calculated optimized resource consumption pattern, associating each sub-process of the plurality of sub-processes with an agent, wherein the agent communicates with the RCOF, executing a particular sub-process of the plurality of sub-processes loaded into the memory, wherein the sub-process execution start is gated by an associated agent based upon at least a determined buffer value, and notifying the RCOF of the particular sub-process execution completion. | 03-13-2014 |
20140075444 | Multiple Cell Dequeue for High Speed Queueing - A system includes a task scheduler to select a queue from a port. The port includes a determined number of cell slots between pick opportunities. The task scheduler selects a queue at a pick opportunity. A queue manager connects with the task scheduler to pop cell packets from the selected queue, and to send update information to the task scheduler. The update information includes information of how the queue manager expects to fill the cell slots between the task scheduler selections. The task scheduler makes subsequent queue selections based on the update information. | 03-13-2014 |
20140082623 | SYSTEM AND METHOD OF OVERRIDING A SCHEDULED TASK IN AN INTRUSION SYSTEM TO REDUCE FALSE ALARMS - Systems and methods of overriding a scheduled task in an intrusion system are provided. A method can include identifying a task scheduled to be executed at a scheduled time, identifying a recipient of an alert message for the task, identifying a transmission medium for the alert message for the task, identifying a predetermined period of time prior to the scheduled time, transmitting the alert message to the recipient via the transmission medium when the predetermined period of time prior to the scheduled time occurs, receiving a response message from the recipient, and, based on contents of the response message, executing the task at the scheduled time, canceling the task at the scheduled time, or rescheduling the task for a new scheduled time. The method can confirm receipt of a valid user password before executing, canceling, or rescheduling the task. | 03-20-2014 |
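A condensed sketch of the alert-and-override sequence above, assuming blocking helpers for transport and response collection; every name and the response format here are hypothetical.

```python
import time

def run_with_override(task, scheduled_at, recipient, send_alert,
                      wait_for_response, valid_password, lead_time=600):
    """Alert `lead_time` seconds before `scheduled_at`, then act on the reply."""
    time.sleep(max(0.0, scheduled_at - lead_time - time.time()))
    send_alert(recipient, task)                  # via the chosen transmission medium
    response = wait_for_response(recipient)      # assumed blocking helper
    if response.get("password") != valid_password:
        response = {}                            # ignore unauthenticated replies
    action = response.get("action", "execute")
    if action == "cancel":
        return "canceled"
    if action == "reschedule":
        return ("rescheduled", response["new_time"])
    time.sleep(max(0.0, scheduled_at - time.time()))
    task()
    return "executed"
```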
20140082624 | EXECUTION CONTROL METHOD AND MULTI-PROCESSOR SYSTEM - An execution control method is executed by a first processor of a multi-processor system controlled by plural operating systems (OSs). The execution control method includes determining by referring to first information that is stored in a storage unit and identifies a synchronization process for threads executed by OSs that are different from one another and among the OSs, whether a synchronization process for which an execution request is issued by a thread that is executed by a first OS that is among the OSs and controls the first processor, is the synchronization process for threads executed by OSs that are different from one another; and upon determining so, causing the first OS to execute the synchronization process for which the execution request is issued, using a storage area accessible by the first OS and specific to the first processor. | 03-20-2014 |
20140089928 | METHOD OF SOA PERFORMANCE TUNING - Systems and methods of SOA performance tuning are provided. In accordance with an embodiment, one such method can comprise monitoring a plurality of processing stages, calculating a processing speed for each of the processing stages, and tuning a slowest processing stage of the plurality of processing stages. | 03-27-2014 |
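The tuning loop described above reduces to a few lines; the measurement and tuning hooks below are assumed interfaces, not the patented mechanism.

```python
def tune_slowest(stages, measure_speed, tune):
    """Measure each processing stage's speed and tune the slowest one."""
    speeds = {stage: measure_speed(stage) for stage in stages}
    slowest = min(speeds, key=speeds.get)
    tune(slowest)                      # e.g. grow that stage's worker pool
    return slowest, speeds[slowest]
```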
20140089929 | DYNAMIC STREAM PROCESSING WITHIN AN OPERATOR GRAPH - A method and system for processing a stream of tuples in a stream-based application is disclosed. The method may include a first stream operator determining whether a requirement to modify processing of a first tuple at a second stream operator exists. The method may provide for associating an indication to modify processing of the first tuple at the second stream operator if the requirement exists. | 03-27-2014 |
20140089930 | HOST SYSTEM - A host system includes a plurality of cores and is designed such that one real-time process and one core-local timer are run on each of the plurality of cores. | 03-27-2014 |
20140096137 | Processor Having Per Core and Package Level P0 Determination Functionality - A processor is described that includes a processing core and a plurality of counters for the processing core. The plurality of counters are to count a first value and a second value for each of multiple threads supported by the processing core. The first value reflects a number of cycles at which a non-sleep state has been requested for the first value's corresponding thread, and the second value reflects a number of cycles at which a non-sleep state and a highest performance state have been requested for the second value's corresponding thread. The first value's corresponding thread and the second value's corresponding thread are the same thread. | 04-03-2014 |
20140096138 | System and Method For Large-Scale Data Processing Using an Application-Independent Framework - A large-scale data processing system and method for processing data in a distributed and parallel processing environment is disclosed. The system comprises a set of interconnected computing systems, each having one or more processors and memory. The set of interconnected computing systems include: a set of application-independent map modules for reading portions of input files containing data, and for producing intermediate data values by applying at least one user-specified, application-specific map operation to the data; a set of intermediate data structures distributed among a plurality of the interconnected computing systems for storing the intermediate data values; and a set of application-independent reduce modules, distinct from the plurality of application-independent map modules, for producing final output data by applying at least one user-specified, application-specific reduce operation to the intermediate data values. | 04-03-2014 |
20140101661 | METHOD AND APPARATUS FOR TIME MANAGEMENT AND SCHEDULING FOR SYNCHRONOUS PROCESSING ON A CLUSTER OF PROCESSING NODES - Certain aspects of the present disclosure provide techniques for time management and scheduling of synchronous neural processing on a cluster of processing nodes. A slip (or offset) may be introduced between processing nodes of a distributed processing system formed by a plurality of interconnected processing nodes, to enable faster nodes to continue processing without waiting for slower nodes to catch up. In certain aspects, a processing node, after completing each processing step, may check for received completion packets and apply a defined constraint to determine whether it may start processing a subsequent step or not. | 04-10-2014 |
20140101662 | EFFICIENT LOCK HAND-OFF IN A SYMMETRIC MULTIPROCESSOR SYSTEM - Provided are techniques for providing a first lock, corresponding to a resource, in a memory that is global to a plurality of processors; spinning, by a first thread running on a first processor of the processors, at a low hardware-thread priority on the first lock such that the first processor does not yield processor cycles to a hypervisor; spinning, by a second thread running on a second processor, on a second lock in a memory local to the second processor such that the second processor is configured to yield processor cycles to the hypervisor; acquiring the first lock and the corresponding resource by the first thread; and, in response to the acquiring of the first lock by the first thread, spinning, by the second thread, at the low hardware-thread priority on the first lock rather than the second lock such that the second processor does not yield processor cycles to the hypervisor. | 04-10-2014 |
20140109095 | Seamless extension of local computing power - Machines, systems and methods for remotely provisioning computing power over a communications network are provided. The method may comprise selecting one or more tasks being executed on a first computing system to be migrated for execution on a second computing system connected to the first computing system by way of a communications network; determining a first point of execution reached during the execution of at least a selected task on the first computing system prior to the selected task being migrated for execution to the second computing system; migrating the selected task to the second computing system, wherein the second computing system continues to execute the selected task from the first point of execution; and monitoring the connection between the first computing system and the second computing system so that in response to detecting a disconnection, execution of the selected task continues seamlessly. | 04-17-2014 |
20140109096 | Time Monitoring in a Processing Element and Use - System and method for controlling thread execution via time monitoring circuitry in a processing element. Execution of a thread may be suspended via a thread suspend/resume logic block included in the processing element in response to a received suspend thread instruction. An indication of a wakeup time may be received by a time monitoring circuit (TMC) included in the processing element. Time may be monitored via the TMC using a clock included in the processing element, until the wakeup time is reached. The thread suspend/resume logic block included in the processing element may be invoked by the TMC in response to the wakeup time being reached, thereby resuming execution of the thread. | 04-17-2014 |
20140109097 | Automated Technique to Configure and Provision Components of a Converged Infrastructure - A technique to provision a converged infrastructure (CI) includes generating task definitions to configure respective ones of compute, storage, and network components of the CI when invoked. Each task definition includes a task identifier (ID), one or more component configuration commands, and one or more task arguments through which one or more corresponding component configuration parameters are passed to corresponding ones of the one or more component configuration commands. The technique further includes automatically invoking each of the task definitions by task ID according to an ordered sequence in order to configure the CI. The automatic invocation includes providing the one or more component configuration commands and the corresponding one or more passed configuration parameters of each invoked task definition to the respective ones of the CI components. | 04-17-2014 |
20140109098 | MULTI-THREAD PROCESSOR - The scheduler performs thread scheduling that repeatedly specifies each hardware thread in a first group of the multiple hardware threads for a number of times set in advance for that hardware thread, and specifies any one of the hardware threads in a second group, which includes the other hardware threads, for a number of times set in advance for the second group. Moreover, when a hardware thread in the first group specified by the thread scheduling is nondispatchable, the scheduler performs rescheduling to specify a hardware thread in the second group in place of the hardware thread in the first group. | 04-17-2014 |
20140109099 | CODE COVERAGE FRAMEWORK - A computer program product records an execution of a program instruction. A determination is made that a thread has entered a program unit. Another determination is made that the thread is associated with at least one attribute that matches a set of thread recording criteria. An instruction recording mechanism for the thread is dynamically activated in response to the at least one attribute of the thread matching the set of thread recording criteria. | 04-17-2014 |
20140109100 | SCHEDULING METHOD AND SYSTEM - A scheduling method that is executed by a first CPU includes determining whether a task belongs to a first task category; determining whether a first access area accessed by the task is located in a first memory or a second memory, when the task belongs to the first task category; and setting a memory accessed by the task to the first memory or the second memory, based on a result of the determining. | 04-17-2014 |
20140109101 | EFFECTIVE SCHEDULING OF PRODUCER-CONSUMER PROCESSES IN A MULTI-PROCESSOR SYSTEM - A novel technique improves throughput in a multi-core system in which data is processed according to a producer-consumer relationship by eliminating latencies caused by compulsory cache misses. The producer and consumer entities run as multiple slices of execution. Each such slice has an associated execution context that comprises the code and data that the particular slice would access. The execution contexts of the producer and consumer slices are small enough to fit in the processor caches simultaneously. When a producer entity scheduled on a first core completes production of data elements as constrained by the size of cache memories, a consumer entity is scheduled on that same core to consume the produced data elements. Meanwhile, a second slice of the producer entity is moved to another core and a second slice of a consumer entity is scheduled to consume elements produced by the second slice of the producer. | 04-17-2014 |
20140115591 | APPARATUS, SYSTEM AND METHOD FOR PROVIDING FAIRNESS IN TASK SERVICING - A storage system that is configured to fairly service requests from different host systems particularly in congested situations. To balance the processing of tasks between different clients, the system sorts tasks received from different clients into task lists. In particular, the system sorts the incoming tasks based on the ITL (Initiator, Target, LU) nexus information associated with each task. In some instances, a new task list is created for each ITL nexus. The sorting of tasks may provide for a more even distribution of tasks and thus a more fair processing of tasks. More specifically, because tasks from each list are processed in round-robin fashion, tasks arriving from even the slowest clients are given a substantially equal chance of being processed as the tasks arriving from the faster clients. | 04-24-2014 |
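A small sketch of the ITL-keyed task lists with round-robin servicing; the nexus tuple comes from the abstract, while the container choices are illustrative.

```python
from collections import defaultdict, deque

task_lists = defaultdict(deque)        # (initiator, target, lu) -> FIFO of tasks

def submit(initiator, target, lu, task):
    """Sort incoming tasks into a per-nexus list (created on first use)."""
    task_lists[(initiator, target, lu)].append(task)

def service_round_robin(budget):
    """Take up to `budget` tasks, one per nexus per pass, so slow clients
    get roughly the same chance of service as fast ones."""
    done = []
    while len(done) < budget and any(task_lists.values()):
        for nexus in list(task_lists):
            if task_lists[nexus]:
                done.append(task_lists[nexus].popleft())
                if len(done) >= budget:
                    break
    return done
```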
20140115592 | METHOD FOR ESTIMATING JOB RUN TIME - A process controller adapted to provide an estimated prediction of a processing time for a data processing job to be run on one or more of a plurality of data processing devices that operate within a distributed processing system having a range of platforms, the process controller being in communication with a job prediction engine adapted to calculate an estimated processing time associated with the data processing job, wherein the process controller uses the estimated processing time to determine the estimated prediction and is further adapted to control the assignment of the data processing job to the data processing devices upon acceptance of the estimated prediction by a user. | 04-24-2014 |
20140115593 | AFFINITY OF VIRTUAL PROCESSOR DISPATCHING - In an embodiment, a request is received for a first partition to execute on a first virtual processor. If the first physical processor is available at a first node, the first virtual processor is dispatched to execute at the first physical processor at the first node that is the home node of the first virtual processor. If the first physical processor is not available, a determination is made whether the first physical processor is assigned to a second virtual processor and a home node of the second virtual processor is not the first node. If the first physical processor is assigned to a second virtual processor and the home node of the second virtual processor is not the first node, execution of the second virtual processor is stopped on the first physical processor and the first virtual processor is dispatched to the first physical processor. | 04-24-2014 |
20140115594 | MECHANISM TO SCHEDULE THREADS ON OS-SEQUESTERED SEQUENCERS WITHOUT OPERATING SYSTEM INTERVENTION - Method, apparatus and system embodiments to schedule OS-independent “shreds” without intervention of an operating system. For at least one embodiment, the shred is scheduled for execution by a scheduler routine rather than the operating system. A scheduler routine may run on each enabled sequencer. The schedulers may retrieve shred descriptors from a queue system. The sequencer associated with the scheduler may then execute the shred described by the descriptor. Other embodiments are also described and claimed. | 04-24-2014 |
20140123145 | EFFICIENT MEMORY VIRTUALIZATION IN MULTI-THREADED PROCESSING UNITS - A technique for simultaneously executing multiple tasks, each having an independent virtual address space, involves assigning an address space identifier (ASID) to each task and constructing each virtual memory access request to include both a virtual address and the ASID. During virtual to physical address translation, the ASID selects a corresponding page table, which includes virtual to physical address mappings for the ASID and associated task. Entries for a translation look-aside buffer (TLB) include both the virtual address and ASID to complete each mapping to a physical address. Deep scheduling of tasks sharing a virtual address space may be implemented to improve cache affinity for both TLB and data caches. | 05-01-2014 |
20140123146 | EFFICIENT MEMORY VIRTUALIZATION IN MULTI-THREADED PROCESSING UNITS - A technique for simultaneously executing multiple tasks, each having an independent virtual address space, involves assigning an address space identifier (ASID) to each task and constructing each virtual memory access request to include both a virtual address and the ASID. During virtual to physical address translation, the ASID selects a corresponding page table, which includes virtual to physical address mappings for the ASID and associated task. Entries for a translation look-aside buffer (TLB) include both the virtual address and ASID to complete each mapping to a physical address. Deep scheduling of tasks sharing a virtual address space may be implemented to improve cache affinity for both TLB and data caches. | 05-01-2014 |
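The two entries above share one abstract, so a single toy model covers both: the ASID selects the page table, and TLB entries are keyed on (ASID, virtual page number). Page size, table contents, and the plain dictionaries are simplifying assumptions.

```python
PAGE = 4096
page_tables = {1: {1: 0x8000},         # ASID 1: virtual page 1 -> frame 0x8000
               2: {1: 0xC000}}         # ASID 2: same page, different frame
tlb = {}                               # (asid, vpn) -> physical frame base

def translate(asid, vaddr):
    vpn, offset = divmod(vaddr, PAGE)
    frame = tlb.get((asid, vpn))
    if frame is None:                  # TLB miss: walk this ASID's page table
        frame = page_tables[asid][vpn]
        tlb[(asid, vpn)] = frame
    return frame + offset

# Two tasks use the same virtual address yet map to different frames:
print(hex(translate(1, 0x1234)), hex(translate(2, 0x1234)))   # 0x8234 0xc234
```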
20140123147 | SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR PARALLEL RECONSTRUCTION OF A SAMPLED SUFFIX ARRAY - A system, method, and computer program product are provided for reconstructing a sampled suffix array. The sampled suffix array is reconstructed by, for each index of a sampled suffix array for a string, calculating a block value corresponding to the index based on an FM-index, and reconstructing the sampled suffix array corresponding to the string based on the block values. Calculating at least two block values for at least two corresponding indices of the sampled suffix array is performed in parallel. | 05-01-2014 |
20140123148 | STREAM DATA PROCESSOR - Techniques are provided aimed at improving the flexibility and reducing the area and power consumption of digital baseband integrated circuits by using a stream data processor based modem architecture. Semiconductor companies offering baseband ICs for handsets face the challenges of improving die size efficiency, power efficiency, performance, and time to market, and of coping with evolving standards. Software defined radio based implementations offer a fast time to market. Dedicated hardware designs give the best die size and power efficiency. To combine the advantages of dedicated hardware with the advantages of conventional software defined radio solutions, the stream data processor is partitioned into a stream processor unit, which implements processing functions in dedicated hardware and is hence die size and power efficient, and a flexible stream control unit which may be software defined to minimise the time to market of the product. | 05-01-2014 |
20140123149 | SERVER - CLIENT NEGOTIATIONS IN A MULTI-VERSION MESSAGING ENVIRONMENT - Disclosed is a method for selecting one of a plurality of versions of a software component of a message queuing software product to perform a task. One or more rules describing one or more characteristics of the plurality of versions of the software component are provided. Responsive to a determination that a rule applies to the task to be performed: a list of the plurality of versions of the message queuing software product is obtained; it is checked whether the software component of one of the plurality of versions of the message queuing software product is available for use; and the most preferred version of the message queuing software component available is used to perform the task. Responsive to a determination that none of the rules apply to the task to be performed, the task is performed with the most preferred version of the message queuing software component. | 05-01-2014 |
20140130050 | MAIN PROCESSOR SUPPORT OF TASKS PERFORMED IN MEMORY - According to one embodiment of the present invention, a method for operating a computer system including a main processor, a processing element and memory is provided. The method includes receiving, at the processing element, a task from the main processor, performing, by the processing element, an instruction specified by the task, determining, by the processing element, that a function is to be executed on the main processor, the function being part of the task, sending, by the processing element, a request to the main processor for execution, the request comprising execution of the function and receiving, at the processing element, an indication that the main processor has completed execution of the function specified by the request. | 05-08-2014 |
20140130051 | MAIN PROCESSOR SUPPORT OF TASKS PERFORMED IN MEMORY - According to one embodiment of the present invention, a computer system for executing a task includes a main processor, a processing element and memory. The computer system is configured to perform a method including receiving, at the processing element, the task from the main processor, performing, by the processing element, an instruction specified by the task, determining, by the processing element, that a function is to be executed on the main processor, the function being part of the task, sending, by the processing element, a request to the main processor for execution, the request including execution of the function and receiving, at the processing element, an indication that the main processor has completed execution of the function specified by the request. | 05-08-2014 |
20140130052 | SYSTEM AND METHOD FOR COMPILING OR RUNTIME EXECUTING A FORK-JOIN DATA PARALLEL PROGRAM WITH FUNCTION CALLS ON A SINGLE-INSTRUCTION-MULTIPLE-THREAD PROCESSOR - A system and method for compiling or runtime executing a fork-join data parallel program with function calls. In one embodiment, the system includes: (1) a partitioner operable to partition groups into a master group and at least one worker group and (2) a thread designator associated with the partitioner and operable to designate only one thread from the master group for execution and all threads in the at least one worker group for execution. | 05-08-2014 |
20140130053 | DATA PROCESSING METHOD, APPARATUS AND MOBILE TERMINAL - The present disclosure discloses a data processing method, apparatus and mobile terminal. In the data processing method, the mobile terminal performs data computation in a sub-thread of the current program when a data request is received. The mobile terminal loads the requested data in the main thread of the current program based on the data computation results and displays the loaded requested data. The present disclosure ensures the smoothness of user interface threads, the stability of systems, and the display performance of user interfaces. | 05-08-2014 |
20140137121 | JOB MANAGEMENT SYSTEM AND JOB CONTROL METHOD - In the prior art, the number of jobs that can be executed in parallel in a parallel computer is restricted by the types and number of licenses available, and if licenses are insufficient, a new job cannot be executed until already entered jobs in execution are completed. In order to solve this problem, the present invention releases a resource of a low-priority job when licenses are insufficient at the time a job is entered, and allocates the released resource to a high-priority job so that the high-priority job can be executed, thereby enhancing the efficiency of use of resources. | 05-15-2014 |
20140137122 | MODIFIED BACKFILL SCHEDULER AND A METHOD EMPLOYING FREQUENCY CONTROL TO REDUCE PEAK CLUSTER POWER REQUIREMENTS - A method is disclosed for reducing peak power usage in a large computer system with multiple nodes by identifying jobs which can be scheduled to run at reduced frequency in order to reduce total power usage during certain time periods. The backfill scheduler of the computer system's operating system performs steps providing for selected jobs on selected nodes of the computer system to be run at reduced frequency such that those jobs are partially processed during previously underutilized holes in the computer system schedule in order to reduce overall peak power during a period of processing. | 05-15-2014 |
20140137123 | MICROCOMPUTER FOR LOW POWER EFFICIENT BASEBAND PROCESSING - A microcomputer for executing an application is described. The microcomputer comprises a heterogeneous coarse-grained reconfigurable array (CGRA) comprising a plurality of functional units, optionally register files, and memories, and at least one processing unit supporting multiple threads of control. The at least one processing unit is adapted for allowing each thread of control to reconfigure at run-time the claiming of one or more particular types of the functional units to work for that thread depending on requirements of the application, e.g. workload, and/or the environment, e.g. current usage of FUs. This way, multithreading with dynamic allocation of CGRA resources is implemented. Based on the demand of the application and the current utilization of the CGRA, different resource combinations can be claimed. | 05-15-2014 |
20140137124 | System and Method for Program and Resource Allocation Within a Data-Intensive Computer - A system and method for operating a data-intensive computer is provided. The data-intensive computer includes a processing sub-system formed by a plurality of processing node servers and a database sub-system formed by a plurality of database servers configured to form a collective database in excess of a petabyte of storage. The data-intensive computer also includes an operating system sub-system formed by a plurality of operating system servers that extend a unifying operating system environment across the processing sub-system, the database sub-system, and the operating system sub-system to act as components in a single data-intensive computer. The operating system sub-system is configured to coordinate execution of a single application as distributed processes having at least one of the distributed processes executed on the processing sub-system and at least one of the distributed processes executed on the database sub-system. | 05-15-2014 |
20140137125 | APPLICATION MIGRATION WITH DYNAMIC OPERATING SYSTEM CONTAINERS - Methods and systems of migrating applications ( | 05-15-2014 |
20140137126 | Technique for Task Sequence Execution - A technique for executing a task sequence on a computing system comprising a multiple task processor having an on-chip memory and further comprising an external memory connected to the multiple task processor is provided. A method implementation of the technique comprises transferring load module data from the external memory into the on-chip memory in order to generate a load module sequence within the on-chip memory, wherein the generation of a load module of the load module sequence comprises the following processes: determining which parts of the load module are currently stored within the on-chip memory, and transferring only load module data from the external memory into the on-chip memory for parts of the load module which are currently not stored within the on-chip memory, wherein each load module of the load module sequence is generated within an individual address range of the on-chip memory which is chosen in dependence on the load module sequence. The method implementation further comprises executing the task sequence by running the load module sequence. | 05-15-2014 |
20140137127 | Distributed Execution System and Distributed Program Execution Method - A distributed execution system includes an output-side pipe worker that operates on a node same as an output-side worker realized by a first distributed program, and an input-side pipe worker that operates on a node same as an input-side worker realized by a second distributed program, receives output data on the output-side worker from the output-side pipe worker, and transfers it to the input-side worker, in which the output-side pipe worker acquires, from the output-side worker, output data together with a sequence number indicating an order of the output data to be transmitted to the input-side worker, acquires a restore sequence number corresponding to an execution state of the input-side worker, compares the sequence number and the restore sequence number, and does not forward, to the input-side pipe worker, the output data acquired together with the sequence number indicating the order equal to or earlier than the restore sequence number. | 05-15-2014 |
20140143783 | THREAD CONSOLIDATION IN PROCESSOR CORES - According to one embodiment, a method for thread consolidation is provided for a system that includes an operating system and a multi-core processing chip in communication with an accelerator chip. The method includes running an application having software threads on the operating system, mapping the software threads to physical cores in the multi-core processing chip, identifying one or more idle hardware threads in the multi-core processing chip and identifying one or more idle accelerator units in the accelerator chip. The method also includes executing the software threads on the physical cores and the accelerator unit. The method also includes the controller module consolidating the software threads executing on the physical cores, resulting in one or more idle physical cores and a consolidated physical core. The method also includes the controller module activating a power savings mode for the one or more idle physical cores. | 05-22-2014 |
20140143784 | Controlling Remote Electronic Device with Wearable Electronic Device - In one embodiment, an apparatus includes a wearable computing device that includes one or more processors and a memory. The memory is coupled to the processors and includes instructions executable by the processors. When executing the instructions, the processors determine whether an application is running on the wearable computing device. The application controls one or more functions of a remote computing device. The processors determine to delegate a task associated with the application; delegate the task to be processed by a local computing device; and receive from the local computing device results from processing the delegated task. | 05-22-2014 |
20140149988 | METHOD FOR MANAGING THREADS AND ELECTRONIC DEVICE USING THE SAME METHOD - A method for managing threads and an electronic device using the method are provided. In the method, a current time is obtained. A time interval from now until the time at which the processor is next to wake up is calculated. The processor is released until the end of the time interval is reached. When the end of the time interval is reached or a first notice signal of the processor is received, a first newest time is obtained to update the current time, and the current time is logged as a basis time. For each of a plurality of registered threads among the threads, it is checked whether the current time satisfies that thread's predetermined time condition. When the current time satisfies the predetermined time condition of a first registered thread among the registered threads, the first registered thread is woken up. | 05-29-2014 |
20140149989 | APPARATUS AND METHOD FOR EXTRACTING RESTRICTION CONDITION - A restriction condition extraction apparatus specifies operation targets including a first target and another target related thereto that are operated based on first procedure information in a first execution environment, extracts first procedures related to the specified operation targets from the first procedure information, generates first relation information indicating an execution order on related operation targets regarding the extracted first procedures, specifies operation targets including a second target and another target related thereto that are operated based on second procedure information in a second execution environment, extracts second procedures related to the specified operation targets from the second procedure information, generates second relation information indicating an execution order on related operation targets regarding the extracted second procedures, compares the first and the second relation information, and extracts relations of an execution order on related operation targets, which are common in the first and the second relation information. | 05-29-2014 |
20140157277 | REDUCING POWER GRID NOISE IN A PROCESSOR WHILE MINIMIZING PERFORMANCE LOSS - In the management of a processor, logical operation activity is monitored for increases from a low level to a high level during a sampling window across multiple cores sharing a common supply rail, with at least one decoupling capacitor along the common supply rail. Responsive to detecting the increase in logical operation activity from the low level to the high level during the sampling window, the processor limits the logical operations executed on the cores during a lower activity period to a level of logical operations set between the low level and a medium level, where the medium level is an amount between the low level and the high level. Responsive to the lower activity period ending, the processor gradually decreases the limit on the logical operations to resume normal operations. | 06-05-2014 |
20140157278 | THREAD FOLDING TOOL - A computer-implemented method of performing runtime analysis on and control of a multithreaded computer program. One embodiment of the present invention can include identifying threads of a computer program to be analyzed. Under control of a supervisor thread, a plurality of the identified threads can be folded together to be executed as a folded thread. The execution of the folded thread can be monitored to determine a status of the identified threads. An indicator corresponding to the determined status of the identified threads can be presented in a user interface that is presented on a display. | 06-05-2014 |
20140157279 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD AND CONTROL PROGRAM STORAGE MEDIUM - Provided is an information processing apparatus. The information processing apparatus includes thread control means for, when starting each of a plurality of threads, giving an identifier to that thread, and, when each thread ends, notifying of the end along with the identifier; and data element control means for, in a case where a deletion thread which deletes a data element from list-structured data is being executed, maintaining the content of the deleted data element in an unmodifiable state until the ends of all the threads which started before the deletion processing by the deletion thread are confirmed by notifications of the end along with the identifier, and, in a case where the ends of all the threads which started before the deletion processing by the deletion thread have been notified of along with the identifier, putting the deleted data element into a reusable state. | 06-05-2014 |
20140165067 | Task Concurrency Limiter - In an exemplary embodiment, a method includes intercepting a first task sent from a task scheduler of a first computing system to a second computing system. A number of active tasks initiated by the task scheduler that are being performed by the second computing system is determined. The sending of the first task to the second computing system is delayed in response to a determination that the number of active tasks being performed by the second computing system is greater than or equal to a predetermined task limit associated with the second computing system. The first task is sent to the second computing system in response to a determination that the number of active tasks being performed by the second computing system is less than the predetermined task limit associated with the second computing system. | 06-12-2014 |
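The interception logic above can be sketched directly; the counting hooks and `send` callback are assumptions made for illustration.

```python
import queue
import threading

class ConcurrencyLimiter:
    """Holds tasks back while the downstream system is at its task limit."""
    def __init__(self, send, task_limit):
        self.send = send                 # forwards a task downstream
        self.limit = task_limit
        self.active = 0                  # tasks currently running downstream
        self.pending = queue.Queue()     # delayed tasks
        self.lock = threading.Lock()

    def intercept(self, task):
        with self.lock:
            if self.active >= self.limit:
                self.pending.put(task)   # delay: downstream is saturated
                return
            self.active += 1
        self.send(task)

    def on_task_done(self):
        task = None
        with self.lock:
            self.active -= 1
            if not self.pending.empty():
                task = self.pending.get()
                self.active += 1
        if task is not None:
            self.send(task)              # release one delayed task
```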
20140165068 | SCHEDULING EVENT STREAMS DEPENDING ON CONTENT INFORMATION DATA - Apparatus and method for scheduling event streams. The apparatus includes (i) an interface for receiving event streams which are placed in queues and (ii) a scheduler which selects at least one event stream for dispatch depending on sketched content information data of the received event streams. The scheduler includes a sketching engine for sketching the received event streams to determine content information data and a selection engine for selecting at least one received event stream for dispatch depending on the determined content information data of the received event streams. The method includes the steps of (i) determining content information data about the content of event streams and (ii) selecting at least one event stream from the event streams for dispatch depending on the content information data. A computer program, when run by a computer, causes the computer to perform the steps of the above method. | 06-12-2014 |
20140165069 | SYSTEM AND APPARATUS HAVING AN APPLICATION PROGRAMMING INTERFACE FOR FLEXIBLE CONTROL OF EXECUTION ULTRASOUND ACTIONS - Apparatus to control and execute ultrasound system actions includes an API that includes an API procedure, a processor coupled to the API, an adaptive scheduler, and memory. The adaptive scheduler includes a beamer to generate signals, a probe interface to transmit the signals to at least one probe unit and to receive signals from the at least one probe unit, and a receiver to receive and process the signals received from the probe interface. The memory stores instructions which, when executed, cause the processor to receive a task list including task actions. The processor may execute the API procedure to generate a scan specification, which is a data structure that includes the task list. The processor may execute the API procedure to identify at least one of: a probe required to perform the task actions, a beam required to perform the task actions and requirements and parameters associated with the beam, or a format of a beam firing result. Other embodiments are described. | 06-12-2014 |
20140173606 | STREAMING PROCESSING OF SHORT READ ALIGNMENT ALGORITHMS - A technique for executing alignment algorithms on a SIMT processing environment is disclosed. An alignment algorithm having multiple stages is executed within the SIMT environment such that a different thread group executes each stage of the algorithm. Each thread group performs a different set of alignment operations related to a different stage of alignment algorithm for a group of short reads. In such a manner, the thread groups operate in unison to perform all the operations related to each stage of the alignment algorithm on every short read in the group of short reads. | 06-19-2014 |
20140173607 | COMPUTING SYSTEM OPERATING ENVIRONMENTS - Techniques for optimizing an operation environment include receiving, from a first computing system, an optimization task at a second computing system; processing the optimization task in an initial optimization environment to obtain one or more initial optimization results; for each of the one or more initial optimization results, generating an optimization data record that comprises the optimization task, the initial optimization environment, and the initial optimization result; for each of the optimization data records: varying one or more parameters of the initial optimization environment to generate an updated optimization environment; processing the optimization task in the updated optimization environment to obtain an updated optimization result; storing the initial optimization results and updated optimization results in a repository that is part of or communicably coupled to the second computing system; and sorting the stored optimization results to determine one or more best optimization results of the stored optimization results. | 06-19-2014 |
20140173608 | APPARATUS AND METHOD FOR PREDICTING PERFORMANCE ATTRIBUTABLE TO PARALLELIZATION OF HARDWARE ACCELERATION DEVICES - Disclosed herein are an apparatus and method for predicting performance attributable to the parallelization of hardware acceleration devices. The apparatus includes a setting unit, an operation unit, and a prediction unit. The setting unit divides the time it takes to perform a task into a plurality of task stages and processing stages, and sets one of a parallelization index and target performance. The operation unit calculates the times it takes to perform the stages, and calculates at least one of the ratio of a target parallelization stage in the task and a speed improvement value. The prediction unit calculates an expected performance value or a parallelization index based on at least one of the calculated times it takes to perform the stages, the calculated ratio of the target parallelization stage, the calculated speed improvement value, and the set target performance. | 06-19-2014 |
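The expected-value calculation here resembles Amdahl's law; since the abstract does not give the exact formula, the following is an assumed illustration relating stage ratio, stage speedup, and overall performance.

```python
def expected_speedup(stage_ratio, stage_speedup):
    """stage_ratio: fraction of task time in the target parallelization stage;
    stage_speedup: speed improvement of that stage when parallelized."""
    return 1.0 / ((1.0 - stage_ratio) + stage_ratio / stage_speedup)

def required_stage_speedup(stage_ratio, target_speedup):
    """Inverted relation: the stage speedup needed to hit a target overall speedup."""
    return stage_ratio / (1.0 / target_speedup - (1.0 - stage_ratio))

print(expected_speedup(0.6, 4))            # ~1.82x overall for a 4x faster stage
print(required_stage_speedup(0.6, 1.82))   # ~4x, recovering the stage speedup
```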
20140173609 | TASK PROCESSING APPARATUS AND METHOD - The present invention discloses a task processing apparatus and a method, and belongs to the field of radio communications technologies. The method includes: obtaining, by a task processing apparatus, one or more configured tasks, and selecting a task to be scheduled from the one or more tasks; and processing the task to be scheduled according to control parameters of the task to be scheduled to obtain a processing result, outputting the processing result of the task to be scheduled, and, according to the control parameters of the task to be scheduled, scheduling a next-level task processing apparatus to process the task to be scheduled. In the present invention, the task processing apparatus selects the task to be scheduled from the one or more configured tasks, and then processes the task to be scheduled in real time according to the control parameters of the task to be scheduled. | 06-19-2014 |
20140181822 | Fragmented Channels - A system, method and a computer-readable medium for task scheduling using fragmented channels is provided. A plurality of fragmented channels are stored in memory accessible to a plurality of compute units. Each fragmented channel is associated with a particular compute unit. Each fragmented channel also stores a plurality of data items from tasks scheduled for processing on the associated compute unit and links to another fragmented channel in the plurality of fragmented channels. | 06-26-2014 |
20140181823 | PROXY QUEUE PAIR FOR OFFLOADING - A method for offloading includes a host channel adapter (HCA) receiving a first work request identifying a queue pair (QP), making a first determination that the QP is a proxy QP, and offloading the first work request to a proxy central processing unit (CPU) based on the first determination and based on the first work request satisfying a filter criterion. The HCA further receives a second work request identifying the QP, and processes the second work request without offloading based on the QP being a proxy QP and based on the second work request failing to satisfy the filter criterion. The HCA redirects a first completion for the first work request and a second completion for the second work request to the proxy CPU based on the first determination. The proxy CPU processes the first completion and the second completion in order. | 06-26-2014 |
20140181824 | QOS INBAND UPGRADE - Systems and methods for upgrading QoS levels of older transactions based on the presence of higher level QoS transactions in a given queue. A counter may be maintained to track the number of transactions in a queue that are assigned a corresponding QoS level. Each separate QoS level can have a corresponding counter. When a transaction is received by the queue, the counter corresponding to the QoS level of the transaction is incremented. When a transaction leaves the queue, the transaction is upgraded to the highest QoS level with a non-zero counter. Also, when the transaction leaves the queue, the counter corresponding to the original QoS level of the transaction is decremented. | 06-26-2014 |
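A sketch of the in-band upgrade: one counter per QoS level, decrement the departing transaction's original level, then upgrade it to the highest level still represented in the queue. Level names and the FIFO model are illustrative.

```python
from collections import deque

LEVELS = ["low", "medium", "high"]             # ascending priority

class QoSQueue:
    def __init__(self):
        self.q = deque()
        self.counts = {lvl: 0 for lvl in LEVELS}

    def enqueue(self, txn, level):
        self.q.append((txn, level))
        self.counts[level] += 1                # track per-level population

    def dequeue(self):
        txn, level = self.q.popleft()
        self.counts[level] -= 1                # decrement the original level
        for lvl in reversed(LEVELS):           # highest non-zero counter wins
            if self.counts[lvl] > 0:
                if LEVELS.index(lvl) > LEVELS.index(level):
                    level = lvl                # upgrade, never downgrade
                break
        return txn, level
```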
20140181825 | Assigning Jobs to Heterogeneous Processing Modules - A processing system is described which assigns jobs to heterogeneous processing modules. The processing system assigns jobs to the processing modules in a manner that attempts to accommodate the service demands of the jobs, but without advance knowledge of the service demands. In one case, the processing system implements the processing modules as computing units that have different physical characteristics. Alternatively, or in addition, the processing system may implement the processing modules as threads that are executed by computing units. Each thread which runs on a computing unit offers a level of performance that depends on a number of other threads that are simultaneously being executed by the same computing unit. | 06-26-2014 |
20140181826 | DYNAMIC EXECUTION LOG IN A DISTRIBUTED SYSTEM - Scheduling and dispatching jobs for a plurality of different entities. A method includes receiving at a work coordinator, one or more actions associated with a job. The method further includes storing in a log at the work coordinator, keyed on a job key, state for the one or more actions and a list of the one or more actions. The method further includes making calls to one or more worker processes to cause the worker process to perform actions associated with the job. As a result of making calls to one or more worker processes, the method further includes receiving at least one of a change to the list of remaining actions or the state. | 06-26-2014 |
20140189694 | MANAGING PERFORMANCE POLICIES BASED ON WORKLOAD SCALABILITY - Methods and systems may provide for identifying a workload associated with a platform and determining a scalability of the workload. Additionally, a performance policy of the platform may be managed based at least in part on the scalability of the workload. In one example, determining the scalability includes determining a ratio of productive cycles to actual cycles. | 07-03-2014 |
20140189695 | Methods for Packet Scheduling with Order in Software-Based Parallel Processing - A method for parallel processing implemented by a first core in a network unit, comprising locking an ingress queue if the ingress queue is not locked by another core, searching for an unlocked task queue from a first default subset of a plurality of task queues when the ingress queue is locked by another core, wherein the first default subset is different from a second default subset of the plurality of task queues from which a second core begins a search for an unlocked task queue, and searching a remainder of the plurality of task queues for an unlocked task queue when all of the first default subset of task queues are locked and the ingress queue is locked. | 07-03-2014 |
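A Python sketch of that search order follows, modeling each queue's lock with `threading.Lock`; the `default_subsets` mapping and the return convention are illustrative assumptions.

```python
import threading

def find_work(core_id, ingress_lock, task_queue_locks, default_subsets):
    """Try the ingress queue first; if it is held by another core, scan
    this core's default subset of task queues, then the remainder."""
    if ingress_lock.acquire(blocking=False):
        return "ingress"                       # caller drains, then releases

    mine = default_subsets[core_id]            # e.g. {0: [0, 1], 1: [2, 3]}
    rest = [q for q in range(len(task_queue_locks)) if q not in mine]
    for q in mine + rest:                      # default subset before the rest
        if task_queue_locks[q].acquire(blocking=False):
            return q
    return None                                # everything locked; retry later
```

Starting each core in a different default subset keeps the cores from contending on the same locks in the common case.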
20140189696 | FAILURE RATE BASED CONTROL OF PROCESSORS - A method of an aspect includes determining a different operational configuration for each of a plurality of different maximum failure rates. Each of the different maximum failure rates corresponds to a different task of a plurality of tasks. The method also includes enforcing a plurality of logic, each executing a different task of the plurality of tasks, to operate according to the corresponding determined operational configuration. Other methods, apparatus, and systems are also disclosed. | 07-03-2014 |
20140189697 | METHOD AND APPARATUS FOR MANAGING APPLICATION PROGRAM - A method for managing application programs is provided. The method includes determining whether a daemonic application program is running in an operating system; collecting a memory occupancy of the daemonic application program periodically within a time period upon finding one; determining whether the daemonic application program is in a long-term standby status according to its memory occupancy; and closing the daemonic application program when it is in a long-term standby status. An application program management system is also provided. | 07-03-2014 |
20140196044 | SYSTEM AND METHOD FOR INCREASING THROUGHPUT OF A PaaS SYSTEM - Systems and methods are disclosed for managing the throughput of a platform as a service (PaaS) system. A plurality of PaaS nodes receives deployment jobs, such as from an interface by way of a load balancer. The PaaS nodes extract deployment actions and an action count and post the deployment actions to a queue. The PaaS nodes also initiate, in a coordinator, a counter for the deployment job. The PaaS nodes retrieve deployment actions from the queue and execute them, such as in one of a plurality of threads in a flexible thread pool. Upon completing an action, the PaaS nodes update the counter corresponding to that action's deployment job. When a counter for a deployment job reaches the action count for the job, completion is reported. | 07-10-2014 |
20140196045 | PROCESSOR AND PROGRAM EXECUTION METHOD CAPABLE OF EFFICIENT PROGRAM EXECUTION - A processor executes a plurality of tasks by switching a timeslot and iterating a plurality of timeslots. The processor includes a table in which tasks are defined in correspondence with timeslots. In the table, the number of timeslots to be held in one iteration is defined; for each of the timeslots, a total time period during a predetermined number of iterations is designated; and a plurality of tasks are defined in correspondence with at least one of the timeslots. A timeslot is switched every time a predetermined period elapses. One task is selected and executed by referring to the table in correspondence with the switching of timeslots. | 07-10-2014 |
20140201748 | TELEMATICS CONTROL UTILIZING RELATIONAL FORMULAS - A telematics unit is provided. The telematics unit includes a non-transitory computer-readable medium and a processor, configured to execute a telematics task manager application according to processor-executable instructions stored on the non-transitory computer-readable medium. The telematics task manager application comprises a plurality of tasks. Each task includes one or more subtasks. Each subtask includes subtask-specific triggering logic for execution of one or more actions based on that logic. The triggering logic is based on modular condition blocks. | 07-17-2014 |
20140201749 | USING CROWDSOURCING TO IMPROVE SENTIMENT ANALYTICS - A method and computer for managing analysis of sentiment are disclosed. A computer retrieves data used to perform the analysis of sentiment. The computer analyzes the data and the analysis of sentiment to determine if a gap exists requiring further processing to improve the analysis of sentiment. Responsive to a determination that the gap exists requiring further processing to improve the analysis of sentiment, the computer generates a task to address the gap. The computer then uses crowdsourcing to submit the generated task for processing. | 07-17-2014 |
20140208325 | SYSTEMS AND METHODS FOR MANAGING TASKS - Systems and methods for creating and sharing tasks over one or more networks are disclosed. In one embodiment, a system comprises a message retrieval module configured to retrieve electronic messages and parse them into a plurality of tasks. The system can also include a task creation module configured to process the message to identify task information and one or more task recipients. The task creation module can also be configured to create a task based on the identified task information. A task notification module can be configured to notify the one or more task recipients about the created task. The system may also include a multi-layer network management module configured to organize the tasks and task participants into multiple networks and clouds and into a federation of clouds. The system can also include a task analytics module programmed to analyze the tasks performed by users of the system. | 07-24-2014 |
20140208326 | FILE PRESENTING METHOD AND APPARATUS FOR A SMART TERMINAL - A file presenting method and apparatus for a smart terminal are provided. The method includes determining, by a user interface thread, whether a thumbnail of a file is to be presented according to the type of the file, and setting loading information of the file in a loading queue if it is; acquiring, by a loading thread, the loading information from the loading queue, determining whether a cache of the smart terminal stores the thumbnail of the file, generating the thumbnail of the file in accordance with the loading information and storing the generated thumbnail to the cache if it does not, and acquiring the thumbnail of the file from the cache if it does; and presenting the thumbnail of the file as an icon of the file. | 07-24-2014 |
20140215470 | Parallel Processing with Proactive Solidarity Cells - A method and apparatus for processing information in parallel uses autonomous computer processing cells to perform tasks needed by a central processing unit. Each cell in the system is connected through a switching fabric, which facilitates connections for data transfer and arbitration between all system resources. A cell has an agent, which is a software module that may be transferred through the switching fabric to a task pool containing the tasks. The agent searches within the task pool for available tasks that match the cell's instruction type. A task may be broken into threads that are to be executed sequentially or independently depending on recipes constructed by the central processing unit. Interdependent tasks within the task pool may be logically combined as needed by the recipe. A notification is sent from the task pool to the central processing unit when a task or task thread is completed. | 07-31-2014 |
20140215471 | CREATING A MODEL RELATING TO EXECUTION OF A JOB ON PLATFORMS - At least one benchmark is determined. The at least one benchmark is run on first and second computing platforms to generate platform profiles. Based on the generated platform profiles, a model is generated that characterizes a relationship between a MapReduce job executing on the first platform and the MapReduce job executing on the second platform, wherein the MapReduce job includes map tasks and reduce tasks. | 07-31-2014 |
20140215472 | TASK MANAGEMENT - An example of task management can include receiving a message between a sending user and a receiving user. The message can be analyzed to determine whether the message includes a requested task. A parameter of the requested task can be extracted from the message based on directives in the message. An update to a status of the task can be sent to the sending user and the receiving user. | 07-31-2014 |
20140215473 | OBJECTIVES OF OPERATIONS EXECUTING ACROSS ENVIRONMENTS - Disclosed herein are techniques for managing operations. A distribution of operations across a plurality of execution environments is determined in order to achieve a performance objective. Another distribution of the operations is determined if the status of the execution environments renders the first distribution suboptimal or incapable of achieving the performance objective. | 07-31-2014 |
20140215474 | IMPLEMENTING A WORKFLOW ON DATA ITEMS - A workflow container is established that is associated with a profile, where the profile specifies a set of actions. A set of data items can be received in the workflow container, and multiple actions can be performed on the set of data items based on the profile. In implementing the workflow, one example provides for a first set of operations to be performed for a given data item in parallel with a second set of operations. The first set of operations communicates the given data item to at least a first destination, and the second set of operations communicates the given data item to at least a second destination. | 07-31-2014 |
20140215475 | SYSTEM AND METHOD FOR SUPPORTING WORK SHARING MUXING IN A CLUSTER - A system and method can provide efficient low-latency muxing between servers in a cluster. One such system can include a cluster of one or more high performance computing systems, each including one or more processors and a high performance memory. The cluster communicates over an InfiniBand network. The system can also include a middleware environment, executing on the cluster, which includes one or more application server instances. The system can include one or more selectors, wherein each said selector contains a queue of read-ready file descriptors. Furthermore, the system can include a shared queue, wherein the read-ready file descriptors in each said selector can be emptied into the shared queue. Additionally, a plurality of muxer threads operates to take work from said shared queue. | 07-31-2014 |
20140215476 | APPARATUS AND METHOD FOR SHARING FUNCTION LOGIC BETWEEN FUNCTIONAL UNITS, AND RECONFIGURABLE PROCESSOR THEREOF - An apparatus and method for sharing a function logic between functional units and a reconfigurable processor are provided. The apparatus for sharing a function logic may include a storage which is configured to store data which is received from two or more functional units in order to share one or more function logics, and an arbitrator which is configured, based on a scheduling rule, to transmit the data stored in the storage to the function logic. | 07-31-2014 |
20140215477 | REALIZING GRAPH PROCESSING BASED ON THE MAPREDUCE ARCHITECTURE - A method and device for realizing graph processing based on the MapReduce architecture are disclosed in the invention. The method includes the steps of: receiving an input file of a graph processing job; predicting a MapReduce task execution time distribution of the graph processing job using an obtained MapReduce task degree-execution time relationship distribution and a degree distribution of the graph processing job; and dividing the input file of the graph processing job into input data splits of MapReduce tasks according to the predicted MapReduce task execution time distribution of the graph processing job. | 07-31-2014 |
20140215478 | WORK MIGRATION IN A PROCESSOR - A packet processor provides for rule matching of packets in a network architecture. The packet processor includes a lookup cluster complex having a number of lookup engines and respective on-chip memory units. The on-chip memory stores rules for matching against packet data. Each of the lookup engines receives a key request associated with a packet and determines a subset of the rules to match against the packet data. A work product may be migrated between lookup engines to complete the rule matching process. As a result of the rule matching, the lookup engine returns a response message indicating whether a match is found. | 07-31-2014 |
20140223436 | METHOD, APPARATUS, AND SYSTEM FOR PROVIDING AND USING A SCHEDULING DELTA QUEUE - A contact center is described along with various methods and mechanisms for administering the same. Work assignment methods are disclosed that place tasks in bins by time intervals, the tasks being processed within a delta queue ring buffer. The delta queue ring buffer can assign the tasks by seconds and order tasks by interval for efficient handling, and then loop around to use the same bins. By using fixed intervals and a moving queue pointer, the scheduling delta queue solution allows for fast selection of the queue to insert into and fast processing of the queues on timeout. The scheduling delta queue solution allows for the processing of one million or more tasks, with memory as the only constraint. | 08-07-2014 |
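Here is a minimal Python sketch of a delta queue ring keyed by one-second intervals. The bin count and granularity are assumptions, and delays longer than the ring length are not handled.

```python
class DeltaQueueRing:
    """Tasks are binned by whole-second offset from a moving pointer;
    each drained bin is immediately reused for a later interval."""

    def __init__(self, num_bins=64):
        self.bins = [[] for _ in range(num_bins)]
        self.pointer = 0                       # bin for the current second

    def schedule(self, task, delay_seconds):
        # Assumes delay_seconds < num_bins; longer delays would wrap.
        slot = (self.pointer + delay_seconds) % len(self.bins)
        self.bins[slot].append(task)           # O(1) insert by interval

    def tick(self):
        """Advance one second: drain the current bin, then reuse it."""
        due, self.bins[self.pointer] = self.bins[self.pointer], []
        self.pointer = (self.pointer + 1) % len(self.bins)
        return due
```

The moving pointer is what makes the structure a ring rather than a calendar: insertion and timeout processing both touch a single bin, regardless of how many tasks are queued.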
20140223437 | METHOD AND ELECTRONIC DEVICE FOR THREAD SCHEDULING - A method for performing thread scheduling in an electronic device having a hardware processor configured for executing an operating system is provided. The operating system includes a thread scheduler and a queue manager. The method includes the following steps. In response to one of a plurality of predefined conditions being met, enable a virtual manager executed by the hardware processor. Receive a request by the thread scheduler for scheduling a thread to be executed. Mask the scheduler by the virtual manager from accessing a first queue including a plurality of first threads in a runnable state. Direct the scheduler to a first virtual queue including a first portion of the plurality of first threads in the first queue for selecting the thread to be executed. The first portion of the first threads is associated with at least one currently running application. Schedule execution of the selected thread by the hardware processor. | 08-07-2014 |
20140223438 | METHOD FOR CONTROLLING TERMINAL - There is disclosed a method for controlling a terminal that includes starting to record tasks when a plurality of tasks are implemented sequentially, creating a task list by recording the tasks, and ending the recording of the tasks. The task list comprises an interrupt task configured to pause the task implementation and to allow the next task to be implemented when there is an additional input. In this way, a task list composed of the series of tasks the user performs frequently may be created and implemented automatically, so that frequently used functions are performed quickly. | 08-07-2014 |
20140223439 | SUPERSCALAR CONTROL FOR A PROBABILITY COMPUTER - A method of executing operations in parallel in a probability processing system includes providing a probability processor for executing said operations; and providing a scheduler for identifying, from said operations, those operations that can be executed in parallel. Providing the scheduler includes compiling code written in a probability programming language that includes both modeling instructions and instructions for scheduling. | 08-07-2014 |
20140223440 | METHODS AND SYSTEMS FOR DETERMINISTIC AND MULTITHREADED SCRIPT OBJECTS AND SCRIPT ENGINE - A computing device is configured to execute a first instance of a single-threaded script engine in a first thread associated with a first execution context, wherein the first instance of the single-threaded script engine accesses at least one shared script object through a first reference counted script base value object. The computing device is also configured to concurrently execute a second instance of the single-threaded script engine in a second thread associated with a second execution context, wherein the second instance of the single-threaded script engine accesses the at least one shared script object through a second reference counted script base value object. The script engine does not switch between the execution contexts. | 08-07-2014 |
20140223441 | SYSTEMS AND METHODS FOR PROVIDING SAFE CONFLUENCE MODALITY - A system and method for providing a safe confluence modality in a mobile computing device are provided. The system and method include determining an application switch between a primary application and a secondary application; identifying the application switch as a Non-User Triggered Application (NUTA) switch based on the primary application and the secondary application, the NUTA switch corresponding to an application switch initiated by a non-user of the mobile computing device; and capturing a user interaction provided after the NUTA switch. | 08-07-2014 |
20140223442 | Tracking Memory Accesses to Optimize Processor Task Placement - Implementations for tracking memory accesses to optimize processor task placement are disclosed. A method includes creating a page table (PT) hierarchy associated with a thread, wherein the PT hierarchy comprises identifying information of memory pages and access bits corresponding to each of the memory pages; setting the respective access bit of one or more of the memory pages accessed by the thread while the thread is executing; collecting access bit information from the PT hierarchy associated with the thread, wherein the access bit information comprises the set access bits in the PT hierarchy; determining, in view of the collected access bit information, memory access statistics for the thread; and utilizing, during runtime of the thread, the memory access statistics for the thread in a determination of whether to migrate the thread to another processing device during the runtime of the thread. | 08-07-2014 |
20140237474 | SYSTEMS AND METHODS FOR ORGANIZING DEPENDENT AND SEQUENTIAL SOFTWARE THREADS - Systems and methods are provided for organizing dependent and sequential software threads running as multiple threads of execution on a computing device, in order to improve performance and reduce the complexity of thread management. Computing tasks, or jobs, are organized into job wrappers for ordered execution. In response to receiving a request to create a job wrapper, the computing device initializes the job wrapper; initializes a shared data table having a plurality of variables that can be accessed by the software threads that comprise the job wrapper; sets a first variable in the plurality of variables to assign a dependency of one software thread to another software thread; and finally executes the job wrapper. | 08-21-2014 |
20140237475 | SLEEP/WAKE WITH SUPPRESSION AND DONATED IMPORTANCE - A method and apparatus of a device that manages processes upon the device entering and waking from sleep mode are described. In an exemplary embodiment, the device receives a signal to wake up the device from the sleep mode. The sleep mode includes a plurality of processes that were executing prior to the device being put into sleep mode, and the plurality of processes includes a suppressed process and an unsuppressed process. For each of the processes, the device resumes execution of that process if that process is an unsuppressed process and defers execution of the process if that process is a suppressed process. | 08-21-2014 |
20140245307 | Application and Situation-Aware Community Sensing - Techniques, systems, and articles of manufacture for application and situation-aware community sensing. A method includes processing one or more sensor data requirements for each of multiple sensing applications and one or more user preferences for sensing, determining a sensing strategy for multiple sensors corresponding to the multiple sensing applications based on the one or more sensor data requirements and the one or more user preferences for sensing, wherein said sensing strategy comprises logic for executing a sensing task, and scheduling a sensor duty cycle and a sampling frequency for each of the multiple sensors based on the sensing strategy needed to execute the sensing task. | 08-28-2014 |
20140245308 | SYSTEM AND METHOD FOR SCHEDULING JOBS IN A MULTI-CORE PROCESSOR - A multi-core processor comprises a plurality of processor cores to process jobs and a multicore navigator coupled to the plurality of processor cores to evaluate a job for atomicity and, based on determining the job to have atomicity, to determine whether there is an atomic wait queue associated with the job's atomicity. Based on there being an atomic wait queue associated with the job's atomicity, the multicore navigator is to push the job to the atomic wait queue. | 08-28-2014 |
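As a rough illustration of the wait-queue check, this Python sketch serializes jobs that share a hypothetical `atomicity_key` through a per-key wait queue, while jobs without one dispatch immediately; all names are assumptions for the example.

```python
from collections import defaultdict, deque

class MulticoreNavigator:
    """Per-key wait queues: at most one job per atomicity key runs."""

    def __init__(self):
        self.wait_queues = defaultdict(deque)  # atomicity key -> jobs

    def submit(self, job, dispatch):
        key = getattr(job, "atomicity_key", None)
        if key is None:
            dispatch(job)                      # non-atomic: run immediately
            return
        queue = self.wait_queues[key]
        queue.append(job)                      # push to the atomic wait queue
        if len(queue) == 1:
            dispatch(job)                      # no peer in flight: start now

    def job_done(self, job, dispatch):
        key = getattr(job, "atomicity_key", None)
        if key is None:
            return
        queue = self.wait_queues[key]
        queue.popleft()                        # retire the finished job
        if queue:
            dispatch(queue[0])                 # start the next waiter
```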
20140245309 | SYSTEM AND METHOD FOR TRANSFORMING A QUEUE FROM NON-BLOCKING TO BLOCKING - A system and method can use continuation-passing to transform a queue from non-blocking to blocking. The non-blocking queue can maintain one or more idle workers in a thread pool that is not accessible from outside of the non-blocking queue. The continuation-passing can eliminate one or more serialization points in the non-blocking queue, and allows a caller to manage the one or more idle workers in the thread pool from outside of the non-blocking queue. | 08-28-2014 |
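The following single-threaded Python sketch illustrates the continuation-passing control flow: `take` never blocks inside the queue; it hands the queue a continuation that `put` invokes when an item arrives, leaving idle-worker management to the caller. A real implementation would need atomic operations; this shows only the shape of the idea.

```python
from collections import deque

class ContinuationQueue:
    """Non-blocking queue that resumes parked continuations on put()."""

    def __init__(self):
        self.items = deque()
        self.waiting = deque()                 # continuations of idle workers

    def put(self, item):
        if self.waiting:
            self.waiting.popleft()(item)       # hand item straight to a worker
        else:
            self.items.append(item)

    def take(self, continuation):
        if self.items:
            continuation(self.items.popleft())
        else:
            self.waiting.append(continuation)  # park outside the queue's control

q = ContinuationQueue()
q.take(lambda item: print("worker got", item))  # parks: queue is empty
q.put("job-1")                                  # resumes the parked worker
```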
20140245310 | METHOD OF PERFORMING TASKS ON A PRODUCTION COMPUTER SYSTEM AND DATA PROCESSING SYSTEM - A method of performing tasks on a production computer includes retrieving a task description file stored on a task computer and containing a description of a task on a production computer; transferring the task description file from the task computer to the production computer; causing the production computer to check that the file is associated with at least one task stored on the production computer; and, if the association check was successful, performing the task associated with the file in the production computer using the file. The task computer has open ports while the production computer keeps its ports closed, so that a user of a first user group may access the task computer but is prevented from accessing the production computer while the above steps are performed in a predetermined operating state of the production computer. | 08-28-2014 |
20140259016 | SYSTEM AND METHOD FOR RUNTIME SCHEDULING OF GPU TASKS - A method for scheduling work for processing by a GPU is disclosed. The method includes accessing a work completion data structure and accessing a work tracking data structure. Dependency logic analysis is then performed using work completion data and work tracking data. Work items that have dependencies are then launched into the GPU by using a software work item launch interface. | 09-11-2014 |
20140259017 | COMPUTING SYSTEM WITH CONTEXTUAL INTERACTION MECHANISM AND METHOD OF OPERATION THEREOF - A method of operation of a computing system includes: determining a context for performing a user-initiated action; determining an operational order based on the context for performing the user-initiated action; and generating an application order based on the operational order for implementing an execution file and a further executable file according to the application order to perform the user-initiated action through displaying on a device. | 09-11-2014 |
20140259018 | Backoff Job Queue Polling Mechanism - A backoff polling algorithm may use a minimum polling interval which represents an amount of time between repeated polls of a job step queue. When polled, the job step queue may indicate a number of job steps scheduled to execute currently. Additionally, the backoff polling algorithm may repeatedly poll the job step queue at the current polling interval and execute any job steps indicated until the step queue indicates that the number of job steps scheduled to execute currently is below a minimum threshold. While the indicated number of job steps is below the minimum threshold, the backoff polling algorithm may repeatedly increase the polling interval up to a predetermined maximum polling interval and poll at each increased interval until the indicated number of job steps is above the minimum threshold. The backoff polling algorithm may then decrease the polling interval to the minimum polling interval. | 09-11-2014 |
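The polling loop above is concrete enough to sketch. Below is a minimal Python rendition; the interval bounds and threshold are illustrative assumptions, and `poll_queue` and `execute` are hypothetical callables standing in for the job step queue interface.

```python
import time

def backoff_poll(poll_queue, execute, min_interval=0.1,
                 max_interval=5.0, min_threshold=4):
    """Poll the job step queue, backing off while it is quiet.

    poll_queue() returns how many job steps are scheduled to execute
    currently; execute() runs them. All numeric defaults are
    assumptions, not values from the abstract.
    """
    interval = min_interval
    while True:
        pending = poll_queue()
        if pending:
            execute()                          # run whatever steps are due
        if pending >= min_threshold:
            interval = min_interval            # busy: snap back to fast polling
        else:
            # Quiet: lengthen the interval, capped at the maximum.
            interval = min(interval * 2, max_interval)
        time.sleep(interval)
```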
20140259019 | TEN LEVEL ENTERPRISE ARCHITECTURE HIERARCHICAL EXTENSIONS - The HIERARCHICAL EXTENSIONS enhance the TEN-LEVEL ENTERPRISE ARCHITECTURE SYSTEMS AND TOOLS by empowering enterprises to construct, standardize, execute, measure and improve execution across any level of enterprise activity. This continuation in process includes: | 09-11-2014 |
20140259020 | SCHEDULER AND SCHEDULING METHOD FOR RECONFIGURABLE ARCHITECTURE - A scheduler and scheduling method perform scheduling for a reconfigurable architecture. The scheduling, performed by the scheduler, includes path information extraction, which extracts direct path information and indirect path information between functional units in a reconfigurable array complying with predefined architecture requirements, based on architecture information of the reconfigurable array; command selection, which selects a command from a data flow graph (DFG) showing commands to be executed by the reconfigurable array; and scheduling, which schedules the selected command based on the extracted direct path information and indirect path information. | 09-11-2014 |
20140282557 | Responding To A Timeout Of A Message In A Parallel Computer - Methods, apparatuses, and computer program products for responding to a timeout of a message in a parallel computer are provided. The parallel computer includes a plurality of compute nodes operatively coupled for data communications over one or more data communications networks. Each compute node includes one or more tasks. Embodiments include a first task on a first node sending a message to a second task on a second node. Embodiments also include the first task sending to the second node a command via a parallel operating environment (POE) in response to a timeout of the message. The command instructs the second node to perform a timeout motivated operation. | 09-18-2014 |
20140282558 | SERIALIZING WRAPPING TRACE BUFFER VIA A COMPARE-AND-SWAP INSTRUCTION - Embodiments of the disclosure serialize wrapping of a circularly wrapping trace buffer via a compare-and-swap (CS) instruction by a method including executing a CS loop to advance to a location in the buffer indicated by a next free pointer. The method also includes incrementing a master wrap sequence number each time the next free pointer returns to a top of the buffer and executing another CS loop to increment a wrap number stored in a trace block corresponding to the location indicated by the next free pointer. Based upon determining that the wrap number stored in the trace block is one less than, or equal to, the master wrap sequence number, the method includes reserving space in a buffer associated with the trace block, storing the wrap number stored in the trace block as an old wrap number, and incrementing a use-count of the trace block. | 09-18-2014 |
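Python has no compare-and-swap instruction, so the sketch below stands in a toy CAS cell for the CS instruction and shows only the pointer-advance and master-wrap-increment steps; all names are illustrative.

```python
import threading

class CasCell:
    """Toy compare-and-swap cell (software stand-in for the CS instruction)."""
    def __init__(self, value=0):
        self.value, self._lock = value, threading.Lock()

    def cas(self, expected, new):
        with self._lock:
            if self.value == expected:
                self.value = new
                return True
            return False

def claim_trace_slot(next_free, master_wrap, size):
    """CS loop: advance the next-free pointer; bump the master wrap
    sequence number whenever the pointer returns to the top."""
    while True:
        current = next_free.value
        if next_free.cas(current, (current + 1) % size):
            if (current + 1) % size == 0:      # wrapped back to the top
                while True:
                    wrap = master_wrap.value
                    if master_wrap.cas(wrap, wrap + 1):
                        break
            return current                     # index of the claimed slot
```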
20140282559 | COMPUTING SYSTEM WITH TASK TRANSFER MECHANISM AND METHOD OF OPERATION THEREOF - A computing system includes: a status module configured to determine a process profile for capturing a pause point in processing a task; a content module, coupled to the status module, configured to identify a process content for capturing the pause point; an upload module, coupled to the content module, configured to store the process profile and the process content; and a trigger synthesis module, coupled to the upload module, configured to generate a resumption-trigger with a control unit when storing the process profile and the process content for resuming the task from the pause point and for displaying on a device. | 09-18-2014 |
20140282560 | Mapping Network Applications to a Hybrid Programmable Many-Core Device - A hybrid programmable logic device is described that performs packet processing functions on received data packets using programmable logic elements and processors interleaved with the programmable logic elements. The header data may be scheduled for distribution to processing threads associated with the processors by the programmable logic elements. The processors may perform packet processing functions on the header data using both the processing threads and hardware acceleration functions provided by the programmable logic elements. | 09-18-2014 |
20140282561 | COMPUTER SYSTEMS AND METHODS WITH RESOURCE TRANSFER HINT INSTRUCTION - A processing system includes a processor configured to execute a plurality of instructions corresponding to a task, wherein the plurality of instructions comprises a resource transfer instruction to indicate a transfer of processing operations of the task from the processor to a different resource and a hint instruction which precedes the resource transfer instruction by a set of instructions within the plurality of instructions. A processor task scheduler is configured to schedule tasks to the processor, wherein, in response to execution of the hint instruction of the task, the processor task scheduler finalizes selection of a next task and loads a context of the selected next task into a background register file. The loading occurs concurrently with execution of the set of instructions between the hint instruction and resource transfer instruction, and, after loading is completed, the processor switches to the selected task in response to the resource transfer instruction. | 09-18-2014 |
20140282562 | FAST AND SCALABLE CONCURRENT QUEUING SYSTEM - This disclosure is directed to a fast and scalable concurrent queuing system. A device may comprise, for example, at least a memory module and a processing module. The memory module may be to store a queue comprising at least a head and a tail. The processing module may be to execute at least one thread desiring to enqueue at least one new node to the queue, enqueue the at least one new node to the queue, a first state being observed based on information in the tail identifying a predecessor node when the at least one new node is enqueued, observe a second state based on the predecessor node, determine if the predecessor node has changed based on comparing the first state to the second state, and set ordering in the queue based on the determination. | 09-18-2014 |
20140282563 | DEPLOYING PARALLEL DATA INTEGRATION APPLICATIONS TO DISTRIBUTED COMPUTING ENVIRONMENTS - System, method, and computer program product to process parallel computing tasks on a distributed computing system, by computing an execution plan for a parallel computing job to be executed on the distributed computing system, the distributed computing system comprising a plurality of compute nodes, generating, based on the execution plan, an ordered set of tasks, the ordered set of tasks comprising: (i) configuration tasks, and (ii) execution tasks for executing the parallel computing job on the distributed computing system, and launching a distributed computing application to assign the tasks of the ordered set of tasks to the plurality of compute nodes to execute the parallel computing job on the distributed computing system. | 09-18-2014 |
20140282564 | THREAD-SUSPENDING EXECUTION BARRIER - An energy-efficient execution barrier for parallel processing is provided. The execution barrier associates a thread-execution bit with each hardware-supported thread. The energy-efficient execution barrier utilizes a per-processor or per-chip bit vector register, having, for example, one bit per possible thread. A bit enables or disables the execution of its corresponding thread. A process starts by forking threads and enabling them in the bit vector register. When a thread arrives at the barrier/rendezvous, the thread disables its own bit and therefore suspends thread execution. When a distinguished thread arrives at the barrier, it waits (e.g., spinlocks) until all the threads needed for the rendezvous are disabled. The distinguished thread (or an automatic thread re-enable mechanism) then atomically sets all thread bits in the bit vector register to enabled, and the threads perform any appropriate sync operations and continue. | 09-18-2014 |
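The bit-vector protocol can be mimicked in software. This sketch models the per-chip register as an integer mask guarded by a condition variable; real hardware would suspend and resume the threads directly, and all names are illustrative.

```python
import threading

class BitVectorBarrier:
    """Software model of the thread-suspending barrier: one bit per
    thread, cleared on arrival, set again by the distinguished thread."""

    def __init__(self, num_threads):
        self.num_threads = num_threads
        self.mask = (1 << num_threads) - 1     # all worker bits enabled
        self.cond = threading.Condition()

    def arrive(self, thread_id):
        """Worker: disable own bit ('suspend'), wait to be re-enabled."""
        with self.cond:
            self.mask &= ~(1 << thread_id)
            self.cond.notify_all()             # let the distinguished thread check
            self.cond.wait_for(lambda: self.mask & (1 << thread_id))

    def rendezvous(self):
        """Distinguished thread: wait until every bit is disabled, then
        atomically re-enable all threads at once."""
        with self.cond:
            self.cond.wait_for(lambda: self.mask == 0)
            self.mask = (1 << self.num_threads) - 1
            self.cond.notify_all()             # release every parked worker
```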
20140282565 | Processor Scheduling With Thread Performance Estimation On Core Of Different Type - A processor is described having an out-of-order core to execute a first thread and a non-out-of-order core to execute a second thread. The processor also includes statistics collection circuitry to support calculation of the following: the first thread's performance on the out-of-order core; an estimate of the first thread's performance on the non-out-of-order core; the second thread's performance on the non-out-of-order core; an estimate of the second thread's performance on the out-of-order core. | 09-18-2014 |
20140282566 | SYSTEM AND METHOD FOR HARDWARE SCHEDULING OF INDEXED BARRIERS - A method and a system are provided for hardware scheduling of indexed barrier instructions. Execution of a plurality of threads to process instructions of a program that includes a barrier instruction is initiated and when each thread reaches the barrier instruction, the thread pauses execution of the instructions. A first sub-group of threads in the plurality of threads is associated with a first sub-barrier index and a second sub-group of threads in the plurality of threads is associated with a second sub-barrier index. When the barrier instruction can be scheduled for execution, threads in the first sub-group are executed serially and threads in the second sub-group are executed serially and at least one thread in the first sub-group is executed in parallel with at least one thread in the second sub-group. | 09-18-2014 |
20140282567 | TASK SCHEDULING BASED ON USER INTERACTION - Provided herein are systems, methods, and software for implementing information management applications. In an implementation, at least a portion of an information management application is embodied in program instructions that include various task modules and a scheduler module. In some implementations the program instructions are written in accordance with a single threaded programming language, such as JavaScript or any other suitable single threaded language. When executed, each task module returns control to the scheduler module upon completing. The scheduler module identifies to which of the plurality of task modules to grant control based at least in part on a relevance of each task module to a user interaction. | 09-18-2014 |
20140282568 | Dynamic Library Replacement - Provided are techniques for an OS to be modified on a running system such that running programs, including system services, do not have to be stopped and restarted for the modification to take effect. The techniques include detecting, by a processing thread, when the processing thread has entered a shared library; in response to the detecting, setting a thread flag corresponding to the thread in an operating system (OS); detecting an OS flag, set by the OS, indicating that the OS is updating the shared library; in response to detecting the OS flag, suspending processing by the processing thread and transferring control from the thread to the OS; resuming processing by the processing thread in response to detecting that the OS has completed the updating; and executing the shared library in response to the resuming. | 09-18-2014 |
20140282569 | INFORMATION PROCESSING DEVICE, NETWORK SYSTEM, PROCESSING EXECUTION METHOD, AND PROCESSING EXECUTION COMPUTER PROGRAM PRODUCT - An information processing device includes: a reception unit that receives a workflow definition specifying processing; a rule acquisition unit that acquires, regarding the processing, a workflow rule capable of setting therein a parameter indicating which processing is to be executed; a setting unit that sets the parameter of the workflow rule based on the workflow definition; and an execution control unit that controls execution of the processing in accordance with the workflow rule in which the parameter is set. | 09-18-2014 |
20140282570 | DYNAMIC CONSTRUCTION AND MANAGEMENT OF TASK PIPELINES - A system and method are disclosed for managing the execution of tasks. Each task in a first set of tasks included in a pipeline is queued for parallel execution. The execution of the tasks is monitored by a dispatching engine. When a particular task that specifies a next set of tasks in the pipeline to be executed has completed, the dispatching engine determines whether the next set of tasks can be executed before the remaining tasks in the first set of tasks have completed. When the next set of tasks can be executed before the remaining tasks have completed, the next set of tasks is queued for parallel execution. When the next set of tasks cannot be executed before the remaining tasks have completed, the next set of tasks is queued for parallel execution only after the remaining tasks have completed. | 09-18-2014 |
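A rough Python sketch of that dispatching rule follows, using a thread pool; the `next_set` and `can_overlap` result fields are hypothetical stand-ins for the pipeline's task specification and for the "can be executed before the remaining tasks complete" determination.

```python
from concurrent.futures import (ThreadPoolExecutor, wait,
                                FIRST_COMPLETED)

def run_pipeline(first_set, pool):
    """Queue a set of tasks in parallel; when a finished task names a
    next set, start it immediately if it may overlap the remaining
    siblings, otherwise only after the siblings complete."""
    futures = {pool.submit(task) for task in first_set}
    while futures:
        done, futures = wait(futures, return_when=FIRST_COMPLETED)
        for fut in done:
            result = fut.result()  # e.g. {"next_set": [...], "can_overlap": bool}
            if result and result.get("next_set"):
                if not result.get("can_overlap"):
                    wait(futures)              # drain remaining siblings first
                    futures = set()            # (their next sets omitted for brevity)
                futures |= {pool.submit(t) for t in result["next_set"]}
```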
20140282571 | MANAGING WORKFLOW APPROVAL - A method, computer program product, and system is described. A target completion date for approval of a content item is identified. One or more approvers associated with a sequence of approval for the content item are identified. A recommended completion date for the content item is determined based upon, at least in part, historical workflow data. Whether timely completion of the approval of the content item is likely is determined based upon, at least in part, comparing the target completion date with the recommended completion date. | 09-18-2014 |
20140298343 | METHOD AND SYSTEM FOR SCHEDULING ALLOCATION OF TASKS - A method and system for scheduling allocation of a plurality of tasks to a service platform is disclosed. The method includes allocating a current batch of tasks from the plurality of tasks to the service platform based on an optimization model. The method further includes updating the optimization model after at least one of an expiry of a predefined time interval or receiving the responses for the current batch of tasks. | 10-02-2014 |
20140304708 | DISTRIBUTED APPLICATION EXECUTION IN A HETEROGENEOUS PROCESSING SYSTEM - A method for distributing execution of a computer program to a plurality of hardware architectures of different types including: analyzing the computer program to identify a plurality of execution boundaries; selecting one or more execution boundaries from the plurality of execution boundaries; linking the computer program to the selected one or more execution boundaries; executing the computer program with linked execution boundaries; saving a hardware agnostic state of the execution of the computer program, when the execution encounters a boundary from the selected one or more execution boundaries; and transmitting the hardware agnostic state to a remote hardware architecture to be executed on the remote hardware architecture, responsive to the hardware agnostic state. | 10-09-2014 |
20140304709 | Hardware Assisted Method and System for Scheduling Time Critical Tasks - A method and system for scheduling a time critical task are provided. The system may include a processing unit, a hardware assist scheduler, and a memory coupled to both the processing unit and the hardware assist scheduler. The method may include receiving timing information for executing the time critical task, the time critical task executing program instructions via a thread on a core of a processing unit, and scheduling the time critical task based on the received timing information. The method may further include programming a lateness timer, waiting for a wakeup time to obtain, and notifying the processing unit of the scheduling. Additionally, the method may include executing, on the core of the processing unit, the time critical task in accordance with the scheduling, monitoring the lateness timer, and asserting a thread execution interrupt in response to the lateness timer expiring, thereby suspending execution of the time critical task. | 10-09-2014 |
20140304710 | UPDATING A WORKFLOW WHEN A USER REACHES AN IMPASSE IN THE WORKFLOW - Provided are a method, system, and article of manufacture for updating a workflow when a user reaches an impasse in the workflow. A workflow program processes user input at a current node in a workflow comprised of nodes and workflow paths connecting the nodes, and wherein the user provides user input to traverse through at least one workflow path to reach the current node. The workflow program processes user input at the current node to determine whether there is a next node in the workflow for the processed user input. The workflow program transmits information on the current node to an analyzer in response to determining that there is no next node in the workflow. The analyzer processes the information on the current node to determine whether there are modifications to the current node. The analyzer transmits to the workflow program an update including the determined modifications to the current node in response to determining the modifications. | 10-09-2014 |
20140310712 | SEQUENTIAL COOPERATION BETWEEN MAP AND REDUCE PHASES TO IMPROVE DATA LOCALITY - Methods and arrangements for task scheduling. A job is accepted, the job comprising a plurality of phases, each of the phases comprising at least one task. For each of a plurality of slots, a fetching cost associated with receipt of one or more of the tasks is determined. The slots are grouped into a plurality of sets. A pair of thresholds is determined for each of the sets, the thresholds being associated with the determined fetching costs and comprising upper and lower numerical bounds for guiding receipt of one or more of the tasks. Other variants and embodiments are broadly contemplated herein. | 10-16-2014 |
20140310713 | DISPLAY OBJECT PRE-GENERATION - In one embodiment, a computing device identifies a portion of a display object to pre-generate. The device may monitor a thread to identify the next upcoming window of idle time (i.e., the next opportunity when the thread will be idle for a minimum period of time). The device may add one or more selected pre-generation tasks to a message queue for execution by the thread during the window. The device may execute the one or more selected pre-generation tasks in the message queue by pre-generating at least one selected element of a display object with content for a portion of the content layout, and then returning the display object. | 10-16-2014 |
20140310714 | PREDICTIVE DIAGNOSIS OF SLA VIOLATIONS IN CLOUD SERVICES BY SEASONAL TRENDING AND FORECASTING WITH THREAD INTENSITY ANALYTICS - Data can be categorized into facts, information, hypotheses, and directives. Activities generate certain categories of data from other categories of data through the application of knowledge, which can be categorized into classifications, assessments, resolutions, and enactments. Activities can be driven by a Classification-Assessment-Resolution-Enactment (CARE) control engine. The CARE control and these categorizations can be used to enhance a multitude of systems, for example a diagnostic system, such as through historical record keeping, machine learning, and automation. Such a diagnostic system can include a system that forecasts computing system failures based on the application of knowledge to system vital signs such as thread or stack segment intensity and memory heap usage. These vital signs are facts that can be classified to produce information such as memory leaks, convoy effects, or other problems. Classification can involve the automatic generation of classes, states, observations, predictions, norms, objectives, and the processing of sample intervals having irregular durations. | 10-16-2014 |
20140310715 | Modeling and Consuming Business Policy Rules - Concepts and technologies are described herein for modeling and consuming business policy rules. A policy server executes a policy application for modeling and storing the business policy rules. The business policy rules are modeled and stored in a data storage device according to an extensible policy framework architecture that can be tailored by administrators or other entities to support business-specific needs and/or operations. The modeled business policy rules can be used to support enforcement of business policy rules against various business operations, as well as allowing histories and/or other audits of business policy rules to be completed based upon information stored as the business policy rules. | 10-16-2014 |
20140310716 | COMMUNICATION CONTROL METHOD AND RECORDING MEDIUM - A non-transitory computer-readable recording medium stores a program, which when processed by a processor, causes a computer to serve as components of the computer. The components include a recording unit to record information indicating whether a communication requesting a first process is being performed in a transmitting source, the transmitting source performing the communication requesting the first process that takes a long time to complete and a communication requesting a second process that takes a time shorter than that of the first process, and a simultaneous connection control unit to control a number of communications requesting the first process by referring to the information stored in the recording unit such that at least one communication requesting the second process is performed in the transmitting source when a number of simultaneously occurring communications allowed to be performed is restricted in the transmitting source of the communications and a receiving destination. | 10-16-2014 |
20140310717 | Intelligent Data Storage and Processing Using FPGA Devices - A re-configurable logic device such as a field programmable gate array (FPGA) can be used to deploy a data processing pipeline, the pipeline comprising a plurality of pipelined data processing engines, the plurality of pipelined data processing engines being configured to perform processing operations, wherein the pipeline comprises a multi-functional pipeline, and wherein the re-configurable logic device is further configured to controllably activate or deactivate each of the pipelined data processing engines in the pipeline in response to control instructions and thereby define a function for the pipeline, each pipeline function being the combined functionality of each activated pipelined data processing engine in the pipeline. | 10-16-2014 |
20140317627 | SCHEDULING APPARATUS AND METHOD OF DYNAMICALLY SETTING THE SIZE OF A ROTATING REGISTER - A scheduling apparatus for dynamically setting the size of a rotating register of a local register file during runtime is provided. The scheduling apparatus may include a determiner configured to determine whether a non-rotating register of a central register file is sufficient to schedule a program loop; a selector configured to select at least one local register file to which a needed non-rotating register is allocated, in response to a determination that the non-rotating register of the central register file has a size sufficient to schedule the program loop; and a scheduler configured to schedule a non-rotating register of the at least one selected local register file. | 10-23-2014 |
20140317628 | MEMORY APPARATUS FOR PROCESSING SUPPORT OF LONG ROUTING IN PROCESSOR, AND SCHEDULING APPARATUS AND METHOD USING THE MEMORY APPARATUS - Provided are a scheduling apparatus and method for effective processing support of long routing in a coarse grain reconfigurable array (CGRA)-based processor. The scheduling apparatus includes: an analyzer configured to analyze a degree of skew in a data flow of a program; a determiner configured to determine whether operations in the data flow utilize a memory spill based on the analyzed degree of skew; and an instruction generator configured to eliminate dependency between the operations that are determined to utilize the memory spill, and to generate a memory spill instruction. | 10-23-2014 |
20140317629 | CONTROLLING TASKS PERFORMED BY A COMPUTING SYSTEM - Controlling tasks includes: receiving ordering information that specifies at least a partial ordering among a plurality of tasks; and generating instructions for performing at least some of the tasks based at least in part on the ordering information. Instructions are stored for executing a first subroutine corresponding to a first task, including a first control section that controls execution of at least a second subroutine corresponding to a second task, the first control section including a function configured to change state information associated with the second task, and to determine whether or not to initiate execution of the second subroutine based on the changed state information. Instructions are stored for executing the second subroutine, including a task section for performing the second task and a second control section that controls execution of a third subroutine corresponding to a third task. | 10-23-2014 |
20140317630 | DATA PROCESSING SYSTEM WITH DATA TRANSMIT CAPABILITY - A data processing system with data transmit capability comprising an operating system for supporting processes, such that the processes are associated with one or more resources, the operating system being arranged to police the accessing by processes of resources so as to inhibit a process from accessing resources with which it is not associated. Part of this system is an interface for interfacing between each process and the operating system, and a memory for storing state information for at least one process. The interface may be arranged to analyze instructions from the processes to the operating system and, upon detecting an instruction to re-initialize a process, cause state information corresponding to the pre-existing state information to be stored in the memory as state information for the re-initialized process and to be associated with the resource. | 10-23-2014 |
20140325516 | DEVICE FOR ACCELERATING THE EXECUTION OF A C SYSTEM SIMULATION - A device is provided for accelerating, on a platform comprising a plurality of processing units, the execution of a SystemC simulation of a system, said simulation comprising a SystemC kernel and SystemC processes. The device comprises hardware means for scheduling the SystemC processes on the processing units in a dynamic manner during the execution of the simulation, these means notably making it possible to preempt the processing units. | 10-30-2014 |
20140325517 | SERVER SYSTEM, METHOD FOR CONTROLLING THE SAME, AND PROGRAM FOR EXECUTING PARALLEL DISTRIBUTED PROCESSING - If the number of task attempts has not exceeded the maximum number of attempts, a server system transmits a regular job to cause tasks to execute a particular process, and if the number of task attempts has exceeded the maximum number of attempts, the server system transmits a failed job to cause the tasks to execute post-processing corresponding to the particular process. | 10-30-2014 |
20140325518 | METHOD AND DEVICE FOR MANAGING MEMORY OF USER DEVICE - A method and a device are provided for dynamically managing background processes according to a memory status so as to efficiently use the memory in a user device supporting a multitasking operating system. The method includes determining reference information for adjustment of the number of background processes; identifying a memory status based on the reference information; and adjusting the number of the background processes in correspondence to the memory status. | 10-30-2014 |
20140331230 | Remote Task Queuing by Networked Computing Devices - The described embodiments include a networking subsystem in a second computing device that is configured to receive a task message from a first computing device. Based on the task message, the networking subsystem updates an entry in a task queue with task information from the task message. A processing subsystem in the second computing device subsequently retrieves the task information from the task queue and performs the corresponding task. In these embodiments, the networking subsystem processes the task message (e.g., stores the task information in the task queue) without causing the processing subsystem to perform operations for processing the task message. | 11-06-2014 |
20140331231 | HARDWARE TASK MANAGER - A hardware task manager for an adaptive computing system. The adaptive computing system includes a plurality of computing nodes including an execution unit configured to execute tasks. An interconnection network is operatively coupled to the plurality of computing nodes to provide interconnections among the plurality of computing nodes. The hardware task manager manages execution of the tasks by the execution unit. | 11-06-2014 |
20140331232 | PORTABLE MULTIMEDIA DEVICE AND OPERATING METHOD THEREFOR - Disclosed are a portable multimedia device and an operating method therefor. The portable multimedia device comprises a Flash decoder and a system function decoder. The operating method comprises: the Flash decoder parsing a loaded swf application compiled by using a Flash development tool, and presenting a Flash interactive interface, the swf application comprising a script function for implementing different device operations; in a parsing procedure, determining a script function required to be invoked, and when the script function required to be invoked has an expanded script function identifier, determining the script function required to be invoked as an expanded script function; according to set correspondence between an expanded script function and a function pointer, determining a function pointer corresponding to the script function required to be invoked, different function pointers pointing to different system functions; and triggering the system function decoder to invoke and execute the system function to which the determined function pointer points, thereby implementing a device operation. This disclosure supports an application implementing a full Flash interactive interface, provides a high-quality sense of interactive movement, and improves user experience. | 11-06-2014 |
20140337848 | LOW OVERHEAD THREAD SYNCHRONIZATION USING HARDWARE-ACCELERATED BOUNDED CIRCULAR QUEUES - A first thread is placed into a blocked state by causing the thread to perform a blocking pop operation on a hardware-accelerated, single-entry queue. When a synchronization event completes, a second thread may release the first thread from the blocked state pushing a data value onto the hardware accelerated, single-entry queue. The push operation satisfies the blocking pop operation, and the first thread is released. | 11-13-2014 |
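In software terms, the park/release pattern looks like this Python sketch, with `queue.Queue(maxsize=1)` standing in for the hardware-accelerated, single-entry queue; the token value is illustrative.

```python
import queue
import threading

# Each parked thread blocks on its own single-entry queue; a peer
# releases it by pushing a value, which satisfies the blocking pop.
park_slot = queue.Queue(maxsize=1)

def waiter():
    token = park_slot.get()            # blocking pop: thread parks here
    print("released with", token)

def releaser():
    park_slot.put("sync-done")         # push releases the blocked thread

t = threading.Thread(target=waiter)
t.start()
releaser()
t.join()
```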
20140337849 | APPARATUS AND JOB SCHEDULING METHOD THEREOF - An apparatus and a job scheduling method are provided. For example, the apparatus is a multi-core processing apparatus. The apparatus and method minimize performance degradation of a core caused by sharing resources by dynamically managing a maximum number of jobs assigned to each core of the apparatus. The apparatus includes at least one core including an active cycle counting unit configured to store a number of active cycles and a stall cycle counting unit configured to store a number of stall cycles and a job scheduler configured to assign at least one job to each of the at least one core, based on the number of active cycles and the number of stall cycles. When the ratio of the number of stall cycles to a number of active cycles for a core is too great, the job scheduler assigns fewer jobs to that core to improve performance. | 11-13-2014 |
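The feedback rule lends itself to a small sketch. The quota and ratio limit below are illustrative assumptions; the abstract does not give concrete numbers.

```python
def max_jobs_for_core(active_cycles, stall_cycles,
                      base_jobs=8, stall_ratio_limit=0.25):
    """Shrink a core's job quota as its stall/active ratio grows."""
    if active_cycles == 0:
        return base_jobs
    ratio = stall_cycles / active_cycles
    if ratio <= stall_ratio_limit:
        return base_jobs                       # little contention: full quota
    # Scale the quota down in proportion to the excess stalling,
    # never below a single job.
    return max(1, int(base_jobs * stall_ratio_limit / ratio))
```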
20140337850 | SYSTEM AND METHOD FOR PARALLEL PROCESSING USING DYNAMICALLY CONFIGURABLE PROACTIVE CO-PROCESSING CELLS - A parallel processing architecture includes a CPU, a task pool populated by the CPU, and a plurality of autonomous co-processing cells each having an agent configured to proactively interrogate the task pool to retrieve tasks appropriate for a particular co-processor. Each co-processor communicates with the task pool through a switching fabric, which facilitates connections for data transfer and arbitration between all system resources. Each co-processor notifies the task pool when a task or task thread is completed, whereupon the task pool notifies the CPU. | 11-13-2014 |
20140344816 | SYSTEM AND METHOD FOR RUNNING PHP INSTANCES - The present invention is a method of operating a series of software-based processes having the steps of creating a first PHP based parent instance, forming a multi-thread second PHP instance from the first instance, closing the first instance, and operating an application on the second instance. | 11-20-2014 |
20140344817 | CONVERTING A HYBRID FLOW - Converting a hybrid flow can include combining each of a plurality of task nodes with a plurality of corresponding operators of the hybrid flow and converting the combined plurality of task nodes and the plurality of corresponding operators of the hybrid flow to a data flow graph using a code template. | 11-20-2014 |
20140344818 | TASK SCHEDULER, MICROPROCESSOR, AND TASK SCHEDULING METHOD - A task scheduler scheduling running units to execute a plurality of tasks is provided. The task scheduler includes a time control portion having a common time to control a state of the plurality of tasks, and a task calculator calculating a slack disappearance time for each of the plurality of tasks. An arrival time of one of the plurality of tasks is defined as T. A deadline time representing when the one of the plurality of tasks is required to be completed is defined as D. A worst case execution time predicted to be required for a completion of the one of the plurality of tasks is defined as W. A current elapsed time is defined as C. The slack disappearance time is expressed by S=T+D−W+C. A task having an earliest slack disappearance time from among the plurality of tasks is scheduled to be preferentially executed. | 11-20-2014 |
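
The abstract gives the slack disappearance time explicitly as S = T + D − W + C, with the earliest-S task executed preferentially. A worked example (the task values below are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    arrival: float    # T
    deadline: float   # D, relative to arrival
    wcet: float       # W, worst-case execution time
    elapsed: float    # C, execution time consumed so far

def slack_disappearance(t: Task) -> float:
    # S = T + D - W + C: the instant at which the task's slack runs out.
    return t.arrival + t.deadline - t.wcet + t.elapsed

tasks = [Task("a", arrival=0, deadline=10, wcet=6, elapsed=1),   # S = 5
         Task("b", arrival=2, deadline=5,  wcet=3, elapsed=0)]   # S = 4
next_task = min(tasks, key=slack_disappearance)
print(next_task.name)   # "b" has the earliest slack disappearance time
```
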
20140351817 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT - An information processing apparatus has one or more programs to execute a workflow and receives an execution request for the workflow. The information processing apparatus includes a workflow storage unit to store therein one or more workflow definitions each defining a workflow including an execution sequence of one or more processes each executed by any of the one or more programs; a selecting unit to receive a selection of a workflow to be executed based on the workflow definitions stored in the workflow storage unit; an editing unit to edit the selected workflow to be executed in response to a user operation to edit the selected workflow; and a workflow controller to execute, in response to reception of an execution request for the edited workflow to be executed, the edited workflow by any of the one or more programs corresponding to a process included in the edited workflow. | 11-27-2014 |
20140351818 | METHOD AND SYSTEM FOR INPUT DRIVEN PROCESS FLOW MANAGEMENT - A method for input driven process flow management includes receiving a request, each request having an input and an output, identifying tasks for a process suited to the request type, receiving inputs, generating, based on the inputs and the process, a process flow step, and executing the process flow step to generate results (outputs). The method further includes receiving a second set of inputs, different from the first set of inputs, generating, based on the second set of inputs and the process, a second process flow step different from the first process flow step, and executing the second process flow step to generate a second set of results, and so on, until all tasks are executed or a termination task has been reached. | 11-27-2014 |
20140359628 | DYNAMICALLY ALTERING SELECTION OF ALREADY-UTILIZED RESOURCES - An approach to control workflow so that a relatively high priority work item can sometimes be automatically controlled by software to interrupt work being performed, by one or more resource unit(s), on a relatively lower priority work item. The analysis for deciding whether or not an interruption occurs depends upon interruptibility scalars (that is, interruptibility quotients and/or factors) and interruptibility threshold(s). | 12-04-2014 |
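
One way to read the decision rule above is as a scalar score, built from the interruptibility quotients and factors, compared against a threshold. The multiplicative combination and the values below are assumptions for illustration, not the application's formula:

```python
from math import prod

def should_interrupt(priority_gap, interruptibility_factors, threshold=1.0):
    """Interrupt the lower-priority work item when the higher-priority
    item's combined score exceeds the interruptibility threshold."""
    score = priority_gap * prod(interruptibility_factors)
    return score > threshold

print(should_interrupt(2.0, [0.9, 0.8]))   # True: preempt the resource unit(s)
print(should_interrupt(1.1, [0.5, 0.6]))   # False: let current work finish
```
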
20140359629 | MECHANISM FOR ISSUING REQUESTS TO AN ACCELERATOR FROM MULTIPLE THREADS - An apparatus is described having multiple cores, each core having: a) a CPU; b) an accelerator; and, c) a controller and a plurality of order buffers coupled between the CPU and the accelerator. Each of the order buffers is dedicated to a different one of the CPU's threads. Each one of the order buffers is to hold one or more requests issued to the accelerator from its corresponding thread. The controller is to control issuance of the order buffers' respective requests to the accelerator. | 12-04-2014 |
20140359630 | Image Forming System for Managing Logs - An image forming system includes a log management unit and an operation state image generation unit. The log management unit manages a job log indicating a history of a job executed by an image forming apparatus, a log image indicating a history of an output image serving as an output target of the image forming apparatus for the job, and an operation log indicating a history of an operation input to an operation unit in the image forming apparatus for the job. The operation state image generation unit generates an operation state image indicating which of a plurality of operable items in the operation unit is operated in an operation included in the operation log. | 12-04-2014 |
20140359631 | INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING TERMINAL - An information processing system includes a detection unit configured to detect an end of a second application program that cooperates with a first application program operated by a user; a report unit configured to report the end to the first application program, and end the first application program; and a request unit configured to request a third application program to display information relevant to the end of the second application program. | 12-04-2014 |
20140366031 | MANAGEMENT OF BACKGROUND TASKS - Background tasks are managed through background task settings that allow or prevent the execution of agents associated with mobile device applications in the background of a mobile computing device. Background task management can extend the battery life of a mobile device and can be done by a user, the mobile device or a combination thereof. Agents scheduled for execution by a mobile device are executed according to the background task settings associated with the application. Background task settings can be controlled via background task control panels. Background task settings can be set on a system-wide, application or background task basis. Disabled background tasks can be enabled when the application is next launched. A user can be invited to navigate to the background task control panels when various events occur such as the battery life dropping below a threshold or the current power consumption exceeding a threshold. | 12-11-2014 |
20140373019 | GENERATING DIFFERENCES FOR TUPLE ATTRIBUTES - A sequence of tuples, each having one or more attributes, is received at one of one or more processing elements operating on one or more processors. Each processing element may have one or more stream operators. A first stream operator may be identified as one that only processes an instance of a first attribute in a currently received tuple when a difference between an instance of the first attribute in a previously received tuple and the instance of the first attribute in the currently received tuple is outside of a difference threshold. A second stream operator may generate a difference attribute from a first instance of the first attribute in a first one of the received tuples and a second instance of the first attribute in a second one of the received tuples. The difference attribute may be transmitted from the second stream operator to the first stream operator. | 12-18-2014 |
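
A small sketch of the two operators working together, with the second operator attaching a difference attribute and the first operator skipping tuples whose change stays inside the threshold; the attribute names and the threshold value are illustrative:

```python
DIFFERENCE_THRESHOLD = 0.5   # assumed tuning value

def difference_operator(tuples, attr):
    """Second operator: attach the difference between consecutive
    instances of the attribute."""
    previous = None
    for t in tuples:
        t["diff"] = 0.0 if previous is None else t[attr] - previous
        previous = t[attr]
        yield t

def thresholded_operator(tuples):
    """First operator: only process tuples whose change is outside
    the difference threshold."""
    for t in tuples:
        if abs(t["diff"]) > DIFFERENCE_THRESHOLD:
            yield t

stream = [{"temp": 20.0}, {"temp": 20.1}, {"temp": 21.0}]
for t in thresholded_operator(difference_operator(stream, "temp")):
    print(t)   # only the tuple whose temperature jumped by 0.9 survives
```
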
20140373020 | METHODS FOR MANAGING THREADS WITHIN AN APPLICATION AND DEVICES THEREOF - This technology relates to assigning a new task to a current task queue based on one or more matching categories when the new task is received within an application for execution. Availability of one or more existing idle threads within one or more thread groups required for the execution of the received task is determined based on one or more utilization parameters, where each of the thread groups is associated with one or more task queues and where the current task queue is one of the task queues. One or more new threads are created for execution of the task when the existing idle threads are determined to be unavailable in the thread groups within the application. The created new threads are then allocated to the task, and the task is executed using the allocated new threads. | 12-18-2014 |
20140380319 | ADDRESS TRANSLATION/SPECIFICATION FIELD FOR HARDWARE ACCELERATOR - Embodiments relate an address translation/specification (ATS) field. An aspect includes receiving a work queue entry from a work queue in a main memory by a hardware accelerator, the work queue entry corresponding to an operation of the hardware accelerator that is requested by user-space software, the work queue entry comprising a first ATS field that describes a structure of the work queue entry. Another aspect includes, based on determining that the first ATS field is consistent with the operation corresponding to the work queue entry and the structure of the work queue entry, executing the operation corresponding to the work queue entry by the hardware accelerator. Another aspect includes, based on determining that the first ATS field is not consistent with the operation corresponding to the work queue entry and the structure of the work queue entry, rejecting the work queue entry by the hardware accelerator. | 12-25-2014 |
20140380320 | JOINT OPTIMIZATION OF MULTIPLE PHASES IN LARGE DATA PROCESSING - Methods and arrangements for task scheduling. A plurality of jobs is received, each job comprising at least a map phase, a copy/shuffle phase and a reduce phase. For each job, there are determined a map phase execution time and a copy/shuffle phase execution time. Each job is classified into at least one group based on at least one of: the determined map phase execution time and the determined copy/shuffle phase execution time. The plurality of jobs are executed via processor sharing, and the executing includes determining a similarity measure between jobs based on current job execution progress. Other variants and embodiments are broadly contemplated herein. | 12-25-2014 |
20140380321 | ENERGY EFFICIENT JOB SCHEDULING - The subject disclosure is directed towards scheduling jobs, and the speed at which to run one or more variable-speed processors, so that jobs save energy yet complete in time when the volume of a job is not known in advance, that is, in a non-clairvoyant setting. A non-clairvoyant algorithm uses an existing clairvoyant algorithm to determine the speed based upon information known from running one or more jobs, in full or in part. Also described is rounding jobs based upon their densities into rounding queues so that a hybrid of highest-density-first rules and FIFO rules may be used to obtain the information used by the clairvoyant algorithm. | 12-25-2014 |
20140380322 | Task Scheduling for Highly Concurrent Analytical and Transaction Workloads - Systems and method for a task scheduler with dynamic adjustment of concurrency levels and task granularity are disclosed for improved execution of highly concurrent analytical and transactional systems. The task scheduler can avoid both over commitment and underutilization of computing resources by monitoring and controlling the number of active worker threads. The number of active worker threads can be adapted to avoid underutilization of computing resources by giving the OS control of additional worker threads processing blocked application tasks. The task scheduler can dynamically determine a number of parallel operations for a particular task based on the number of available threads. The number of available worker threads can be determined based on the average availability of worker threads in the recent history of the application. Based on the number of available worker threads, the partitionable operation can be partitioned into a number of sub operations and executed in parallel. | 12-25-2014 |
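
The partitioning step described above can be sketched as follows: size a task's parallel fan-out to the recent average of available worker threads. The history window and the toy workload are invented for illustration:

```python
from collections import deque

class AdaptivePartitioner:
    def __init__(self, window=16):
        self.samples = deque(maxlen=window)   # recent availability samples

    def record_available_workers(self, n):
        self.samples.append(n)

    def partition(self, items):
        # Average availability in recent history decides the fan-out.
        avg = max(1, round(sum(self.samples) / max(len(self.samples), 1)))
        chunk = max(1, len(items) // avg)
        return [items[i:i + chunk] for i in range(0, len(items), chunk)]

p = AdaptivePartitioner()
for sample in (3, 4, 5):
    p.record_available_workers(sample)
print(p.partition(list(range(12))))   # four sub-operations of three items each
```
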
20140380323 | CONSISTENT MODELING AND EXECUTION OF TIME CONSTRUCTS IN BUSINESS PROCESSES - Embodiments are directed to executing a workflow using a virtualized clock and to ensuring idempotency and correctness among workflow processes. In one scenario, a computer system determines that a workflow session has been initialized. The workflow session runs as a set of episodes, where each episode includes one or more pulses of work that are performed when triggered by an event. Each workflow session is processed according to a virtualized clock that keeps a virtual session time for the workflow session. The computer system receives an event that includes an indication of the time the event was generated, and then accesses the received event to determine which pulses of work are to be performed as part of a workflow session episode. The computer system then executes the determined pulses of work according to the virtual session time indicated by the virtualized clock. | 12-25-2014 |
20140380324 | BURST-MODE ADMISSION CONTROL USING TOKEN BUCKETS - Methods and apparatus for burst-mode admission control using token buckets are disclosed. A work request (such as a read or a write) directed to a work target is received. Based on a first criterion, a determination is made that the work target is in a burst mode of operation. A token population of a burst-mode token bucket is determined, and if the population meets a second criterion, the work request is accepted for execution. | 12-25-2014 |
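
A minimal token-bucket admission check in the spirit of the abstract: once the work target is judged to be in burst mode, a request is accepted only if the burst-mode bucket holds enough tokens. The refill rate and capacity below are illustrative parameters:

```python
import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate                 # tokens added per second
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def try_consume(self, n=1):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True                  # accept the work request
        return False                     # reject or defer it

burst_bucket = TokenBucket(rate=100, capacity=500)
print(burst_bucket.try_consume())        # True while burst tokens remain
```
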
20140380325 | MULTIPROCESSOR SYSTEM - A multiprocessor system includes: a logical processor assigned to any one of a plurality of physical processors, of a first kind and a second kind, to be executed on the multiprocessor system; and a scheduler managing the assignment of the logical processor to one of the first kind physical processor and the second kind physical processor. The logical processor has a flag for holding information indicating an internal state of the logical processor. The scheduler determines the assignment of the logical processor to one of the first kind physical processor and the second kind physical processor, based on the presence or absence of an occurrence of a predetermined event and the information held in the flag. | 12-25-2014 |
20140380326 | COMPUTER PRODUCT, MULTICORE PROCESSOR SYSTEM, AND SCHEDULING METHOD - A non-transitory, computer-readable recording medium stores a scheduling program that causes a first core among multiple cores to execute a process that includes selecting a core from the cores; referring to a storage unit to assign first software assigned to the selected core, to a second core different from the selected core and among the cores, the storage unit being configured to store for each core among the cores, identification information of software assigned to the core; and assigning second software to the selected core as a result of assigning the first software to the second core, the second software being assigned when an activation request for the second software is accepted. | 12-25-2014 |
20150026685 | DEPENDENT INSTRUCTION SUPPRESSION - A method includes suppressing execution of at least one dependent instruction of a first instruction by a processor responsive to an invalid status of an ancestor load instruction associated with the first instruction. A processor includes an instruction pipeline having an execution unit to execute instructions, a load store unit for retrieving data from a memory hierarchy, and a scheduler unit. The scheduler unit selects for execution in the execution unit a first load instruction having at least one dependent instruction linked to the first load instruction for data forwarding from the load store unit and suppresses execution of a second dependent instruction of the first dependent instruction responsive to an invalid status of the first load instruction. | 01-22-2015 |
20150026686 | DEPENDENT INSTRUCTION SUPPRESSION IN A LOAD-OPERATION INSTRUCTION - A method includes suppressing execution of an operation portion of a load-operation instruction in a processor responsive to an invalid status of a load portion of load-operation instruction. A processor includes an instruction pipeline including an execution unit operable to execute instructions and a scheduler unit. The scheduler unit includes a scheduler queue and is operable to store a load-operation in the scheduler queue. The load-operation instruction includes a load portion and an operation portion. The scheduler unit schedules the load portion for execution in the execution unit, marks the operation portion in the scheduler queue as eligible for execution responsive to scheduling the load portion, receives an indication of an invalid status of the load portion, and suppresses execution of the operation portion responsive to the indication of the invalid status. | 01-22-2015 |
20150026687 | MONITORING SYSTEM NOISES IN PARALLEL COMPUTER SYSTEMS - Various embodiments monitor system noise in a parallel computing system. In one embodiment, at least one set of system noise data is stored in a shared buffer during a first computation interval. The set of system noise data is detected during the first computation interval and is associated with at least one parallel thread in a plurality of parallel threads. Each thread in the plurality of parallel threads is a thread of a program. The set of system noise data is filtered during a second computation interval based on at least one filtering condition creating a filtered set of system noise data. The filtered set of system noise data is then stored. | 01-22-2015 |
20150026688 | Systems and Methods for Adaptive Integration of Hardware and Software Lock Elision Techniques - Particular techniques for improving the scalability of concurrent programs (e.g., lock-based applications) may be effective in some environments and for some workloads, but not others. The systems described herein may automatically choose appropriate ones of these techniques to apply when executing lock-based applications at runtime, based on observations of the application in the current environment and with the current workload. In one example, two techniques for improving lock scalability (e.g., transactional lock elision using hardware transactional memory, and optimistic software techniques) may be integrated together. A lightweight runtime library built for this purpose may adapt its approach to managing concurrency by dynamically selecting one or more of these techniques (at different times) during execution of a given application. In this Adaptive Lock Elision approach, the techniques may be selected (based on pluggable policies) at runtime to achieve good performance on different platforms and for different workloads. | 01-22-2015 |
20150026689 | Independent Hit Testing - In one or more embodiments, a hit test thread which is separate from the main thread, e.g. the user interface thread, is utilized for hit testing on web content. Using a separate thread for hit testing can allow targets to be quickly ascertained. In cases where the appropriate response is handled by a separate thread, such as a manipulation thread that can be used for touch manipulations such as panning and pinch zooming, manipulation can occur without blocking on the main thread. This results in a response time that is consistently quick even on low-end hardware across a variety of scenarios. | 01-22-2015 |
20150026690 | METHOD FOR GENERATING A MACHINE HEARTBEAT - A method and system for generating a heartbeat of a process including at least one machine configured to perform a process cycle consisting of a plurality of timed events performed in a process sequence under an identified condition includes determining the duration of each of the timed events during the process cycle performed under the identified condition, ordering the durations of the plurality of timed events in the process sequence, and generating a heartbeat defined by the ordered durations of a process cycle. The identified condition may be one of a design intent, baseline, learnt, known, current or prior condition. The variance of the heartbeat between a first and at least a second identified condition may be analyzed to monitor and/or control the process or machine. The system may display the process heartbeat information and may generate a message in response to the heartbeat and/or variance thereof. | 01-22-2015 |
20150026691 | TASK SCHEDULING BASED ON DEPENDENCIES AND RESOURCES - An example system identifies a set of tasks as being designated for execution, and the set of tasks includes a first task and a second task. The example system accesses task dependency data that corresponds to the second task and indicates that the first task is to be executed prior to the second task. The example system, based on the task dependency data, generates a task dependency model of the set of tasks. The dependency model indicates that the first task is to be executed prior to the second task. The example system schedules an execution of the first task, which is scheduled to use a particular data processing resource. The scheduling is based on the dependency model. | 01-22-2015 |
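
The dependency model described above amounts to a directed graph whose topological order gives a legal execution sequence. A minimal sketch using Python's standard `graphlib` (the task names are illustrative):

```python
from graphlib import TopologicalSorter

# task -> set of tasks that must execute before it
dependencies = {
    "second_task": {"first_task"},
    "report": {"second_task"},
}

order = list(TopologicalSorter(dependencies).static_order())
print(order)   # ['first_task', 'second_task', 'report']
```
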
20150033233 | JOB DELAY DETECTION METHOD AND INFORMATION PROCESSING APPARATUS - An information processing apparatus includes a monitoring unit and a predicting unit. The monitoring unit monitors, during execution of a first job, the amount of data output by the execution of the first job. The predicting unit predicts, based on the amount of data output by the execution of the first job, whether execution of a second job finishes by a preset time limit. The second job performs a process using the data output by the execution of the first job. | 01-29-2015 |
20150033234 | PROVIDING QUEUE BARRIERS WHEN UNSUPPORTED BY AN I/O PROTOCOL OR TARGET DEVICE - A host controller is provided that unilaterally supports queue barrier functionality. The host controller may receive a first task marked with a queue barrier indicator. As a result, the host controller stalls transmission of the first task to a target device. Additionally, the host controller also stalls transmission of any task, occurring after the first task, to the target device. The host controller only sends the first task to the target device once an indication is received from the target device that all previously sent tasks have been processed. The host controller only sends any task, occurring after the first task, to the target device once an indication is received from the target device that the first task has been processed. | 01-29-2015 |
20150033235 | Distributed Mechanism For Minimizing Resource Consumption - Example embodiments presented herein are directed towards multi-core processing provided in a distributed manner with an emphasis on power management. The example embodiments provide a processing node, and a method therein, for the distribution of processing tasks, and energy saving mechanisms, which are performed autonomously. | 01-29-2015 |
20150033236 | PERIODIC ACCESS OF A HARDWARE RESOURCE - An example includes periodic access of a hardware resource of a computer by instructions in firmware of the computer that are executed by an interpreter in the context of the operating system without a driver. The access occurs in response to a periodic interrupt generated by a timer. | 01-29-2015 |
20150033237 | UTILITY-OPTIMIZED SCHEDULING OF TIME-SENSITIVE TASKS IN A RESOURCE-CONSTRAINED ENVIRONMENT - Systems and methods implementing utility-maximized scheduling of time-sensitive tasks in a resource constrained-environment are described herein. Some embodiments include a method for utility-optimized scheduling of computer system tasks performed by a processor of a first computer system that includes determining a time window including a candidate schedule of a new task to be executed on a second computer system, identifying other tasks scheduled to be executed on the second computer system within said time window, and identifying candidate schedules that each specifies the execution times for at least one of the tasks (which include the new task and the other tasks). The method further includes calculating an overall utility for each candidate schedule based upon a task utility calculated for each of the tasks when scheduled according to each corresponding candidate schedule and queuing the new task for execution according to a preferred schedule with the highest overall utility. | 01-29-2015 |
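
The selection step, scoring every candidate schedule by the sum of per-task utilities and queuing the new task according to the best one, might look like the sketch below; the linear-decay utility function is an assumed stand-in, not the application's model:

```python
def task_utility(task, start_time):
    # Assumed utility: full value if started by the preferred time,
    # decaying linearly afterwards, never below zero.
    lateness = max(0.0, start_time - task["preferred_start"])
    return max(0.0, task["value"] - lateness * task["decay"])

def best_schedule(tasks, candidate_schedules):
    """candidate_schedules: list of {task_name: start_time} mappings."""
    def overall_utility(schedule):
        return sum(task_utility(t, schedule[t["name"]]) for t in tasks)
    return max(candidate_schedules, key=overall_utility)

tasks = [
    {"name": "new", "value": 10.0, "preferred_start": 0.0, "decay": 1.0},
    {"name": "old", "value": 8.0,  "preferred_start": 1.0, "decay": 2.0},
]
print(best_schedule(tasks, [{"new": 0.0, "old": 3.0},
                            {"new": 2.0, "old": 1.0}]))   # the second wins
```
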
20150052529 | EFFICIENT TASK SCHEDULING USING A LOCKING MECHANISM - For efficient task scheduling using a locking mechanism, a new task is allowed to spin on the locking mechanism if a number of tasks spinning on the locking mechanism is less than a predetermined threshold for parallel operations requiring locks between the multiple threads. | 02-19-2015 |
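
A rough model of the admission rule above: a task may spin on the lock only while the number of current spinners is below the threshold, otherwise it blocks conventionally. The threshold value and the condition-variable fallback are illustrative choices:

```python
import threading
import time

class BoundedSpinLock:
    MAX_SPINNERS = 4                      # assumed threshold

    def __init__(self):
        self._cond = threading.Condition()
        self._locked = False
        self._spinners = 0

    def acquire(self):
        with self._cond:
            spin = self._spinners < self.MAX_SPINNERS
            if spin:
                self._spinners += 1
        if spin:
            while True:                   # spin path: poll the flag
                with self._cond:
                    if not self._locked:
                        self._locked = True
                        self._spinners -= 1
                        return
                time.sleep(0)             # yield between polls
        else:
            with self._cond:              # too many spinners: block instead
                while self._locked:
                    self._cond.wait()
                self._locked = True

    def release(self):
        with self._cond:
            self._locked = False
            self._cond.notify()
```
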
20150052530 | TASK-BASED MODELING FOR PARALLEL DATA INTEGRATION - System, method, and computer program product to perform an operation for task-based modeling for parallel data integration, by determining, for a data flow, a set of processing units, each of the set of processing units defining one or more data processing operations to process the data flow, generating a set of tasks to represent the set of processing units, each task in the set of tasks comprising one or more of the data processing operations of the set of processing units, optimizing the set of tasks based on a set of characteristics of the data flow, and generating a composite execution plan based on the optimized set of tasks to process the data flow in a distributed computing environment. | 02-19-2015 |
20150052531 | MIGRATING JOBS FROM A SOURCE SERVER FROM WHICH DATA IS MIGRATED TO A TARGET SERVER TO WHICH THE DATA IS MIGRATED - Provided are a computer program product, system, and method for migrating jobs from a source server from which data is migrated to a target server to which the data is migrated. Mirrored data is copied from a source storage to a target storage. A determination is made of at least one eligible job of the jobs executing in the source server having execution characteristics indicating that the job is eligible for migration to the target server. The determined at least one eligible job is migrated to the target server to execute on the target server and perform operations with respect to the mirrored data in the target storage. The migrated eligible job is disabled at the source server. | 02-19-2015 |
20150052532 | PARALLEL COMPUTER SYSTEM, METHOD OF CONTROLLING PARALLEL COMPUTER SYSTEM, AND RECORDING MEDIUM - A management device includes: a memory; and a processor coupled to the memory. The processor executes a process including: storing therein an assignment table including first assignment information indicating whether a job is assigned to the information processing devices and second assignment information indicating that a job is constantly assigned to virtual information processing devices arranged at ends of a connection relation of the information processing devices; searching regions in which idle information processing devices assigned with no job are arranged continuously, using the assignment table stored at the storing; specifying a region appropriate for assignment of a job as an assignment target among the regions searched by the searching; and assigning the job as the assignment target to the region specified by the specifying. | 02-19-2015 |
20150052533 | MULTIPLE THREADS EXECUTION PROCESSOR AND OPERATING METHOD THEREOF - There is provided a multiple threads execution processor. The multiple threads execution processor includes a thread selector configured to select a first thread from among a plurality of threads for executing a program code, and a thread executor configured to execute the first thread selected by the thread selector, and execute a second thread selected by the thread selector from among the plurality of threads after completing execution of the first thread. | 02-19-2015 |
20150052534 | METHOD AND DEVICE FOR EXECUTING SCHEDULED TASKS, COMPUTER-READABLE STORAGE MEDIUM, GRAPHICAL USER INTERFACE AND MOBILE TERMINAL - A method and device for processing scheduled tasks are provided, together with a computer-readable storage medium, a graphical user interface, and a mobile terminal. The method includes: acquiring the current geographic location and current time of the mobile terminal; determining whether the current geographic location of the mobile terminal matches a trigger geographic location preset in the scheduled task; determining whether the current time of the mobile terminal matches a trigger time of the scheduled task; and triggering the mobile terminal to execute the scheduled task if both the geographic location and the current time match. By checking both the trigger geographic location and the trigger time before executing the scheduled task, the method spares the user from repeatedly setting scheduled tasks in different locations: the mobile terminal executes the scheduled task according to the user's current geographic location and time. | 02-19-2015 |
20150058854 | Direct Memory Interface Access in a Multi-Thread Safe System Level Modeling Simulation - Methods, systems, and machine readable medium for multi-thread safe system level modeling simulation (SLMS) of a target system on a host system. An example of a SLMS is a SYSTEMC simulation. During the SLMS, SLMS processes are executed in parallel via a plurality of threads. SLMS processes represent functional behaviors of components within the target system, such as functional behaviors of processor cores. Deferred execution may be used to defer execution of operations of SLMS processes that access a shared resource. Multi-thread safe direct memory interface (DMI) access may be used by a SLMS process to access a region of the memory in a multi-thread safe manner. Access to regions of the memory may also be guarded if they are at risk of being in a transient state when being accessed by more than one SLMS process. | 02-26-2015 |
20150058855 | MANAGEMENT OF BOTTLENECKS IN DATABASE SYSTEMS - Management is provided for threads of a database system that is subject to a plurality of disparate bottleneck conditions for resources. A monitor thread retrieves, from a first thread, first monitor data for first bottleneck condition of a first type. The monitor thread compares the first monitor data to a trigger level for the first bottleneck condition and then determines, in response to the comparison of the first monitor data to the trigger level, a potential source of the first bottleneck condition. A potential blocker thread is identified based upon the potential source of the first bottleneck condition. The monitor thread retrieves, from the potential blocker thread, second monitor data for a second type of bottleneck condition that is different from the first type of bottleneck condition. Based upon monitor data, a blocking thread is identified, and a particular blocking solution is applied to the blocking thread. | 02-26-2015 |
20150058856 | Method and Apparatus Integrating Navigation and Saving the Writable State of Applications - The invention includes a computerized method responding to a navigation cue from a user by saving the writable state of the application and directing the computer, through the window operating system, to perform the navigation task indicated by the navigation cue. The invention includes the following, which will each be discussed in turn: an alteration mechanism including means for altering the window operating system by altering the hook triggered by each navigation cue to integrate saving the writable state; the window operating system integrating response to each navigation cue and saving the writable state; source code artifacts which can be installed to implement navigation cues triggering saving the writable state; and a business method generating revenue for a business entity. | 02-26-2015 |
20150067687 | Asynchronous, Interactive Task Workflows - A method of performing an asynchronous, interactive workflow is provided. The method includes generating a workflow comprising one or more tasks and executing at least a portion of the one or more tasks of the workflow automatically, without user interaction, and in response to a trigger. The method further includes detecting that a current task of the one or more tasks of the workflow requires user interaction, adding the current task to a to-do list of tasks requiring user interaction, and, upon determining that one of at least one user associated with the workflow has logged on, presenting at least one task from the to-do list to the user, receiving the required user interaction, and executing the at least one task from the to-do list based on the received user interaction. | 03-05-2015 |
20150067688 | METHOD AND APPARATUS FOR CONTROLLING JOB SCHEDULE - A controller apparatus obtains job history information including execution records of one or more jobs not registered in a scheduler. Based on the job history information, the controller apparatus then estimates resource usage during execution time periods initially scheduled for jobs registered in the scheduler. This resource usage includes that of at least one of the jobs not registered in the scheduler which is to be executed together with the registered jobs on an information processing apparatus. When the estimated resource usage satisfies predetermined conditions, the controller apparatus schedules jobs including the registered jobs and the at least one of the non-registered jobs. | 03-05-2015 |
20150067689 | METHOD, SYSTEM, AND PROGRAM FOR SCHEDULING JOBS IN A COMPUTING SYSTEM - Embodiments of the present invention include a job scheduling system configured to schedule job execution timings in a computing system; the job scheduling system comprising: a job information receiving module configured to receive job information defining a job pending execution in the computing system, the job information including an indication of computing hardware resources required to execute the job, and an indication of an allocation of application licenses required to execute the job; and a job execution scheduler configured to schedule execution of the job at a timing determined in dependence upon the availability of both the indicated computing hardware resources and the indicated application licenses. | 03-05-2015 |
20150067690 | SYSTEM AND METHOD FOR GENERATING A PLAN TO COMPLETE A TASK IN COMPUTING ENVIRONMENT - A system and method for generating a plan to complete a task by providing a framework that facilitates the use of heterogeneous data sources without altering the planning algorithm are disclosed. The method includes using a first dataset of logical atoms represented in a predicate schema and a second dataset of database atoms represented in a non-predicate schema, modifying a grammar rule, a domain definition, and a problem definition, and selecting and executing task methods and task operators to complete a task. Execution of a task operator includes verifying a precondition, assigning variables with values when the precondition is valid, and modifying (deleting from and adding to) a plan state. Execution of a task method includes verifying a precondition of the task method, assigning variables with values when the precondition is valid, decomposing the task into sub-tasks, assigning arguments of the task method to sub-tasks, and adding the sub-tasks to a task list. Thereafter, a plan is generated. | 03-05-2015 |
20150074668 | Use of Multi-Thread Hardware For Efficient Sampling - This disclosure pertains to systems, methods, and computer readable media for utilizing an unused hardware thread of a multi-core microcontroller of a graphical processing unit (GPU) to gather sampling data of commands being executed by the GPU. The multi-core microcontroller may include two or more hardware threads and may be responsible for managing the scheduling of commands on the GPU. In one embodiment, the firmware code of the multi-core microcontroller which is responsible for running the GPU may run entirely on one hardware thread of the microcontroller, while the second hardware thread is kept in a dormant state. This second hardware thread may be used for gathering sampling data of the commands run on the GPU. The sampling data can be used to assist developers identify bottlenecks and to help them optimize their software programs. | 03-12-2015 |
20150074669 | TASK-BASED MODELING FOR PARALLEL DATA INTEGRATION - System, method, and computer program product to perform an operation for task-based modeling for parallel data integration, by determining, for a data flow, a set of processing units, each of the set of processing units defining one or more data processing operations to process the data flow, generating a set of tasks to represent the set of processing units, each task in the set of tasks comprising one or more of the data processing operations of the set of processing units, optimizing the set of tasks based on a set of characteristics of the data flow, and generating a composite execution plan based on the optimized set of tasks to process the data flow in a distributed computing environment. | 03-12-2015 |
20150082313 | APPARATUSES AND METHODS FOR GENERATING EVENT CODES INCLUDING EVENT SOURCE - Apparatuses and methods implemented therein are disclosed for generating event codes that include the source of the events that caused the generation of the event codes. In one embodiment the apparatus comprises a memory, a processor, logic element and an event generator. The memory is configured to store instructions corresponding to a scheduler and instructions corresponding to a first thread and a second thread. The processor is configured to execute instructions corresponding to the scheduler wherein the scheduler selects a one of the first or second thread wherein the processor executes instructions corresponding to the selected one of the first or second thread. The logic element is configured to receive an identifier corresponding to the selected thread and a received asynchronous event. The logic element produces a concatenated event identifier comprising the thread identifier and the received asynchronous event. | 03-19-2015 |
20150082314 | TASK PLACEMENT DEVICE, TASK PLACEMENT METHOD AND COMPUTER PROGRAM - The task placement device includes: a task set parameter acquisition section which acquires task set parameters including information indicating the dependence relationship among tasks contained in a task set, and a required execution time needed for execution of each task; a first task placement section configured to, for a task which is capable of being executed within a scheduling-anticipated period, determine core allocation, taking into consideration scheduling based on the task set parameters; and a second task placement section configured to, for a task other than the first tasks placed by the first task placement section, determine the core allocation based on the task set parameters. | 03-19-2015 |
20150082315 | DYNAMIC PROGRAM EVALUATION FOR SYSTEM ADAPTATION - A method and apparatus to maintain a plurality of executables for a task in a device are described. Each executable may be capable of performing the task in response to a change in an operating environment of the device. Each executable may be executed to perform a test run of the task. Each execution can consume an amount of power under the changed operating environment in the device. One of the executables may be selected to perform the task in the future based on the amounts of power consumed for the test runs of the task. The selected one executable may require no more power than each of remaining ones of the executables. | 03-19-2015 |
20150089506 | JOB EXTRACTION METHOD, JOB EXTRACTION DEVICE, AND JOB EXTRACTION SYSTEM - A job extraction method, includes: referring to first information indicating an input time, a start time, an execution time, and a number of computation resources to be used, for each of jobs to be executed using one of computation resources; specifying first jobs having a first waiting time and a first start time later than a second start time of a second job having a second input time earlier than the first input time; and extracting, based on second information indicating a time-sequential transition of a number of computation resources not being used, a third job for which a state where computation resources of a number being equal to or greater than the number of computation resources to be used for one of the first jobs are not being used has continued for a first execution time of the first job or longer during the first waiting time. | 03-26-2015 |
20150089507 | INFORMATION PROCESSING SYSTEM, METHOD OF CONTROLLING INFORMATION PROCESSING SYSTEM, AND RECORDING MEDIUM - An information processing system includes a plurality of information processing apparatuses, a management apparatus including a first processor, and configured to manage execution of jobs by the plurality of information processing apparatuses; and a terminal apparatus including a second processor. The first processor is configured to identify an information processing apparatus not executing a job among the plurality of information processing apparatuses, transmit information on the number of identified information processing apparatuses, and upon receiving identification information on at least one job to be executed on the information processing apparatus not executing a job from the terminal apparatus, perform scheduling so that the information processing apparatus not executing a job executes the job. | 03-26-2015 |
20150089508 | COMMUNICATION DEVICE - A communication device communicating in conformance with a prescribed communication standard includes a storage storing at least a first virtual program that includes a program implementing a first function of the communication device and a second virtual program that includes a program implementing a second function of the communication device; an executer configured to successively execute the first and second virtual programs; and a switching controller. The switching controller reads at least a part of either one of the first and second virtual programs from the storage, stores it into a memory of the executer, and executes it in the executer. After completion of the processing of that virtual program, the switching controller deletes at least a part of it from the memory in accordance with the free area in the memory, reads at least a part of the other virtual program from the storage, stores that part into the memory, and executes it in the executer, thereby switching which of the first and second virtual programs is executed in the executer. | 03-26-2015 |
20150095912 | METHODS AND APPARATUS FOR CONTROLLING AFFINITY FOR EXECUTION ENTITIES - In a data processing system that is executing a parent execution entity of an application, the parent execution entity has a first affinity setting. The data processing system enables the parent execution entity to create a worker execution entity that has a second affinity setting without changing the affinity setting of the parent execution entity. Workload for the application may then be performed in parallel by the parent execution entity and the worker execution entity. In one embodiment, to create the worker execution entity with the second affinity setting, the system first creates a delegate execution entity that has the first affinity setting. The system then changes the affinity setting of the delegate execution entity to the second affinity setting. The delegate execution entity then creates the worker execution entity with the second affinity setting. Another embodiment involves a super-delegate execution entity. Other embodiments are described and claimed. | 04-02-2015 |
20150095913 | SYSTEM AND METHOD FOR HOST-ASSISTED BACKGROUND MEDIA SCAN (BMS) - Many storage devices (or drives) include a mechanism, such as a processor, to execute internal maintenance process(es) that maintain data integrity and long-term drive health. One example of such an internal maintenance process is a background media scan (BMS). However, on busy systems, the BMS may not have an opportunity to execute, which can damage long term drive performance. In one embodiment, a method includes sending a command from a host device to a storage device. The storage device can responsively run an internal maintenance process of the storage device. In one embodiment, the internal maintenance process can be an internal maintenance process such as a background media scan. | 04-02-2015 |
20150095914 | GPU DIVERGENCE BARRIER - A device includes a memory, and at least one programmable processor configured to determine, for each warp of a plurality of warps, whether a Boolean expression is true for a corresponding thread of each warp, pause execution of each warp having a corresponding thread for which the expression is true, determine a number of active threads for each of the plurality of warps for which the expression is true, sort the plurality of warps for which the expression is true based on the number of active threads in each of the plurality of warps, swap thread data of an active thread of a first warp of the plurality of warps with thread data of an inactive thread of a second warp of the plurality of warps, and resume execution of the at least one of the plurality of warps for which the expression is true. | 04-02-2015 |
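
A simplified model of the compaction idea above: sort warps by active-thread count, then swap active threads from sparse warps into inactive slots of dense warps so that fully drained warps can skip the divergent code. Warp size and the data layout are simplified for illustration:

```python
def compact(warps):
    """warps: list of warps, each a list of (active, payload) thread slots."""
    warps.sort(key=lambda w: sum(a for a, _ in w), reverse=True)
    dense, sparse = 0, len(warps) - 1
    while dense < sparse:
        for i, (active, _) in enumerate(warps[dense]):
            if not active:
                # Pull an active thread out of the sparsest warp.
                for j, (a2, _) in enumerate(warps[sparse]):
                    if a2:
                        warps[dense][i], warps[sparse][j] = \
                            warps[sparse][j], warps[dense][i]
                        break
        if not any(a for a, _ in warps[sparse]):
            sparse -= 1    # fully drained warp: it can skip the divergent code
        else:
            dense += 1

warps = [[(True, "t0"), (False, "t1"), (True, "t2"), (False, "t3")],
         [(True, "t4"), (False, "t5"), (False, "t6"), (False, "t7")]]
compact(warps)
print(warps)   # three active threads packed into the first warp
```
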
20150095915 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - There is provided an information processing apparatus including a plurality of application frameworks upon which applications are executed, and a decision unit configured to control a switching of operable states of the plurality of application frameworks. | 04-02-2015 |
20150095916 | INFORMATION PROCESSING SYSTEM AND CONTROL METHOD OF INFORMATION PROCESSING SYSTEM - The job management apparatus includes the following units. An information acquiring unit acquires information related to a job that is submitted in a predetermined time period. A weight calculating unit determines, on the basis of the information related to the job, the degree of influence for each shape of the job. A target shape determining unit determines, as pre-placement target shapes, the shapes of a predetermined number of jobs in descending order of the degree of influence. A pre-placement table computing unit determines, on the basis of the pre-placement target shapes and the degree of influence, a pre-placement of a job, that is, a way of placing a job onto one of the computing nodes. A placement determining unit allocates, when a submitted job matches one of the pre-placement target shapes, the submitted job to the one of the computing nodes in accordance with the pre-placement. | 04-02-2015 |
20150100963 | METHOD AND SYSTEM FOR EFFICIENT EXECUTION OF ORDERED AND UNORDERED TASKS IN MULTI-THREADED AND NETWORKED COMPUTING - The present disclosure provides methods for concurrently executing ordered and unordered tasks using a plurality of processing units. Certain embodiments of the present disclosure may store the ordered and unordered tasks in the same processing queue. Further, processing tasks in the processing queue may comprise concurrently preprocessing ordered tasks, thereby reducing the amount of processing unit idle time and improving load balancing across processing units. Embodiments of the present disclosure may also dynamically manage the number of processing units based on a rate of unordered tasks being received in the processing queue, a processing rate of unordered tasks, a rate of ordered tasks being received in the processing queue, a processing rate of ordered tasks, and/or the number of sets of related ordered tasks in the processing queue. Also provided are related systems and non-transitory computer-readable media. | 04-09-2015 |
20150100964 | APPARATUS AND METHOD FOR MANAGING MIGRATION OF TASKS BETWEEN CORES BASED ON SCHEDULING POLICY - Provided are an apparatus and method for managing migration of tasks between cores based on a scheduling policy, which can provide optimal environments utilizing multiple cores to the tasks with various characteristics. It is possible to schedule tasks in consideration of different characteristics. In particular, it is possible to continuously secure the performance of the multi-core system in an environment for operating a plurality of application programs. It is also possible to optimally utilize all cores of the multi-core system, thereby flexibly handling dynamic variation in characteristics of tasks. | 04-09-2015 |
20150106816 | PERFORMANCE MEASUREMENT OF HARDWARE ACCELERATORS - Performance measurement of hardware accelerators, where one or more computer processors are operably coupled to at least one hardware accelerator, and a computer memory is operatively coupled to the one or more computer processors, including operating by the one or more processors the accelerator at saturation, submitting data processing tasks by the processors to the accelerator at a rate that saturates the data processing resources of the accelerator, causing the accelerator to decline at least some of the submitted tasks; and while the accelerator is operating at saturation, measuring by the processors accelerator performance according to a period of time during which the accelerator accepts a plurality of submitted tasks. | 04-16-2015 |
20150106817 | MOBILE APPARATUS FOR EXECUTING SENSING FLOW FOR MOBILE CONTEXT MONITORING, METHOD OF EXECUTING SENSING FLOW USING THE SAME, METHOD OF CONTEXT MONITORING USING THE SAME AND CONTEXT MONITORING SYSTEM INCLUDING THE SAME - A mobile apparatus includes a sensing handler and a processing handler. The sensing handler includes a plurality of sensing operators. The sensing operator senses data during a sensing time corresponding to a size of C-FRAME and stops sensing during a skip time. The C-FRAME is a sequence of the sensed data to produce a context monitoring result. The processing handler includes a plurality of processing operators. The processing operator executes the sensed data of the sensing operator in a unit of F-FRAME. The F-FRAME is a sequence of the sensed data to execute a feature extraction operation. | 04-16-2015 |
20150106818 | DYNAMICALLY LOADING GRAPH-BASED COMPUTATIONS - Processing data includes: receiving units of work that each include one or more work elements, and processing a first unit of work using a first compiled dataflow graph. | 04-16-2015 |
20150113536 | METHOD AND SYSTEM FOR REGULATION AND CONTROL OF A MULTI-CORE CENTRAL PROCESSING UNIT - A method and system for regulation and control of a multi-core CPU includes receiving an operating command associated with regulation and control of the multi-core CPU, responding to the operating command, and performing regulation and control on the CPU cores of the multi-core CPU via a bottom layer core interface according to a preset CPU regulation and control mode. Thereby, a working state of every CPU core of a multi-core CPU is regulated and controlled, processing capability of the multi-core CPU is improved, and energy and electric power are saved. | 04-23-2015 |
20150121380 | LAUNCHING AND MANAGING UNATTENDED APPLICATION PROGRAMS - Provided are techniques for launching and managing an unattended application program. The application program is launched in background mode. In response to determining that an exit command has been received, an exit command indicator is set to indicate that the exit command has been received and a notification is sent to wake up a blocked main thread of the launched application program. | 04-30-2015 |
20150121381 | CHANGE-REQUEST ANALYSIS - A method and associated systems for analyzing a change request of a project that involves an IT system, where the IT system contains IT artifacts that have predefined relationships. One or more processors obtain a change request; use information contained in the change request to select an applicable decomposition agent; use information in the selected decomposition agent to decompose the change request into a set of component sub-change requests; correlate at least one of the sub-change requests with one of the IT artifacts; and display the sub-change requests. In alternate implementations, selecting the applicable decomposition agent may require additional user input. | 04-30-2015 |
20150121382 | CONCURRENCY CONTROL MECHANISMS FOR HIGHLY MULTI-THREADED SYSTEMS - A system for governing the spawning of a thread from a parent thread by an application in a processor is provided. The system includes one or more registers or memory locations that store values associated with remaining thread credits with respect to a thread, a policy passed by the thread's parent, and a plurality of policy values associated with the thread's parent. A first multiplexor module selects from the one or more registers the policy used to spawn a thread, and makes the policy available for execution. A second multiplexor module selects one or more of the policy values used in a spawn process whose policy was selected by the output of the first multiplexor module, the second multiplexor module outputs a first signal indicative of the selected policy value to accompany the selected policy, which may be given to the child thread as its initial spawn count when the policy so indicates. A third multiplexor module selects either the first signal or a null where the selected policy value of the first signal is used to update the remaining thread credits of the thread's parent. | 04-30-2015 |
20150121383 | Application Heartbeat Period Adjusting Method and Apparatus, and Terminal - Embodiments of the present invention disclose an application heartbeat period adjusting method and apparatus, and a terminal, and in the embodiments, it is determined, according to an identifier of an application, that the application is in a heartbeat adjustment blacklist. A first heartbeat period of the application is adjusted to a second heartbeat period according to a preset trigger heartbeat period. The heartbeat adjustment blacklist includes an identifier of an application on which a heartbeat period adjustment needs to be performed, the first heartbeat period of the application is an original heartbeat period of the application, the second heartbeat period is a heartbeat period, which is adjusted according to the preset trigger heartbeat period, of the application, and the preset trigger heartbeat period is an adjustment period according to which the first heartbeat period is adjusted. | 04-30-2015 |
20150121384 | COMMUNICATION TERMINAL AND COMMUNICATION CONTROL METHOD - A communication terminal has a communication circuit, a processor, and a storing module operable to store a plurality of application programs. The terminal includes: a table in which a plurality of timer times are registered; a notifying module operable to notify expiration when a timer time registered in the table is reached; an executing module operable to execute at least two or more application programs when the expiration is notified by the notifying module; and an enabling module operable to enable the communication circuit when the executing module executes an application program. Communication is performed by the at least two or more application programs while the communication circuit is enabled. | 04-30-2015 |
20150121385 | SERVICE SCHEDULING METHOD AND APPARATUS, AND NETWORK DEVICE - A service scheduling method, including: obtaining scheduling information of multiple services deployed on a network device; generating scheduling logic according to the scheduling information, invoking, according to the generated scheduling logic, each processing module to process a packet received by the network device, and invoking, according to the scheduling point information of each service, a corresponding service at a scheduling point of each service. Accordingly, the embodiments of the present invention also provide a service scheduling apparatus and a network device. In the embodiments of the present invention, by using the foregoing technical solutions, a conventional packet processing process is segmented in detail, multiple service scheduling points are defined, and a required service is flexibly scheduled according to a packet processing result, which avoids repeated scheduling, improves flexibility and performance of service scheduling, and increases competitiveness of a network device. | 04-30-2015 |
20150128142 | VIRTUAL RETRY QUEUE - A starvation mode is entered and a particular dependency of a first request in a retry queue is identified. The particular dependency is determined to be acquired and the first request is retried based on acquisition of the particular dependency. | 05-07-2015 |
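A minimal single-threaded sketch of the starvation-mode path in C++. The types `Request` and `RetryQueue`, and the acquire step, are hypothetical stand-ins: instead of blindly re-issuing every queued entry, the queue identifies the head request's particular dependency, acquires it, and only then retries.

    #include <deque>
    #include <iostream>
    #include <set>

    // Hypothetical types; a single-threaded model of the starvation-mode path.
    struct Request { int id; int dependency; };

    struct RetryQueue {
        std::deque<Request> pending;   // requests that previously failed
        std::set<int> acquired;        // dependencies currently held

        // In starvation mode, stop blindly re-issuing entries: identify the
        // head request's particular dependency, acquire it, then retry.
        void drain_in_starvation_mode() {
            while (!pending.empty()) {
                const Request& head = pending.front();
                acquired.insert(head.dependency);   // acquire the dependency
                std::cout << "retrying request " << head.id
                          << " after acquiring dependency " << head.dependency << "\n";
                pending.pop_front();                // retry succeeds
            }
        }
    };

    int main() {
        RetryQueue q;
        q.pending = {{1, 42}, {2, 7}};
        q.drain_in_starvation_mode();
    }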
20150128143 | REALIZING JUMPS IN AN EXECUTING PROCESS INSTANCE - A method for realizing jumps in an executing process instance can be provided. The method can include suspending an executing process instance, determining a current wavefront for the process instance and computing both a positive wavefront difference for a jump target relative to the current wavefront and also a negative wavefront difference for the jump target relative to the current wavefront. The method also can include removing activities from consideration in the process instance and also adding activities for consideration in the process instance both according to the computed positive wavefront difference and the negative wavefront difference, creating missing links for the added activities, and resuming execution of the process instance at the jump target. | 05-07-2015 |
20150135182 | SYSTEM AND METHOD OF DATA PROCESSING - A data processing apparatus, a data processing method and a computer program product are disclosed. In an embodiment, the data processing apparatus comprises: a processor comprising a plurality of parallel lanes for parallel processing of sets of threads, each lane comprising a plurality of pipelined stages, the pipelined stages of each lane being operable to process instructions from the sets of threads; and scheduling logic operable to schedule instructions for processing by the lanes, the scheduling logic being operable to identify that one of the sets of threads being processed is to be split into a plurality of sub-sets of threads and to schedule at least two of the plurality of sub-sets of threads for processing by different pipelined stages concurrently. | 05-14-2015 |
20150143377 | DYNAMIC SCHEDULING OF TASKS FOR COLLECTING AND PROCESSING DATA USING JOB CONFIGURATION DATA - A scheduler manages execution of a plurality of data-collection jobs, assigns individual jobs to specific forwarders in a set of forwarders, and generates and transmits tokens (e.g., pairs of data-collection tasks and target sources) to assigned forwarders. The forwarder uses the tokens, along with stored information applicable across jobs, to collect data from the target source and forward it onto an indexer for processing. For example, the indexer can then break a data stream into discrete events, extract a timestamp from each event and index (e.g., store) the event based on the timestamp. The scheduler can monitor forwarders' job performance, such that it can use the performance to influence subsequent job assignments. Thus, data-collection jobs can be efficiently assigned to and executed by a group of forwarders, where the group can potentially be diverse and dynamic in size. | 05-21-2015 |
20150150011 | Self-splitting of workload in parallel computation - In a method for distributing execution of a problem to a plurality of K (wherein K≧2) workers, a pair of identifiers (k, K) is transmitted to each worker, wherein k uniquely identifies each worker and wherein K indicates the total number of workers. Each worker applies a first rule deterministically and autonomously without communicating between the workers. The first rule is the same for each worker. The first rule splits the problem in m parts, wherein m≧K. Each worker applies a second rule deterministically and autonomously without communicating between the workers. The second rule assigns each of the m parts to one of the K workers. The second rule is the same for each worker. Each worker processes exactly the parts that have been assigned thereto, thereby generating a unit of output. Each of the units of output from each worker is merged. | 05-28-2015 |
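Because both rules are deterministic and identical on every worker, no coordination messages are needed. A C++ sketch under invented rules: rule 1 splits a summation problem into m = N parts, and rule 2 assigns part i to worker i % K.

    #include <iostream>
    #include <numeric>
    #include <vector>

    // Hypothetical rules; the scheme only requires that both rules be
    // deterministic and identical on every worker.
    int main() {
        const int K = 3;                      // total number of workers
        const int N = 10;                     // problem size: sum 0..N-1
        std::vector<long> units(K, 0);

        for (int k = 0; k < K; ++k) {         // each worker runs independently
            // Rule 1: split the problem into m >= K parts (here m = N parts).
            // Rule 2: part i belongs to worker i % K.
            for (int i = 0; i < N; ++i)
                if (i % K == k)
                    units[k] += i;            // process only the parts assigned to k
        }
        // Merge the per-worker units of output.
        std::cout << std::accumulate(units.begin(), units.end(), 0L) << "\n"; // 45
    }

Any pair of rules works, as long as every worker computes the same split and the same assignment without communicating.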
20150150012 | CROSS-PLATFORM WORKLOAD PROCESSING - According to one aspect of the present disclosure, a system and technique for workload processing includes a host having a processor unit and a memory. A scheduler is executable by the processor unit to: receive a request to process a workload; access historical processing data to determine execution statistics associated with previous processing requests; determine whether the data of the workload is available for processing; in response to determining that the data is available for processing, determine whether a process for the workload is available; in response to determining that the process is available, determine resource availability on a computing platform for processing the workload; determine whether excess capacity is available on the computing platform based on the resource availability and the execution statistics; and in response to determining that excess capacity exists on the computing platform, initiate processing of the workload on the computing platform. | 05-28-2015 |
20150150013 | REDUCING JOB CREDENTIALS MANAGEMENT LOAD - A method, system, and computer program product for reducing job credentials management load are provided in the illustrative embodiments. A determination is made whether credential data submitted with a job matches second credential data stored in a repository, the credential data comprising a set of attributes. Responsive to the credential data matching the second credential data, a reference to the second credential data is associated with the job. The second credential data is updated to enable the job for execution. The job is forwarded with the reference to a receiver application, wherein the reference provides the receiver application an authorization to execute the job. | 05-28-2015 |
20150150014 | Associating a Task Completion Step of a Task with a Related Task - Methods and apparatus related to associating a task completion step with one or more tasks. A task group is determined based on similarity between the tasks of the task group, a task completion step of one of the tasks of the task group is identified, and one or more of the other tasks of the task group are associated with the task completion step. In some implementations, the task group is determined based on similarity between entities that are associated with the tasks of the task group. In some implementations, the task group is determined based on textual representations that are associated with the tasks of the task group. | 05-28-2015 |
20150293784 | Model Driven Optimization of Annotator Execution in Question Answering System - Mechanisms are provided for scheduling execution of pre-execution operations of an annotator of a question and answer (QA) system pipeline. A model is used to represent a system of annotators of the QA system pipeline, where the model represents each annotator as a node having one or more performance parameters indicating a performance of an execution of an annotator corresponding to the node. For each annotator in a set of annotators of the system of annotators, an effective response time for the annotator is calculated based on the performance parameters. A pre-execution start interval for a first annotator based on an effective response time of a second annotator is calculated where execution of the first annotator is sequentially after execution of the second annotator. Execution of pre-execution operations associated with the first annotator is scheduled based on the calculated pre-execution start interval for the first annotator. | 10-15-2015 |
20150293785 | PROCESSING ACCELERATOR WITH QUEUE THREADS AND METHODS THEREFOR - Techniques related to a processing accelerator with queue threads are described herein. | 10-15-2015 |
20150293787 | Method For Scheduling With Deadline Constraints, In Particular In Linux, Carried Out In User Space - A method for scheduling tasks with deadline constraints, based on a model of independent periodic tasks and carried out in user space by means of POSIX APIs, is provided. | 10-15-2015 |
20150293788 | Scheduling of Global Voltage/Frequency Scaling Switches Among Asynchronous Dataflow Dependent Processors - Task execution among a plurality of processors that are configured to operate concurrently at a same global Voltage/Frequency (VF) level is controlled by using a global power manager to control VF switching from one VF level to another VF level, the same current VF level governing VF settings of each processor. Each of the processors controls whether it will wait for a VF switch from a current VF level to a next VF level prior to enabling execution of a next scheduled task for the one of the processors, with the decision being based on whether a current VF level is higher than the next scheduled VF level. The global power manager performs VF level switching at least based on a timing schedule, and in some but not all embodiments, also on whether all processors indicate that they are waiting for a VF level switch. | 10-15-2015 |
20150293795 | METHOD OF SOA PERFORMANCE TUNING - Systems and methods of SOA performance tuning are provided. In accordance with an embodiment, one such method can comprise monitoring a plurality of processing stages, calculating a processing speed for each of the processing stages, and tuning a slowest processing stage of the plurality of processing stages. | 10-15-2015 |
20150301835 | DECOUPLING BACKGROUND WORK AND FOREGROUND WORK - Systems, methods, and apparatus for separately loading and managing foreground work and background work of an application. In some embodiments, a method is provided for use by an operating system executing on at least one computer. The operating system may identify at least one foreground component and at least one background component of an application, and may load the at least one foreground component for execution separately from the at least one background component. For example, the operating system may execute the at least one foreground component without executing the at least one background component. In some further embodiments, the operating system may use a specification associated with the application to identify at least one piece of computer executable code implementing the at least one background component. | 10-22-2015 |
20150301854 | APPARATUS AND METHOD FOR HARDWARE-BASED TASK SCHEDULING - Provided are a method and apparatus for task scheduling based on hardware. The method for task scheduling in a scheduler accelerator based on hardware includes: managing task related information based on tasks in a system; updating the task related information in response to a request from a CPU; selecting a candidate task to be run next after a currently running task for each CPU on the basis of the updated task related information; and providing the selected candidate task to each CPU. The scheduler accelerator supports the method for task scheduling based on hardware. | 10-22-2015 |
20150301855 | PREFERENTIAL CPU UTILIZATION FOR TASKS - In a distributed server storage environment, a set of like tasks to be performed is organized into a first group, and a last used processing group associated with the like tasks is stored. Upon a subsequent dispatch, the last used processing group is compared to other processing groups and the tasks are assigned to a processing group based upon a predetermined threshold. | 10-22-2015 |
20150301861 | INTEGRATED MONITORING AND CONTROL OF PROCESSING ENVIRONMENT - A method of managing components in a processing environment is provided. The method includes monitoring (i) a status of each of one or more computing devices, (ii) a status of each of one or more applications, each application hosted by at least one of the computing devices, and (iii) a status of each of one or more jobs, each job associated with at least one of the applications; determining that one of the status of one of the computing devices, the status of one of the applications, and the status of one of the jobs is indicative of a performance issue associated with the corresponding computing device, application, or job, the determination being made based on a comparison of a performance of the computing device, application, or job and at least one predetermined criterion; and enabling an action to be performed associated with the performance issue. | 10-22-2015 |
20150301872 | PROCESS COOPERATION METHOD, PROCESS COOPERATION PROGRAM, AND PROCESS COOPERATION SYSTEM - A process cooperation method includes storing in a first storage device a first process result as a result of execution of a first process by a first processor and transmitting the first process result to a second processor, storing in a second storage device a second process result as a result of execution of a second process by the second processor based on the first process result received from the first processor, and transmitting the second process result to a third processor, and moreover transmitting the second process result and an identifier identifying the third processor to the first processor, and storing in the first storage device the second process result and the identifier received from the second processor by the first processor in association with the first process result. | 10-22-2015 |
20150309838 | REDUCTION OF PROCESSING DUPLICATES OF QUEUED REQUESTS - Aspects of the present invention disclose a method, computer program product, and system for managing queued requests. The method includes one or more processors accessing a queue that includes a plurality of read requests. The method further includes one or more processors identifying read requests in the plurality of read requests that are identical. The method further includes one or more processors determining whether grouping the identical read requests is an efficient use of one or more resources. In an additional aspect, the method further includes, responsive to determining that grouping the identical read requests is an efficient use of one or more resources, one or more processors grouping the identical read requests together for processing as a single request. | 10-29-2015 |
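A rough C++ illustration of the grouping decision, where a read request is reduced to just the key it reads and the efficiency test is simplified to "more than one waiter" (the patent weighs actual resource use):

    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    // Hypothetical representation: a read request is just the key it reads.
    int main() {
        std::vector<std::string> queue = {"blkA", "blkB", "blkA", "blkA", "blkC"};

        // Identify identical read requests and count duplicates.
        std::map<std::string, int> groups;
        for (const auto& r : queue) ++groups[r];

        for (const auto& [key, n] : groups) {
            // Simplified efficiency test: grouping only pays off when a key
            // is requested more than once.
            if (n > 1)
                std::cout << key << ": process once, fan out to " << n << " waiters\n";
            else
                std::cout << key << ": process individually\n";
        }
    }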
20150317184 | Systems and Methods For Processing Drilling Data - Systems and methods for processing drilling data. One embodiment provides a method comprising building user-designed contexts (which can be designated as built-in contexts) for drilling structures. The method also comprises orchestrating module execution within the user-designed contexts. The method further comprises providing data from the user-designed contexts to such modules via an interface. Some methods include monitoring drilling data to detect events (for instance departure from a pseudolog) and orchestrating module execution responsive thereto. The method can include exposing the orchestration of the execution of the module instances as a service. Moreover, some embodiments provide extra-contextual application program interfaces. In addition, or in the alternative, some embodiments schedule the orchestration of the modules based on declarations related to the inputs and/or outputs of the modules. | 11-05-2015 |
20150324221 | TECHNIQUES TO MANAGE VIRTUAL CLASSES FOR STATISTICAL TESTS - Techniques to manage virtual classes for statistical tests are described. An apparatus may comprise a simulated data component to generate simulated data for a statistical test, statistics of the statistical test based on parameter vectors to follow a probability distribution, a statistic simulator component to simulate statistics for the parameter vectors from the simulated data with a distributed computing system comprising multiple nodes each having one or more processors capable of executing multiple threads, the simulation to occur by distribution of portions of the simulated data across the multiple nodes of the distributed computing system, and a distributed control engine to control task execution on the distributed portions of the simulated data on each node of the distributed computing system with a virtual software class arranged to coordinate task and sub-task operations across the nodes of the distributed computing system. Other embodiments are described and claimed. | 11-12-2015 |
20150324230 | MULTI-RESOURCE TASK SCHEDULING METHOD - A multi-resource task scheduling method includes: classifying concurrent packets to distinguish packets with deadlines from packets without deadlines; ranking the packets with deadlines using the EDF algorithm and ranking the packets without deadlines using the SJF algorithm; estimating a virtual start time and a virtual completion time according to the ranking results; determining whether the packets with deadlines can be scheduled successfully; if yes, determining whether the packets without deadlines include a packet that can be scheduled before the packets with deadlines while shortening the average completion time; and if yes, scheduling that packet in advance. The method can greatly shorten the average completion time of all tasks under multi-resource circumstances. | 11-12-2015 |
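A compact C++ sketch of the feasibility test with a hypothetical `Packet` type: deadline packets are ranked by EDF, no-deadline packets by SJF, and the shortest no-deadline packet is promoted only if the virtual completion times show every deadline is still met.

    #include <algorithm>
    #include <iostream>
    #include <vector>

    struct Packet { int id; double len; double deadline; bool hasDeadline; };

    int main() {
        std::vector<Packet> dl  = {{1, 4, 9, true}, {2, 2, 5, true}};
        std::vector<Packet> ndl = {{3, 1, 0, false}, {4, 6, 0, false}};

        // EDF for deadline packets, SJF for the rest.
        std::sort(dl.begin(), dl.end(),
                  [](auto& a, auto& b){ return a.deadline < b.deadline; });
        std::sort(ndl.begin(), ndl.end(),
                  [](auto& a, auto& b){ return a.len < b.len; });

        // Virtual completion times if the shortest no-deadline packet goes first.
        double t = ndl.front().len;
        bool feasible = true;
        for (auto& p : dl) { t += p.len; feasible = feasible && (t <= p.deadline); }

        if (feasible)
            std::cout << "schedule packet " << ndl.front().id
                      << " ahead of the deadline packets\n";
        else
            std::cout << "deadline packets must run first\n";
    }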
20150331710 | USING QUEUES CORRESPONDING TO ATTRIBUTE VALUES ASSOCIATED WITH UNITS OF WORK TO SELECT THE UNITS OF WORK TO PROCESS - Provided are a computer program product, system, and method for using queues corresponding to attribute values associated with units of work to select the units of work to process. A plurality of queues for each of a plurality of attribute types of attributes are associated with the units of work to process, wherein there are queues for different possible attribute values for each of the attribute types. A unit of work to process is received. A determination is made, for each of the attribute types, of at least one of the queues corresponding to at least one attribute value for the attribute type associated with the received unit of work. A record for the received unit of work is added to each of the determined queues. | 11-19-2015 |
20150331712 | CONCURRENTLY PROCESSING PARTS OF CELLS OF A DATA STRUCTURE WITH MULTIPLE PROCESSES - Provided are a computer program product, system, and method for concurrently processing parts of cells of a data structure with multiple processes. Information is provided to indicate a partitioning of the cells of the data structure into a plurality of parts, each part having a cursor pointing to a cell in the part. Processes concurrently process different parts of the data structure by performing: determining from the cursor for the part one of the cells in the part to process; processing the cells from the cursor to determine whether to process the unit of work corresponding to the cell; and setting the cursor to identify one of the cells from which processing is to continue in a subsequent iteration in response to processing the units of work for a plurality of the processed cells. | 11-19-2015 |
20150339154 | FRAMEWORK FOR AUTHORING DATA LOADERS AND DATA SAVERS - Implementing static loaders and savers for the transfer of local and distributed data containers to and from storage systems can be difficult because there are so many different configurations of output formats, data containers and storage systems. Described herein is an extensible componentized data transfer framework for performant and scalable authoring of data loaders and data savers. Abstracted local and distributed workflows drive selection of plug-ins that can be composed by the framework into particular local or distributed scenario loaders and savers. Reusability and code sparsity are maximized. | 11-26-2015 |
20150339160 | CONTINUOUS OPTIMIZATION OF ARCHIVE MANAGEMENT SCHEDULING BY USE OF INTEGRATED CONTENT-RESOURCE ANALYTIC MODEL - A method and associated system for continuously optimizing data archive management scheduling. A flow network is modeled, which creates vertexes organized in multiple levels and creating multiple edges sequentially connecting the vertexes of the multiple levels. The multiple levels consist of N+1 levels denoted as LEVEL | 11-26-2015 |
20150339173 | HARDWARE SYNCHRONIZATION BARRIER BETWEEN PROCESSING UNITS - A method for synchronizing multiple processing units, comprises the steps of configuring a synchronization register in a target processing unit so that its content is overwritten only by bits that are set in words written in the synchronization register; assigning a distinct bit position of the synchronization register to each processing unit; and executing a program thread in each processing unit. When the program thread of a current processing unit reaches a synchronization point, the method comprises writing in the synchronization register of the target processing unit a word in which the bit position assigned to the current processing unit is set, and suspending the program thread. When all the bits assigned to the processing units are set in the synchronization register, the suspended program threads are resumed. | 11-26-2015 |
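The OR-only register semantics can be modeled in software with an atomic fetch_or. In the C++ sketch below (an invented four-unit configuration), each thread writes a word with only its own assigned bit set at its synchronization point and suspends; the arrival that completes the bitmap resumes everyone.

    #include <atomic>
    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <thread>
    #include <vector>

    // Software model of the OR-only synchronization register (sizes invented).
    constexpr unsigned kAll = 0b1111;          // one bit per processing unit
    std::atomic<unsigned> sync_reg{0};
    std::mutex m;
    std::condition_variable cv;

    void worker(unsigned id) {
        // ... program thread runs until it reaches its synchronization point ...
        std::unique_lock<std::mutex> lk(m);
        sync_reg.fetch_or(1u << id);           // write a word with only our bit set
        if (sync_reg.load() == kAll)
            cv.notify_all();                   // last arrival resumes everyone
        else
            cv.wait(lk, []{ return sync_reg.load() == kAll; });  // suspend
        std::cout << "thread " << id << " resumed\n";
    }

    int main() {
        std::vector<std::thread> ts;
        for (unsigned id = 0; id < 4; ++id) ts.emplace_back(worker, id);
        for (auto& t : ts) t.join();
    }

The OR-only write rule is what makes the hardware version race-free: a late arrival can never clear a bit that an earlier arrival has already set.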
20150347131 | FAST TRANSITIONS FOR MASSIVELY PARALLEL COMPUTING APPLICATIONS - Embodiments relate to facilitating quick and graceful transitions for massively parallel computing applications. A computer-implemented method for facilitating termination of a plurality of threads of a process is provided. The method maintains information about open communications between one or more of the threads of the process and one or more of other processes. In response to receiving a command to terminate one or more of the threads of the process, the method completes the open communications on behalf of the threads after terminating the threads. | 12-03-2015 |
20150347181 | RESOURCE MANAGEMENT WITH DYNAMIC RESOURCE POLICIES - A method and apparatus of a device for resource management by using a hierarchy of resource management techniques with dynamic resource policies is described. The device terminates several misbehaving application programs when available memory on the device is running low. Each of those misbehaving application programs consumes more memory space than a memory consumption limit assigned to the application program. If available memory on the device is still low after terminating those misbehaving application programs, the device further sends memory pressure notifications to all application programs. If available memory on the device is still running low after sending the memory pressure notifications, the device further terminates background, idle, and suspended application programs. The device further terminates foreground application programs when available memory on the device is still low after terminating the background, idle, and suspended application programs. | 12-03-2015 |
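A schematic C++ rendering of the four-tier escalation; `App`, `memory_low`, and the kill/notify actions are placeholders. Each tier runs only if available memory is still low after the previous tier.

    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical app model for the four-tier policy.
    struct App { std::string name; long used, limit; bool background, foreground; };

    void relieve_pressure(std::vector<App>& apps, bool (*memory_low)()) {
        for (auto& a : apps)                       // tier 1: over-limit apps
            if (a.used > a.limit) std::cout << "kill misbehaving " << a.name << "\n";
        if (!memory_low()) return;
        for (auto& a : apps)                       // tier 2: ask apps to shed memory
            std::cout << "memory-pressure notification -> " << a.name << "\n";
        if (!memory_low()) return;
        for (auto& a : apps)                       // tier 3: background/idle/suspended
            if (a.background) std::cout << "kill background " << a.name << "\n";
        if (!memory_low()) return;
        for (auto& a : apps)                       // tier 4: last resort, foreground
            if (a.foreground) std::cout << "kill foreground " << a.name << "\n";
    }

    int main() {
        std::vector<App> apps = {{"maps", 900, 500, false, true},
                                 {"mail", 200, 500, true,  false}};
        relieve_pressure(apps, []{ return true; });   // pretend memory stays low
    }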
20150347182 | COMPUTER PRODUCT, EXECUTION-FLOW-CREATION AIDING APPARATUS, AND EXECUTION-FLOW-CREATION AIDING METHOD - An execution-flow-creation aiding apparatus obtains a written operation procedure that represents an operation procedure of a series of tasks involved in operation work. The execution-flow-creation aiding apparatus refers to component information and based on a result of comparing the task name of a task included among the series of tasks involved in operation work and the component name of a component, associates the task and the component. The execution-flow-creation aiding apparatus selects among the series of tasks, a second task that is not associated with a component and that is immediately before or after a first task associated with a given component. The execution-flow-creation aiding apparatus refers to the component information and based on a result of comparing the task name of the selected second task and the variable name of a variable provided when the given component is executed, associates the second task with the given component. | 12-03-2015 |
20150347185 | EXPLICIT BARRIER SCHEDULING MECHANISM FOR PIPELINING OF STREAM PROCESSING ALGORITHMS - A method for pipelined data stream processing of packets includes determining a task to be performed on each packet of a data stream, the task having a plurality of task portions including a first task portion. Determining the first task portion is to process a first packet. In response to determining a first storage location stores a first barrier indicator, enabling the first task portion to process the first packet and storing a second barrier indicator at the first location. Determining the first task portion is to process a second next-in-order packet. In response to determining the first location stores the second barrier indicator, preventing the first task portion from processing the second packet. In response to a first barrier clear indicator, storing the first barrier indicator at the first location, and in response, enabling the first task portion to process the second packet. | 12-03-2015 |
20150347187 | DECENTRALIZED PROCESSING OF WORKER THREADS - One or more techniques and/or systems are provided for managing one or more worker threads. For example, a utility list queue may be populated with a set of work item entries for execution. A set of worker threads may be initialized to execute work item entries within the utility list queue. In an example, a worker thread may be instructed to operate in a decentralized manner, such as without guidance from a timer manager thread. The worker thread may be instructed to execute work item entries that are not assigned to other worker threads and that are expired (e.g., ready for execution). The worker thread may transition into a sleep state if the utility list queue does not comprise at least one work item entry that is unassigned and expired. | 12-03-2015 |
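One way to picture the decentralized claim-and-run loop in C++, with a hypothetical `WorkItem` entry: a worker atomically claims any entry that is both unassigned and expired, with no timer manager thread directing it, and reports whether it should transition into its sleep state.

    #include <atomic>
    #include <chrono>
    #include <iostream>
    #include <vector>

    using Clock = std::chrono::steady_clock;

    // Hypothetical work-item entry for the utility list queue.
    struct WorkItem {
        Clock::time_point expiry;              // when the item becomes runnable
        std::atomic<bool> claimed{false};      // set by whichever worker takes it
    };

    // One decentralized pass: claim and run anything unassigned and expired.
    // Returns false when the worker should transition into its sleep state.
    bool run_expired(std::vector<WorkItem>& queue) {
        bool ran = false;
        auto now = Clock::now();
        for (auto& w : queue) {
            bool expected = false;
            if (w.expiry <= now && w.claimed.compare_exchange_strong(expected, true)) {
                std::cout << "executing work item\n";   // no timer manager involved
                ran = true;
            }
        }
        return ran;
    }

    int main() {
        std::vector<WorkItem> queue(3);
        for (auto& w : queue) w.expiry = Clock::now();  // all already expired
        if (!run_expired(queue)) std::cout << "sleep\n";
    }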
20150355939 | INVERSION OF CONTROL FOR EXECUTABLE EXTENSIONS IN A RUN-TIME ENVIRONMENT - A system, method, and non-transitory computer readable medium implemented as programming on a suitable computing device, the system for inversion of control of executable extensions including a run-time environment configured to push data to one or a plurality of extensions, wherein said one or plurality of extensions are configured to comprise one or a plurality of signatures, wherein said one or a plurality of extensions are compilable, designable, and testable outside of the run-time environment, and wherein the run-time environment may be configured to accept an extension and to push data to that extension as per said one or a plurality of signatures. | 12-10-2015 |
20150355941 | INFORMATION PROCESSING DEVICE AND METHOD FOR CONTROLLING INFORMATION PROCESSING DEVICE - An information processing device includes arithmetic processing devices, a cooling device, and a job assignment device. Each of the arithmetic processing devices is configured to perform a job. The cooling device is connected to the arithmetic processing devices. The cooling device includes a circulation unit, a cooling unit, and an adjustment unit. The circulation unit is configured to circulate refrigerant through a supply route. The refrigerant absorbs heat generated by the arithmetic processing devices. The cooling unit is configured to cool the refrigerant circulated by the circulation unit. The adjustment unit is configured to adjust, in response to a temperature of the refrigerant, a cooling capacity of the cooling unit to cool the refrigerant. The job assignment device includes a processor configured to control, on the basis of cooling capacity information indicating the cooling capacity, the assignment of jobs to the arithmetic processing devices. | 12-10-2015 |
20150370597 | INFERRING PERIODS OF NON-USE OF A WEARABLE DEVICE - A wearable computing device is described that predicts, based on movement detected, over time, by the wearable computing device, one or more future periods of time during which the wearable computing device will not be used. Responsive to determining that the wearable computing device is not being used at a current time, the wearable computing device determines whether the current time coincides with at least one period of time from the one or more future periods of time. Responsive to determining that the current time coincides with the at least one period of time, the wearable computing device performs an operation. | 12-24-2015 |
20150370599 | PROCESSING TASKS IN A DISTRIBUTED SYSTEM - Embodiments of the present application relate to a method, apparatus, and system for processing a task in a distributed system. The method includes, in response to being triggered to start a task and before processing the task, determining, by a task processor in a distributed system of a plurality of task processors, a vital status of the task. In the event that the vital status of the task is set to alive, determining not to process the task, and in the event that the vital status of the task is set to dead, updating the vital status of the task so as to be set to alive, processing the task, and in response to completing the processing of the task, updating the vital status of the task to dead. | 12-24-2015 |
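The alive/dead handshake behaves like a compare-and-set on the task's vital status. A toy C++ version in which a process-local atomic stands in for the distributed status store:

    #include <atomic>
    #include <iostream>

    // A process-local atomic stands in for the shared vital-status record,
    // which in the real system lives in a distributed store.
    std::atomic<bool> alive{false};

    void on_trigger(int processor) {
        // Only the processor that flips dead -> alive gets to process the task.
        bool expected = false;
        if (!alive.compare_exchange_strong(expected, true)) {
            std::cout << "processor " << processor << ": task already alive, skip\n";
            return;
        }
        std::cout << "processor " << processor << ": processing task\n";
        alive.store(false);               // vital status back to dead on completion
    }

    int main() {
        on_trigger(1);   // claims and processes the task
        on_trigger(2);   // also runs here because processor 1 already finished;
                         // had the calls overlapped, the second would skip
    }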
20150370602 | Time Critical Tasks Scheduling - A method and system for scheduling a time critical task. The system may include a processing unit, a hardware assist scheduler, and a memory coupled to both the processing unit and the hardware assist scheduler. The method may include receiving timing information for executing the time critical task, the time critical task executing program instructions via a thread on a core of a processing unit and scheduling the time critical task based on the received timing information. The method may further include programming a lateness timer, waiting for a wakeup time to obtain and notifying the processing unit of the scheduling. Additionally, the method may include executing, on the core of the processing unit, the time critical task in accordance with the scheduling, monitoring the lateness timer, and asserting a thread execution interrupt in response to the lateness timer expiring, thereby suspending execution of the time critical task. | 12-24-2015 |
20150370874 | In-Application File Conversion using Cloud Services - In-application file conversion using cloud services is described. In one or more embodiments, an application determines that a file includes features inserted by a subsequent version of the application. The application sends a request to a conversion service to convert the file to a format that is compatible with the application. The application receives a converted file from the conversion service that is compatible with the application. The conversion service has multiple versions of application server software to convert files, a job queue to store requested conversion jobs, and a job manager that determines which version of application server software to use to convert the file and invokes an instance of the determined version of the application server software to convert the file. | 12-24-2015 |
20150378784 | WORK FLOW LEVEL JOB INPUT/OUTPUT - A work flow method consists of the following steps: (i) receiving a work flow data set that defines a work flow which includes a plurality of work items; and (ii) defining a centralized and pattern-based work flow level job input/output (I/O) characteristic set that includes at least I/O settings for work items included in the work flow. | 12-31-2015 |
20160004561 | Model Driven Optimization of Annotator Execution in Question Answering System - Mechanisms are provided for scheduling execution of pre-execution operations of an annotator of a question and answer (QA) system pipeline. A model is used to represent a system of annotators of the QA system pipeline, where the model represents each annotator as a node having one or more performance parameters indicating a performance of an execution of an annotator corresponding to the node. For each annotator in a set of annotators of the system of annotators, an effective response time for the annotator is calculated based on the performance parameters. A pre-execution start interval for a first annotator based on an effective response time of a second annotator is calculated where execution of the first annotator is sequentially after execution of the second annotator. Execution of pre-execution operations associated with the first annotator is scheduled based on the calculated pre-execution start interval for the first annotator. | 01-07-2016 |
20160004562 | Method of Centralized Planning of Tasks to be Executed by Computers Satisfying Certain Qualitative Criteria Within a Distributed Set of Computers - A method of disseminating a planning of tasks in a network of distributed computers that includes a planning server. The method comprises: programming, on the planning server, a planning of tasks for at least one class of distributed computers; independently defining ranges of transfer, to the distributed computers, of the information allocated to each distributed computer, each transfer range being defined as a function of the constraints of the network; splitting the planned tasks into scheduling information for each distributed computer and for a period of time dependent on the defined transfer ranges, the scheduling information being generated as a function of the class or classes to which the computer belongs; and transferring the scheduling information to the distributed computers while complying with the defined transfer ranges. | 01-07-2016 |
20160004563 | MANAGING NODES IN A HIGH-PERFORMANCE COMPUTING SYSTEM USING A NODE REGISTRAR - A method of managing nodes in a high-performance computing (HPC) system, which includes a management subsystem and a job scheduler subsystem, includes providing a node registrar subsystem. Logical node management functions are performed with the node registrar subsystem. Other management functions are performed with the management subsystem using the node registrar subsystem. Job scheduling functions are performed with the job scheduler subsystem using the node registrar subsystem. | 01-07-2016 |
20160004565 | System and Method for Implementing Workflow Management Using Messaging - A system provides workflow management functions over a messaging or data protocol. A workflow management object defining functions and values and events for sending and receiving workflow management data is defined on a first device and transmitted to a second device. On the second device the workflow is rendered for interaction and response, and an interaction with the workflow object is captured. A captured or generated response is transmitted back to the first device or intermediary system via the messaging protocol. The response to the workflow object (e.g. an event) may be used by the device or intermediary systems to update a status of a workflow such as hosted by a remote server system. Events detected by a workflow system may invoke processing of subsequent workflow objects in a chain such that a complex workflow may be processed over the messaging protocol. | 01-07-2016 |
20160011902 | TASK ASSOCIATION ANALYSIS IN APPLICATION MAINTENANCE SERVICE DELIVERY | 01-14-2016 |
20160011904 | INTELLIGENT APPLICATION BACK STACK MANAGEMENT | 01-14-2016 |
20160011905 | COMPOSING AND EXECUTING WORKFLOWS MADE UP OF FUNCTIONAL PLUGGABLE BUILDING BLOCKS | 01-14-2016 |
20160011915 | Systems and Methods for Safely Subscribing to Locks Using Hardware Extensions | 01-14-2016 |
20160019089 | METHOD AND SYSTEM FOR SCHEDULING COMPUTING - Provided are a method and system for scheduling computing so as to meet the quality of service (QoS) expected in a system by identifying the operating characteristics of an application in real time and enabling all nodes in the system to dynamically and cooperatively change their schedulers. The scheduling method includes: detecting an event requesting a scheduler change; selecting a scheduler corresponding to the event from among the schedulers; and changing a scheduler of a node, which schedules use of the control unit, to the selected scheduler, without rebooting the node. | 01-21-2016 |
20160019090 | DATA PROCESSING CONTROL METHOD, COMPUTER-READABLE RECORDING MEDIUM, AND DATA PROCESSING CONTROL DEVICE - A data processing control device performs a MapReduce process. When the data processing control device assigns input data to first Reduce tasks and a second Reduce task that are performed by using results of the Map processes, it assigns to the second Reduce task a smaller amount of input data than is assigned to any of the first Reduce tasks. The data processing control device assigns the first Reduce tasks and the second Reduce task, to which input data have been assigned, to a server that performs Reduce processes in the MapReduce process, such that the second Reduce task is started after the assignment of all of the first Reduce tasks. | 01-21-2016 |
20160019093 | SYSTEM AND METHOD TO CONTROL HEAT DISSIPATION THROUGH SERVICE LEVEL ANALYSIS - The system and method generally relate to reducing heat dissipated within a data center, and more particularly, to a system and method for reducing heat dissipated within a data center through service level agreement analysis, and resultant reprioritization of jobs to maximize energy efficiency. A computer implemented method includes performing a service level agreement (SLA) analysis for one or more currently processing or scheduled processing jobs of a data center using a processor of a computer device. Additionally, the method includes identifying one or more candidate processing jobs for a schedule modification from amongst the one or more currently processing or scheduled processing jobs using the processor of the computer device. Further, the method includes performing the schedule modification for at least one of the one or more candidate processing jobs using the processor of the computer device. | 01-21-2016 |
20160026553 | COMPUTER WORKLOAD MANAGER - A computer-implemented method includes: scheduling computing jobs; processing data by executing the computing jobs; arranging the data in a file system; managing the arranging the data by monitoring a performance parameter of the file system and extracting information about the scheduling, and tuning one of the arranging and the scheduling based on the performance parameter and the information about the scheduling. An article of manufacture includes a computer-readable medium storing signals representing instructions for a computer program executing the method. | 01-28-2016 |
20160041841 | REALIZING JUMPS IN AN EXECUTING PROCESS INSTANCE - A method for realizing jumps in an executing process instance can be provided. The method can include suspending an executing process instance, determining a current wavefront for the process instance and computing both a positive wavefront difference for a jump target relative to the current wavefront and also a negative wavefront difference for the jump target relative to the current wavefront. The method also can include removing activities from consideration in the process instance and also adding activities for consideration in the process instance both according to the computed positive wavefront difference and the negative wavefront difference, creating missing links for the added activities, and resuming execution of the process instance at the jump target. | 02-11-2016 |
20160041842 | DYNAMIC RECONFIGURATION OF APPLICATIONS ON A MULTI-PROCESSOR EMBEDDED SYSTEM - A multiprocessor system and method for swapping applications executing on the multiprocessor system are disclosed. The plurality of applications may include a first application and a plurality of other applications. The first application may be dynamically swapped with a second application. The swapping may be performed without stopping the plurality of other applications. The plurality of other applications may continue to execute during the swapping to perform a real-time operation and process real-time data. After the swapping, the plurality of other applications may continue to execute with the second application, and at least a subset of the plurality of other applications may communicate with the second application to perform the real time operation and process the real time data. | 02-11-2016 |
20160041853 | TRACKING SOURCE AVAILABILITY FOR INSTRUCTIONS IN A SCHEDULER INSTRUCTION QUEUE - A processor includes an execution unit to execute instructions and a scheduler unit to store a queue of instructions for execution by the execution unit. The scheduler unit includes a wake array including a plurality of source slots to store source identifiers for sources associated with the instructions, a picker to schedule a particular instruction for execution in the execution unit, broadcast a destination identifier associated with the particular instruction to a first subset of the source slots, and a delay element to receive the destination identifier broadcast by the picker and communicate a delayed version of the destination identifier to a second subset of the source slots different from the first subset. | 02-11-2016 |
20160048416 | APPARATUS AND METHOD FOR CONTROLLING EXECUTION OF A SINGLE THREAD BY MULTIPLE PROCESSORS - An apparatus includes a plurality of processors and a holder unit. The plurality of processors execute a task as a unit of processing by dividing the task into multiple threads including single and parallel threads, where the single thread is executed by only one of the plurality of processors whose respective pieces of processing have reached the thread, and the parallel thread is executed in parallel with another parallel thread by the plurality of processors. The holder unit is configured to hold information to be shared by the plurality of processors. Each processor executes one of the multiple threads at a time, and causes the holder unit to hold reaching-state information indicating an extent to which the multiple threads executed by the plurality of processors have reached the single thread. Each processor determines whether to execute the single thread, based on the reaching-state information held in the holder unit. | 02-18-2016 |
20160062794 | BIG DATA PARSER - Computer-readable media include computer-readable instructions. The computer readable instructions include a class definition for a first object and a class definition for a second object. The first object includes a buffer for storing first information that identifies fields; a first function for storing the first information in the buffer; and a second function for extracting values of the fields, identified by the first information stored in the buffer, from a portion of a log. The second object includes a third function for obtaining configuration information from a configuration file, wherein the configuration information includes the first information; storing the configuration information at a first memory location; and performing a process. | 03-03-2016 |
20160062799 | MANAGING INVOCATION OF TASKS - A graph-based program specification includes components, at least one having at least one input port for receiving a collection of data elements, or at least one collection type output port for providing a collection of data elements. Executing a program specified by the graph-based program specification at a computing node, includes: receiving data elements of a first collection into a first storage in a first order via a link connected to a collection type output port of a first component and an input port of a second component, and invoking a plurality of instances of a task corresponding to the second component to process data elements of the first collection, including retrieving the data elements from the first storage in a second order, without blocking invocation of any of the instances until after any particular instance completes processing one or more data elements. | 03-03-2016 |
20160077871 | PREDICTIVE MANAGEMENT OF HETEROGENEOUS PROCESSING SYSTEMS - A heterogeneous processing device includes one or more relatively large processing units and one or more relatively small processing units. The heterogeneous processing device selectively activates a large processing unit or a small processing unit to run a process thread based on a predicted duration of an active state of the process thread. | 03-17-2016 |
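The selection policy can be as simple as comparing the predicted active-state duration against a break-even threshold; a C++ sketch with an invented 5 ms cutoff:

    #include <chrono>
    #include <iostream>

    using namespace std::chrono;

    enum class Unit { Small, Large };

    // Hypothetical threshold: short active bursts stay on the small unit,
    // long ones justify waking the large unit.
    Unit pick_unit(milliseconds predicted_active) {
        constexpr auto threshold = 5ms;
        return predicted_active < threshold ? Unit::Small : Unit::Large;
    }

    int main() {
        std::cout << (pick_unit(2ms) == Unit::Small) << "\n";   // 1: small unit
        std::cout << (pick_unit(50ms) == Unit::Large) << "\n";  // 1: large unit
    }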
20160077872 | DATA PROCESSING APPARATUS AND DATA PROCESSING METHOD - A data processing apparatus and a data processing method are provided. The apparatus includes M protocol stacks and at least one distribution service module, and the M protocol stacks separately run on different logic cores of a processor and are configured to independently perform protocol processing on a data packet to be processed. The distribution service module receives an input data packet from a network interface and sends the data packet to one of the M protocol stacks for protocol processing, and receives data packets processed by the M protocol stacks and sends the data packets outwards through the network interface. The present disclosure implements a function of parallel protocol processing by multiple processes in user space of an operating system in a multi-core environment by using a parallel processing feature of a multi-core system, thereby reducing resource consumption caused by data packet copying. | 03-17-2016 |
20160077873 | EFFICIENT PACKET FORWARDING USING CYBER-SECURITY AWARE POLICIES - For balancing load, a forwarder can selectively direct data from the forwarder to a processor according to a loading parameter. The selective direction includes forwarding the data to the processor for processing, transforming and/or forwarding the data to another node, and dropping the data. The forwarder can also adjust the loading parameter based on, at least in part, feedback received from the processor. One or more processing elements can store values associated with one or more flows into a structure without locking the structure. The stored values can be used to determine how to direct the flows, e.g., whether to process a flow or to drop it. The structure can be used within an information channel providing feedback to a processor. | 03-17-2016 |
20160077874 | Method and System for Efficient Execution of Ordered and Unordered Tasks in Multi-Threaded and Networked Computing - The present disclosure provides methods for concurrently executing ordered and unordered tasks using a plurality of processing units. Certain embodiments of the present disclosure may store the ordered and unordered tasks in the same processing queue. Further, processing tasks in the processing queue may comprise concurrently preprocessing ordered tasks, thereby reducing the amount of processing unit idle time and improving load balancing across processing units. Embodiments of the present disclosure may also dynamically manage the number of processing units based on a rate of unordered tasks being received in the processing queue, a processing rate of unordered tasks, a rate of ordered tasks being received in the processing queue, a processing rate of ordered tasks, and/or the number of sets of related ordered tasks in the processing queue. Also provided are related systems and non-transitory computer-readable media. | 03-17-2016 |
20160077890 | INFORMATION PROCESSING DEVICE AND BARRIER SYNCHRONIZATION METHOD - An information processing device includes a plurality of barrier banks, and one or more processors including at least one of the plurality of barrier banks. Each of barrier banks includes one or more hardware threads and a barrier synchronization mechanism. The barrier synchronization mechanism includes a bottom unit having a barrier state, and a bitmap indicating that each of the one or more hardware threads has arrived at a synchronization point, and a top unit having a non-arrival counter indicating the number of barrier banks yet to be synchronized. The bottom unit notifies of bottom unit synchronization completion when all the one or more hardware threads have arrived at a barrier synchronization point. The non-arrival counter decrements its value by 1 upon receipt of the bottom unit synchronization completion, and the top unit sets the barrier state to a value indicating synchronization completion when the non-arrival counter decrements to 0. | 03-17-2016 |
20160085584 | DISTRIBUTED ACTIVITY CONTROL SYSTEMS AND METHODS - A dynamic, distributed directed activity network comprising a directed activity control program specifying tasks to be executed including required individual task inputs and outputs, the required order of task execution, and permitted parallelism in task execution; a plurality of task execution agents, individual of said agents having a set of dynamically changing agent attributes and capable of executing different required tasks in said activity control; a plurality of task execution controllers, each controller associated with one or more of the task execution agents with access to dynamically changing agent attributes; a directed activity controller for communicating with said task execution controllers for directing execution of said activity control program; a communications network capable of supporting communication between said directed activity controller and task execution controllers; and wherein said directed activity controller and task execution controllers communicate via said communication network to execute said directed activity control program using selected task execution agents. | 03-24-2016 |
20160085601 | TRANSPARENT USER MODE SCHEDULING ON TRADITIONAL THREADING SYSTEMS - Embodiments for performing cooperative user mode scheduling between user mode schedulable (UMS) threads and primary threads are disclosed. In accordance with one embodiment, privileged hardware states are transferred from a kernel portion of a UMS thread to a kernel portion of a primary thread. | 03-24-2016 |
20160092262 | AUTOMATED CREATION OF EXECUTABLE WORKFLOW - A computing device receives information describing one or more workflow components. The computing device determines whether at least one executable step can be determined for each of the one or more workflow components. The computing device provides an indication of whether at least one executable step can be determined for each of the one or more workflow components. | 03-31-2016 |
20160092263 | SYSTEM AND METHOD FOR SUPPORTING DYNAMIC THREAD POOL SIZING IN A DISTRIBUTED DATA GRID - A system and method supports dynamic thread pool sizing suitable for use in multi-threaded processing environment such as a distributed data grid. Dynamic thread pool resizing utilizes measurements of thread pool throughput and worker thread utilization in combination with analysis of the efficacy of prior thread pool resizing actions to determine whether to add or remove worker threads from a thread pool in a current resizing action. Furthermore, the dynamic thread pool resizing system and method can accelerate or decelerate the iterative resizing analysis and the rate of worker thread addition and removal depending on the needs of the system. Optimizations are incorporated to prevent settling on a local maximum throughput. The dynamic thread pool sizing/resizing system and method thereby provides rapid and responsive adjustment of thread pool size in response to changes in work load and processor availability. | 03-31-2016 |
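The analysis of prior resizing actions suggests a hill-climbing loop. Below is a deliberately simplified C++ resizer (all numbers invented) that keeps moving the pool size in the direction that last improved throughput and reverses when throughput drops; the patent's acceleration and local-maximum safeguards are omitted.

    #include <iostream>

    // Hill-climbing resizer: keep moving in the direction that helped last time.
    struct Resizer {
        int threads = 4, step = 1;
        double last_throughput = 0.0;

        int resize(double throughput) {
            if (throughput < last_throughput) step = -step;  // last action hurt: reverse
            last_throughput = throughput;
            threads += step;
            if (threads < 1) threads = 1;
            return threads;
        }
    };

    int main() {
        Resizer r;
        // Simulated measurements: throughput rises, then saturates and dips.
        for (double tp : {100.0, 130.0, 150.0, 148.0, 149.0})
            std::cout << "pool size -> " << r.resize(tp) << "\n";
    }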
20160092264 | POST-RETURN ASYNCHRONOUS CODE EXECUTION - A method, system, and computer program product for the prioritization of code execution. The method includes accessing a thread in a context containing a set of code instances stored in memory; identifying sections of the set of code instances that correspond to deferrable code tasks; executing the thread in the context; determining that the thread is idle; and executing at least one of the deferrable code tasks. The deferrable code task is executed within the context and in response to determining that the thread is idle. | 03-31-2016 |
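A minimal C++ model of post-return execution; the names `deferred` and `handle_request` are illustrative. Deferrable sections are queued during the request and drained only once the thread is determined to be idle.

    #include <functional>
    #include <iostream>
    #include <vector>

    // Per-context list of deferrable code tasks (hypothetical structure).
    std::vector<std::function<void()>> deferred;

    void handle_request() {
        std::cout << "critical part of the request\n";
        // Sections identified as deferrable are queued instead of executed.
        deferred.push_back([]{ std::cout << "deferred: write audit log\n"; });
        deferred.push_back([]{ std::cout << "deferred: refresh cache\n"; });
    }

    int main() {
        handle_request();                       // thread executes in its context
        // ... thread returns its response and is now idle ...
        for (auto& task : deferred) task();     // run deferrable tasks post-return
        deferred.clear();
    }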
20160092265 | Systems and Methods for Utilizing Futures for Constructing Scalable Shared Data Structures - A multithreaded application that includes operations on a shared data structure may exploit futures to improve performance. For each operation that targets the shared data structure, a thread of the application may create a future and store it in a thread-local list of futures (under weak or medium futures linearizability policies) or in a shared queue of futures (under strong futures linearizability policies). Prior to a thread evaluating a future, type-specific optimizations may be performed on the list or queue of pending futures. For example, futures may be sorted temporally or by key, or multiple operations indicated in the futures may be combined or eliminated. During an evaluation of a future, a thread may compute the results of the operations indicated in one or more other futures. The order in which operations take effect and the optimization operations performed may be dependent on the futures linearizability policy. | 03-31-2016 |
20160092269 | TUNABLE COMPUTERIZED JOB SCHEDULING - A computer-implemented method for scheduling a set of jobs executed in a computer system can include determining a workload-time parameter for a set of at least one job. The workload-time parameter can relate to execution-time parameters for the set of at least one job. The method can include determining a schedule tuning parameter for the set of at least one job, the schedule tuning parameter based on the workload-time parameter. The method can include generating a scheduling factor for each job in the set, the scheduling factor generated based on the schedule tuning parameter. The method can include scheduling the set of at least one job based on the scheduling factor. | 03-31-2016 |
20160098294 | EXECUTION OF A METHOD AT A CLUSTER OF NODES - Systems and methods are disclosed for executing a clustered method at a cluster of nodes. An example method includes identifying an annotated class included in an application that is deployed on the cluster of nodes. An annotation of the class indicates that a clustered method associated with the annotated class is executed at each node in the cluster. The method also includes creating an instance of the annotated class and coordinating execution of the clustered method with one or more other nodes in the cluster. The method further includes executing, based on the coordinating, the clustered method using the respective node's instance of the annotated class. | 04-07-2016 |
20160098303 | GLOBAL LOCK CONTENTION PREDICTOR - An apparatus for lock acquisition is disclosed. A method and a computer program product also perform the functions of the apparatus. The apparatus includes a lock history module that adds a current contention state of a lock to a contention history. The lock includes a memory location for storing information used for excluding access to a resource by one or more threads while another thread accesses the resource. The apparatus, in some embodiments, includes a combination module that combines the contention history with a lock address for the lock to form a predictor table index, and a prediction module that uses the predictor table index to determine a lock prediction for the lock. The prediction includes a determination of an amount of contention. | 04-07-2016 |
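The index formation mirrors a branch predictor. A C++ sketch, with invented table and history sizes, that XORs a recent-contention history with the lock address to select a two-bit saturating counter:

    #include <array>
    #include <cstdint>
    #include <iostream>

    // Branch-predictor-style table of two-bit saturating counters; the table
    // size and 4-bit history length are invented for the sketch.
    struct ContentionPredictor {
        static constexpr std::size_t kBits = 10;
        std::array<std::uint8_t, 1u << kBits> table{};  // counters start at 0
        std::uint32_t history = 0;                      // recent contention outcomes

        std::size_t index(std::uintptr_t lock_addr) const {
            return (history ^ (lock_addr >> 4)) & ((1u << kBits) - 1);
        }
        bool predict_contended(std::uintptr_t lock_addr) const {
            return table[index(lock_addr)] >= 2;        // upper half => contended
        }
        void update(std::uintptr_t lock_addr, bool contended) {
            std::uint8_t& c = table[index(lock_addr)];
            if (contended  && c < 3) ++c;
            if (!contended && c > 0) --c;
            history = ((history << 1) | (contended ? 1u : 0u)) & 0xF;
        }
    };

    int main() {
        ContentionPredictor p;
        std::uintptr_t lock = 0x7fff1230;
        for (int i = 0; i < 8; ++i) p.update(lock, true);   // steady contention
        std::cout << "contended? " << p.predict_contended(lock) << "\n";  // prints 1
    }

As with branch predictors, unrelated locks can alias to the same counter; widening the table trades memory for fewer collisions.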
20160103703 | APPLICATION-LEVEL DISPATCHER CONTROL OF APPLICATION-LEVEL PSEUDO THREADS AND OPERATING SYSTEM THREADS - An application-level thread dispatcher that operates in a main full-weight thread allocated to an application is established. The application-level thread dispatcher initializes a group of application-level pseudo threads that operate as application-controlled threads within the main full-weight thread allocated to the application. The application-level thread dispatcher determines that at least one application-level pseudo thread meets configuration requirements to operate within a separate operating system-level thread in parallel with the main full-weight thread. In response to determining that the at least one application-level pseudo thread meets the configuration requirements to operate within the separate operating system-level thread in parallel with the main full-weight thread, the at least one application-level pseudo thread is dispatched from the main full-weight thread to the separate operating system-level thread by the application-level thread dispatcher. | 04-14-2016 |
20160103706 | Automatically Generating Execution Sequences for Workflows - The present disclosure relates to automatically generating execution sequences from workflow definitions. One example method includes receiving a workflow definition including a plurality of operations, each of the plurality of operations including input attributes each associated with an input value and output attributes each associated with an output value; determining an execution sequence for the workflow defining relationships between the plurality of operations, the determining based at least in part on the one or more input attributes and associated input values, and the output attributes and associated output values for each operation, and at least in part on one or more semantic rules defining dependencies of each of the plurality of operations; and generating a directed acyclic graph representing the execution sequence including nodes each representing an operation from the plurality of operations, and vertices each representing a relationship between the plurality of operations defined by the execution sequence. | 04-14-2016 |
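A small C++ sketch that derives the DAG purely from matching output attributes to input attributes and then emits an execution sequence with Kahn's algorithm; the three-operation workflow is invented, and the semantic-rule layer from the abstract is reduced to exact attribute-name matching.

    #include <cstddef>
    #include <iostream>
    #include <queue>
    #include <set>
    #include <string>
    #include <vector>

    struct Op { std::string name; std::set<std::string> in, out; };

    int main() {
        // Invented workflow; edges come only from matching attributes.
        std::vector<Op> ops = {
            {"load",      {},        {"raw"}},
            {"transform", {"raw"},   {"clean"}},
            {"report",    {"clean"}, {}},
        };

        // Node j depends on node i if i produces an attribute j consumes.
        std::size_t n = ops.size();
        std::vector<std::vector<std::size_t>> adj(n);
        std::vector<int> indeg(n, 0);
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t j = 0; j < n; ++j)
                for (const auto& a : ops[i].out)
                    if (ops[j].in.count(a)) { adj[i].push_back(j); ++indeg[j]; }

        // Kahn's algorithm turns the DAG into an execution sequence.
        std::queue<std::size_t> ready;
        for (std::size_t i = 0; i < n; ++i) if (indeg[i] == 0) ready.push(i);
        while (!ready.empty()) {
            std::size_t i = ready.front(); ready.pop();
            std::cout << ops[i].name << "\n";
            for (std::size_t j : adj[i]) if (--indeg[j] == 0) ready.push(j);
        }
    }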
20160103707 | System and Method for System on a Chip - A method includes receiving, by a system on a chip (SoC) from a logically centralized controller, configuration information and reading, from a semantics aware storage module of the SoC, a data block in accordance with the configuration information. The method also includes performing scheduling to produce a schedule in accordance with the configuration information and writing the data block to an input data queue in accordance with the schedule to produce a stored data block. Additionally, the method includes writing a tag to an input tag queue to produce a stored tag, where the tag corresponds to the data block. | 04-14-2016 |
20160103710 | SCHEDULING DEVICE - The invention relates to a scheduling device for receiving a set of requests and providing a set of grants to the set of requests, the scheduling device comprising: a lookup vector prepare unit configured to provide a lookup vector prepared set of requests depending on the set of requests and a selection mask and to provide a set of acknowledgements to the set of requests; and a prefix forest unit coupled to the lookup vector prepare unit, wherein the prefix forest unit is configured to provide the set of grants as a function of the lookup vector prepared set of requests and to provide the selection mask based on the set of grants. | 04-14-2016 |
20160103711 | METHODS AND SYSTEMS TO OPTIMIZE DATA CENTER POWER CONSUMPTION - Methods and systems of determining an optimum power-consumption profile for virtual machines running in a data center are disclosed. In one aspect, a power-consumption profile of a virtual machine and a unit-rate profile of electrical power cost over a period are received. The methods determine an optimum power-consumption profile based on the power-consumption profile and the unit-rate profile. The optimum power-consumption profile may be used to reschedule the virtual machine over the period. | 04-14-2016 |
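As a toy stand-in for the optimization step, the sketch below slides a fixed-length VM power profile across an hourly unit-rate profile and picks the cheapest start hour. The exhaustive search and all values are illustrative assumptions; the filing does not specify this method.

```python
# Find the start hour minimizing total energy cost for a fixed-length job.
def cheapest_start(unit_rate, vm_profile):
    """unit_rate: cost per kWh for each hour; vm_profile: kWh drawn per run hour."""
    n, k = len(unit_rate), len(vm_profile)
    best_start, best_cost = 0, float("inf")
    for start in range(n - k + 1):
        cost = sum(unit_rate[start + i] * vm_profile[i] for i in range(k))
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start, best_cost

rates = [0.30, 0.28, 0.12, 0.10, 0.11, 0.25]   # $/kWh over six hours
vm    = [2.0, 2.0, 1.0]                        # kWh per hour of a 3-hour job
start, cost = cheapest_start(rates, vm)
print(start, round(cost, 2))                   # -> 2 0.55
```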
20160110217 | OPTIMIZING EXECUTION OF PROCESSES - Methods and systems for optimizing an execution of a business process are disclosed. In one aspect, a request to execute a business process is received. The business process is executed on multiple threads, which may include multiple computations. The business process is optimized by determining an optimal number of threads for executing the business process by a thread optimization model. From the determined optimal number of threads, the computations in the threads may be distributed or reallocated iteratively by executing an inter-thread computations optimization model. Executing the thread optimization model and the inter-thread computations optimization model optimizes the execution of the business process. | 04-21-2016 |
20160110220 | DYNAMIC SUGGESTION OF NEXT TASK BASED ON TASK NAVIGATION INFORMATION - A device may receive task navigation information, identify a selection of a first task, of multiple tasks, based on the task navigation information, and provide a list of a group of tasks from the multiple tasks. The list of the group of tasks may be based on information identifying tasks historically selected subsequent to the selection of the first task. The device may identify a selection of a second task, of the multiple tasks, subsequent to identifying the selection of the first task; and store information identifying that the second task has been selected subsequent to the first task based on identifying the selection of the second task subsequent to the selection of the first task. The information identifying that the second task has been selected subsequent to the first task may include a number of times that the second task has been selected subsequent to the first task. | 04-21-2016 |
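The bookkeeping this abstract describes, counting how often each task follows another and ranking successors by that count, can be sketched in a few lines. The class and method names below are invented for illustration.

```python
# Count (previous task -> next task) transitions and suggest the most
# frequent successors of the current task.
from collections import defaultdict

class TaskSuggester:
    def __init__(self):
        self.transitions = defaultdict(lambda: defaultdict(int))

    def record(self, first_task: str, second_task: str) -> None:
        # remember how many times second_task followed first_task
        self.transitions[first_task][second_task] += 1

    def suggest(self, current_task: str, k: int = 3):
        followers = self.transitions[current_task]
        return sorted(followers, key=followers.get, reverse=True)[:k]

s = TaskSuggester()
for nxt in ["billing", "billing", "reports"]:
    s.record("login", nxt)
print(s.suggest("login"))   # -> ['billing', 'reports']
```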
20160110277 | Method for Computer-Aided Analysis of an Automation System - A method for computer-aided analysis of an automation system, where the automation system executes a number of jobs, each job being performed repetitively, wherein the execution durations of a respective job of the number of jobs for several repetitions of the respective job are determined to produce a plurality of execution durations, a statistical analysis on the plurality of execution durations is performed to produce at least one statistical quantity valid for the plurality of execution durations, and an action is performed for protecting the automation system and/or for generating a warning if a condition indicating an incorrect execution of the respective job is fulfilled for at least one statistical quantity. | 04-21-2016 |
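A toy version of the described check: summarize a job's past execution durations statistically and warn when a new duration strays too far. The three-sigma threshold is an assumption; the filing only requires some condition on a statistical quantity.

```python
# Warn when a job's latest execution duration deviates from the historical
# mean by more than a configurable number of standard deviations.
import statistics

def check_job(durations, new_duration, sigmas=3.0):
    mean = statistics.mean(durations)
    std = statistics.stdev(durations)
    if abs(new_duration - mean) > sigmas * std:
        return f"WARNING: duration {new_duration:.3f}s deviates from mean {mean:.3f}s"
    return "ok"

history = [0.101, 0.098, 0.103, 0.099, 0.102]
print(check_job(history, 0.100))   # -> ok
print(check_job(history, 0.250))   # -> WARNING: ...
```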
20160117189 | Methods and Systems for Starting Computerized System Modules - Graph data of a DAG is received. The data describes a module to be started by way of nodes connected by edges, wherein some nodes are submodule nodes that correspond to submodules of said module. Submodule nodes are connected via edge(s) that reflect a data dependency between the corresponding submodules. Each of said submodules is a hardware submodule or a software submodule, capable of producing and/or consuming data that can be consumed and/or produced by other submodule(s) of said module, based on the DAG. Asynchronous execution of two of said submodules is started, the submodules respectively corresponding to two submodule nodes located in independent branches of the DAG. A third submodule node is determined that is a descendant of each of said two submodule nodes, according to an outcome of the execution of the corresponding two submodules. Execution is then started of a third submodule that corresponds to the determined third submodule node. | 04-28-2016 |
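One plausible reading of this start ordering, sketched with a thread pool: run the two independent-branch submodules concurrently, then choose and start a descendant submodule according to their combined outcome. The submodule functions and the selection rule are invented for illustration.

```python
# Start two independent DAG branches concurrently, then pick the descendant
# submodule to run next based on the outcome of both.
from concurrent.futures import ThreadPoolExecutor

def sensor_a():        return "hi"
def sensor_b():        return "lo"
def fuse(a, b):        return f"fused({a},{b})"
def fallback(a, b):    return "degraded"

with ThreadPoolExecutor() as pool:
    fa = pool.submit(sensor_a)          # independent branch 1
    fb = pool.submit(sensor_b)          # independent branch 2
    a, b = fa.result(), fb.result()
    # choose the descendant submodule according to the outcome of both branches
    descendant = fuse if (a and b) else fallback
    print(pool.submit(descendant, a, b).result())   # -> fused(hi,lo)
```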
20160117200 | RESOURCE MAPPING IN MULTI-THREADED CENTRAL PROCESSOR UNITS - A processor determines that processing of a first thread is suspended due to limited availability of a processing resource. The processor supports execution of a plurality of threads in parallel. The processor obtains a lock on a second processing resource that is substitutable as a resource during processing of the first thread. The second processing resource is included as part of a component that is external to the processor. The component supports a number of threads that is less than the plurality of threads. The processing of the first thread is suspended until the lock is available. The processor processes the first thread using the second processing resource. The processor includes a shared register to support mapping a portion of the plurality of threads to the component. The portion of the plurality of threads is equal to, at most, the number of threads supported by the component. | 04-28-2016 |
20160117206 | METHOD AND SYSTEM FOR BLOCK SCHEDULING CONTROL IN A PROCESSOR BY REMAPPING - A method and a system for block scheduling are disclosed. The method includes retrieving an original block ID, determining a corresponding new block ID from a mapping, executing a new block corresponding to the new block ID, and repeating the retrieving, determining, and executing for each original block ID. The system includes a program memory configured to store multi-block computer programs, an identifier memory configured to store block identifiers (IDs), management hardware configured to retrieve an original block ID from the program memory, scheduling hardware configured to receive the original block ID from the management hardware and determine a new block ID corresponding to the original block ID using a stored mapping, and processing hardware configured to receive the new block ID from the scheduling hardware and execute a new block corresponding to the new block ID. | 04-28-2016 |
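The retrieve/determine/execute loop is straightforward to sketch: each original block ID is translated through a stored mapping before its block runs, so the schedule can be permuted without rewriting the program. The mapping and block bodies below are illustrative.

```python
# Remap each retrieved block ID through a stored mapping, then execute the
# block the new ID designates.
blocks = {0: lambda: print("block A"),
          1: lambda: print("block B"),
          2: lambda: print("block C")}

mapping = {0: 2, 1: 0, 2: 1}    # remapped schedule: run C, then A, then B

def run_program(original_ids):
    for orig_id in original_ids:        # retrieve original block ID
        new_id = mapping[orig_id]       # determine corresponding new block ID
        blocks[new_id]()                # execute the remapped block

run_program([0, 1, 2])   # prints: block C / block A / block B
```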
20160127061 | BROADCAST INTERFACE - An apparatus including a transmitter configured to broadcast data to a plurality of receivers is provided. A control circuit is configured to arrange the transmitter to broadcast the data based on a protocol and to arrange the transmitter to broadcast a subset of the data in response to a request from one of the plurality of receivers. In another aspect, a method for operating a broadcast interface is provided. The method includes broadcasting data to a plurality of receivers based on a protocol and broadcasting a subset of the data in response to a request from one of the plurality of receivers. Another apparatus is provided which includes means for broadcasting data to a plurality of receivers based on a protocol and means for arranging the means for broadcasting to broadcast a subset of the data in response to a request from one of the plurality of receivers. | 05-05-2016 |
20160147567 | Incentive-Based App Execution - Systems and methods of a personal daemon, executing as a background process on a mobile computing device, for providing personal assistance to an associated user are presented. Also executing on the mobile computing device is a scheduling manager. The personal daemon executes one or more personal assistance actions on behalf of the associated user. The scheduling manager responds to events in support of the personal daemon. More particularly, in response to receiving an event the scheduling manager determines a set of apps that are responsive to the received event and, from that set of apps, identifies at least a first subset of apps for execution on the mobile computing device. The scheduling manager receives feedback information regarding the usefulness of the executed apps of the first subset of apps and updates the associated score of each of the apps of the first subset of apps. | 05-26-2016 |
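A minimal sketch of the scoring loop, assuming apps register per event, the top-scored subset is selected, and usefulness feedback nudges each score toward 1 or 0. The learning rate and all names are assumptions for the sketch.

```python
# Select the highest-scored apps for an event; adjust scores from feedback.
class SchedulingManager:
    def __init__(self):
        self.handlers = {}   # event -> list of app names
        self.scores = {}     # app name -> usefulness score

    def register(self, event: str, app: str, score: float = 0.5) -> None:
        self.handlers.setdefault(event, []).append(app)
        self.scores[app] = score

    def select(self, event: str, k: int = 2):
        apps = self.handlers.get(event, [])
        return sorted(apps, key=lambda a: self.scores[a], reverse=True)[:k]

    def feedback(self, app: str, useful: bool, rate: float = 0.1) -> None:
        # move the score a step toward 1.0 (useful) or 0.0 (not useful)
        target = 1.0 if useful else 0.0
        self.scores[app] += rate * (target - self.scores[app])

m = SchedulingManager()
m.register("arrived_home", "lights", 0.6)
m.register("arrived_home", "thermostat", 0.5)
m.register("arrived_home", "news", 0.4)
print(m.select("arrived_home"))        # -> ['lights', 'thermostat']
m.feedback("news", useful=True)        # feedback raises 'news' for next time
```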
20160147568 | METHOD AND APPARATUS FOR DATA TRANSFER TO THE CYCLIC TASKS IN A DISTRIBUTED REAL-TIME SYSTEM AT THE CORRECT TIME - The invention relates to a method for the time-correct data transfer between cyclic tasks in a distributed real-time system, which real-time system comprises a real-time communication system and a multiplicity of computer nodes, wherein a local real-time clock in each computer node is synchronised with the global time, wherein all periodic trigger signals z | 05-26-2016 |
20160147576 | WAKE-UP ORDERING OF PROCESSING STREAMS USING SEQUENTIAL IDENTIFIERS - Systems and methods for waking up waiting processing streams in a manner that reduces the number of spurious wakeups. An example method may comprise: assigning a first identifier of a sequence of identifiers to a processing stream in a waiting state; receiving a wakeup signal associated with a second identifier of the sequence of identifiers; comparing, by a processing device, the first identifier with the second identifier; and waking the processing stream responsive to determining, in view of comparing, that the processing stream began waiting prior to an initiation of the wakeup signal. | 05-26-2016 |
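The sequencing idea above can be sketched with a global ticket counter: a waiter takes a ticket before sleeping, a signal records the counter value at initiation, and a waiter resumes only if its ticket predates the signal. The ticket mechanics below are an assumption layered on Python's Condition, not the filing's implementation.

```python
# Waiters resume only for signals initiated after they began waiting,
# filtering out stale or spurious wakeups.
import itertools
import threading

_seq = itertools.count()
_cond = threading.Condition()
_latest_signal = -1

def wait_for_signal():
    with _cond:
        my_ticket = next(_seq)         # identifier taken when waiting begins
        # resume only once a signal initiated after this ticket has arrived
        _cond.wait_for(lambda: _latest_signal > my_ticket)

def send_signal():
    global _latest_signal
    with _cond:
        _latest_signal = next(_seq)    # identifier recorded at signal initiation
        _cond.notify_all()
```

A waiter whose ticket postdates the latest signal simply keeps waiting, which is the spurious-wakeup filtering the abstract describes.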
20160147579 | Event Generation Management For An Industrial Controller - An improved system for handling events in an industrial control system is disclosed. A module in an industrial controller is configured to generate an event responsive to a predefined signal or combination of signals occurring. The event is transferred to an event queue for subsequent execution. The event queue may also be configured to store a copy of the state of the module at the time the event is generated. The event queue may hold multiple events and each event is configured to trigger at least one event task. Subsequent events that occur during execution of the event task are stored in the event queue for later execution. An event, or combination of events, may trigger execution of an event task within the module, within the controller to which the module is connected, or within multiple controllers. | 05-26-2016 |
20160162304 | Method to Identify and Define Application and Browser Uniform Resource Locator Chaining - Methods and systems may provide a way for a system to anticipate usage patterns and automatically open a chain of application and browser windows based on typical usage. Additionally, a user may manually identify and create the chain of application and browser windows. In one example, application and browser chaining may be correlated with location, time of day, and profile of the user logged into the system. | 06-09-2016 |
20160162330 | EPOLL OPTIMISATIONS - A method for managing I/O event notifications in a data processing system comprising a plurality of applications and an operating system having a kernel and an I/O event notification mechanism operable to maintain a plurality of I/O event notification objects each handling a set of file descriptors associated with one or more I/O resources. For each of a plurality of application-level configuration calls: intercepting at a user-level interface a configuration call from an application to the I/O event notification mechanism for configuring an I/O event notification object; and storing a set of parameters of the configuration call at a data structure, each set of parameters representing an operation on the set of file descriptors handled by the I/O event notification object; and subsequently, upon meeting a predetermined criterion: the user-level interface causing the plurality of configuration calls to be effected by means of a first system call to the kernel. | 06-09-2016 |
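The batching idea above can be sketched as a user-level proxy that parks intercepted configuration calls and flushes them in one trip once a criterion is met. The proxy below substitutes a plain callable for the combined kernel call; the class, the count-based criterion, and the tuple format are all assumptions.

```python
# Intercept epoll-style configuration calls at user level, park them, and
# flush the whole batch in a single (simulated) trip to the kernel.
class BatchingEpollProxy:
    def __init__(self, kernel_apply, flush_threshold: int = 8):
        # kernel_apply: callable taking a list of (op, fd, eventmask) tuples,
        # standing in for one combined system call to the kernel.
        self._kernel_apply = kernel_apply
        self._pending = []
        self._threshold = flush_threshold

    def ctl(self, op: str, fd: int, eventmask: int) -> None:
        self._pending.append((op, fd, eventmask))   # intercepted, not yet effected
        if len(self._pending) >= self._threshold:   # predetermined criterion met
            self.flush()

    def flush(self) -> None:
        if self._pending:
            self._kernel_apply(self._pending)       # single batched kernel entry
            self._pending = []

proxy = BatchingEpollProxy(kernel_apply=lambda batch: print("syscall:", batch),
                           flush_threshold=2)
proxy.ctl("ADD", 4, 0x1)
proxy.ctl("MOD", 4, 0x4)   # threshold hit -> one batched "system call" for both
```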
20160162333 | AUTOMATED CREATION OF EXECUTABLE WORKFLOW - A computing device receives information describing one or more workflow components. The computing device determines whether at least one executable step can be determined for each of the one or more workflow components. The computing device provides an indication of whether at least one executable step can be determined for each of the one or more workflow components. | 06-09-2016 |
20160170797 | MANAGING CALLBACK OPERATIONS IN EMULATED ENVIRONMENTS | 06-16-2016 |
20160188364 | DYNAMIC REDUCTION OF STREAM BACKPRESSURE - Techniques are described for eliminating backpressure in a distributed system by changing the rate at which data flows through a processing element. Backpressure occurs when data throughput in a processing element begins to decrease, for example, if new processing elements are added to the operator graph or if the distributed system is required to process more data. Indicators of backpressure (current or future) may be monitored. Once current backpressure or potential backpressure is identified, the operator graph or data rates may be altered to alleviate the backpressure. For example, a processing element may reduce the data rates it sends to processing elements that are downstream in the operator graph, or processing elements and/or data paths may be eliminated. In one embodiment, processing elements and associated data paths may be prioritized so that more important execution paths are maintained. | 06-30-2016 |
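One simple realization of rate-based relief: a processing element watches its downstream queue depth and halves its send rate above a high-water mark, creeping back up as the queue drains. The thresholds and scaling factors below are assumptions, not values from the filing.

```python
# Scale a processing element's downstream send rate from observed queue depth.
class RateController:
    def __init__(self, rate: float, high_water: int = 1000, low_water: int = 100):
        self.rate = rate               # tuples per second sent downstream
        self.high = high_water
        self.low = low_water

    def adjust(self, queue_depth: int) -> float:
        if queue_depth > self.high:    # backpressure building: back off hard
            self.rate *= 0.5
        elif queue_depth < self.low:   # headroom available: recover gently
            self.rate = min(self.rate * 1.1, 10_000.0)
        return self.rate

rc = RateController(rate=5000.0)
for depth in [50, 1200, 1500, 80]:
    print(depth, rc.adjust(depth))     # rate halves while depth is high, then creeps back
```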
20160188365 | COMPUTATIONAL UNIT SELECTION - A system and method for computing are disclosed, including compute units to execute a computing event, the computing event being a server application or a distributed computing job. A power characteristic or a thermal characteristic, or a combination thereof, of the compute units is determined. One or more of the compute units are selected to execute the computing event based on a selection criterion and on the characteristic. | 06-30-2016 |
20160196148 | Time Monitoring in a Processing Element and Use | 07-07-2016 |
20160196149 | MILESTONE BASED DYNAMIC MULTIPLE WATCHDOG TIMEOUTS AND EARLY FAILURE DETECTION | 07-07-2016 |
20160196163 | USING DATABASES FOR BOTH TRANSACTIONS AND ANALYSIS | 07-07-2016 |
20160196164 | METHOD AND APPARATUS FOR ANALYSIS OF THREAD LATENCY | 07-07-2016 |
20160202931 | MODULAR ARCHITECTURE FOR EXTREME-SCALE DISTRIBUTED PROCESSING APPLICATIONS | 07-14-2016 |
20160203020 | LOADING CALCULATION METHOD AND LOADING CALCULATION SYSTEM FOR PROCESSOR IN ELECTRONIC DEVICE | 07-14-2016 |
20160253207 | DEVICE AND METHOD OF RUNNING MULTIPLE OPERATING SYSTEMS | 09-01-2016 |
20160253209 | APPARATUS AND METHOD FOR SERIALIZING PROCESS INSTANCE ACCESS TO INFORMATION STORED REDUNDANTLY IN AT LEAST TWO DATASTORES | 09-01-2016 |
20160253220 | DATA CENTER OPERATION | 09-01-2016 |
20160378550 | OPTIMIZATION OF APPLICATION WORKFLOW IN MOBILE EMBEDDED DEVICES - An aspect includes optimizing an application workflow. The optimizing includes characterizing the application workflow by determining at least one baseline metric related to an operational control knob of an embedded system processor. The application workflow performs a real-time computational task encountered by at least one mobile embedded system of a wirelessly connected cluster of systems supported by a server system. The optimizing of the application workflow further includes performing an optimization operation on the at least one baseline metric of the application workflow while satisfying at least one runtime constraint. An annotated workflow that is the result of performing the optimization operation is output. | 12-29-2016 |
20160378879 | DETERMINING WEB PAGE PROCESSING STATE - Determining a web page processing state of a browser, during processing of a web page by the browser, by setting parameters in a state determiner on the basis of predefined processing events related to queued processing tasks; the state determiner determining said web page processing state on the basis of said parameters in accordance with one or more predefined criteria. | 12-29-2016 |
20170235599 | NATURAL LANGUAGE CONVERSATION-BASED PROGRAMMING | 08-17-2017 |
20180024858 | FRAMEWORK FOR AUTHORING DATA LOADERS AND DATA SAVERS | 01-25-2018 |
20190146780 | LIVE KERNEL UPDATING USING PROGRESSIVE CHECKPOINTING AND NETWORK TUNNELS | 05-16-2019 |
20190146833 | Managing a Lifecycle of a Software Container | 05-16-2019 |
20190146834 | ELECTRONIC DEVICE AND METHOD OF CONTROLLING THE SAME | 05-16-2019 |
20190146836 | DATA FORWARDER FOR DISTRIBUTED DATA ACQUISITION, INDEXING AND SEARCH SYSTEM | 05-16-2019 |
20190146841 | SCHEDULING WORKLOAD SERVICE OPERATIONS USING VALUE INCREASE SCHEME | 05-16-2019 |
20220138006 | DISTRIBUTED STREAMING SYSTEM SUPPORTING REAL-TIME SLIDING WINDOWS - In various embodiments, a process for providing a distributed streaming system supporting real-time sliding windows includes receiving a stream of events at a plurality of distributed nodes and routing the events into topic groupings. The process includes using one or more events in at least one of the topic groupings to determine one or more metrics of events within at least one window and an event reservoir, including by: tracking, in a volatile memory of the event reservoir, beginning and ending events within the at least one window; and tracking, in a persistent storage of the event reservoir, events associated with tasks assigned to a respective node. The process includes updating the one or more metrics based on one or more previous values of the one or more metrics as a new event is added or an existing event is expired from the at least one window. | 05-05-2022 |
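The incremental-update idea above can be sketched with a windowed mean maintained from its previous value as events arrive and expire, instead of rescanning the window. The in-memory deque below stands in for the reservoir's tracking of the window's beginning and ending events; field names and the 60-second window are illustrative.

```python
# Maintain a sliding-window mean incrementally: adjust the running total as
# events enter the window or expire out of it.
from collections import deque

class SlidingWindowMean:
    def __init__(self, window_seconds: float):
        self.window = window_seconds
        self.events = deque()          # (timestamp, value), oldest first
        self.total = 0.0

    def add(self, ts: float, value: float) -> float:
        self.events.append((ts, value))
        self.total += value            # update metric from its previous value
        self._expire(ts)
        return self.total / len(self.events)

    def _expire(self, now: float) -> None:
        while self.events and self.events[0][0] < now - self.window:
            _, old = self.events.popleft()
            self.total -= old          # expired event leaves the window

w = SlidingWindowMean(window_seconds=60.0)
print(w.add(0.0, 10.0))    # -> 10.0
print(w.add(30.0, 20.0))   # -> 15.0
print(w.add(90.0, 30.0))   # first event expired -> (20+30)/2 = 25.0
```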
20220138011 | NON-TRANSITORY COMPUTER-READABLE MEDIUM, MANAGEMENT APPARATUS, RELAY APPARATUS AND DISPLAY CONTROL METHOD - A non-transitory computer-readable medium storing a display control program readable by a computer of a relay apparatus of a terminal management system having a management apparatus, the relay apparatus, and a terminal apparatus, the display control program, when executed by the computer, causing the relay apparatus to perform: obtaining, from the management apparatus, execution timing information indicating an execution timing of a relay application configured to relay data between the management apparatus and the terminal apparatus via the relay apparatus, the relay application being activated by a user's activation operation performed on the relay apparatus; and based on the execution timing information obtained, setting a reservation for executing display processing of displaying, on the relay apparatus, an activation notification image prompting the user to activate the relay application when the execution timing arrives. | 05-05-2022 |