Entries |
Document | Title | Date |
20080201713 | Project Management System - A method and apparatus for managing a project are described. According to one embodiment, the method includes the steps of ranking the plurality of tasks to produce a first list; assigning a task cost to each of the plurality of tasks; setting a planned velocity, the planned velocity determining the rate at which task costs are planned to be completed per time segment; and dynamically assigning each of the plurality of tasks to one of the sequence of time segments in the order indicated by the first list based on the planned velocity. In other embodiments, the apparatus includes a machine-readable medium that provides instructions for a processor, which when executed by the processor cause the processor to perform a method of the present invention. | 08-21-2008 |
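As a rough illustration of the velocity-based assignment this abstract describes, the following Python sketch greedily packs ranked, costed tasks into time segments; the function name and the greedy packing rule are assumptions of this illustration, not the patent's specification.

```python
def assign_to_segments(tasks, velocity):
    """Assign already-ranked (name, cost) tasks to time segments.

    Each segment accepts tasks until the planned velocity (task cost
    planned per segment) would be exceeded, at which point the next
    segment is started. Greedy packing is an illustrative assumption.
    """
    segments = [[]]
    remaining = velocity
    for name, cost in tasks:
        if cost > remaining and segments[-1]:
            segments.append([])       # start the next time segment
            remaining = velocity
        segments[-1].append(name)
        remaining -= cost
    return segments
```

For example, with a planned velocity of 5, tasks costing 3, 2, and 4 fill the first segment with the first two tasks and push the third into a second segment.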
20080201714 | INFORMATION PROCESSING APPARATUS FOR CONTROLLING INSTALLATION, METHOD FOR CONTROLLING THE APPARATUS AND CONTROL PROGRAM FOR EXECUTING THE METHOD - A server apparatus manages a device driver for enabling any of a plurality of devices to which a plurality of client apparatuses are connected on a network. The server apparatus comprises a storage unit that stores, for each device, a device driver that can be installed to the device in association with the device, a generating unit that generates different tasks for any of the stored device drivers, a creating unit that creates a schedule for executing the generated tasks, and an executing unit that executes the generated tasks based on the created schedule. | 08-21-2008 |
20080209426 | APPARATUS FOR RANDOMIZING INSTRUCTION THREAD INTERLEAVING IN A MULTI-THREAD PROCESSOR - A processor interleaves instructions according to a priority rule which determines the frequency with which instructions from each respective thread are selected and added to an interleaved stream of instructions to be processed in the data processor. The frequency with which each thread is selected according to the rule may be based on the priorities assigned to the instruction threads. A randomization is inserted into the interleaving process so that the selection of an instruction thread during any particular clock cycle is not based solely on the priority rule, but is also based in part on a random or pseudo-random element. This randomization is inserted into the instruction thread selection process so as to vary the order in which instructions are selected from the various instruction threads while preserving the overall frequency of thread selection (i.e. how often threads are selected) set by the priority rule. | 08-28-2008 |
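The randomized, priority-weighted selection this abstract describes can be sketched as a weighted random choice, so that the long-run selection frequency tracks the priorities while the per-cycle choice varies. This is an illustrative software analogue of the patented hardware logic, not the claimed implementation.

```python
import random

def select_thread(threads, priorities, rng=random):
    """Pick the thread to issue from on this cycle.

    Selection probability is proportional to each thread's priority,
    so the overall frequency of thread selection matches the priority
    rule while any single cycle's choice remains randomized.
    """
    total = sum(priorities)
    r = rng.random() * total
    for thread, p in zip(threads, priorities):
        if r < p:
            return thread
        r -= p
    return threads[-1]   # guard against floating-point edge cases
```

With priorities 3:1, the first thread is selected roughly three quarters of the time over many cycles, yet which thread wins a particular cycle is not predictable.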
20080222640 | Prediction Based Priority Scheduling - Systems and methods are provided that schedule task requests within a computing system based upon the history of task requests. The history of task requests can be represented by a historical log that monitors the receipt of high priority task request submissions over time. This historical log in combination with other user defined scheduling rules is used to schedule the task requests. Task requests in the computer system are maintained in a list that can be divided into a hierarchy of queues differentiated by the level of priority associated with the task requests contained within that queue. The user-defined scheduling rules give scheduling priority to the higher priority task requests, and the historical log is used to predict subsequent submissions of high priority task requests so that lower priority task requests that would interfere with the higher priority task requests will be delayed or will not be scheduled for processing. | 09-11-2008 |
20080229316 | DATA PROCESSING DEVICE AND ELECTRONIC DEVICE - A data processing device includes: an execution unit; and a memory unit, wherein the memory unit stores a plurality of pre-processing data on which a processing is to be rendered at a plurality of times prior to a specified time; (1) when a value of specified pre-processing data at the specified time is in a range between a maximum value and a minimum value among values of the plurality of pre-processing data, the execution unit renders the processing on the specified pre-processing data; and (2) when the value of the specified pre-processing data is greater than the maximum value or smaller than the minimum value, the execution unit renders the processing on an arbitrary value that is deemed substantively in the range between the maximum value and the minimum value, instead of the value of the specified pre-processing data. | 09-18-2008 |
20080229317 | Method for optimizing a link schedule - A method for improving a link schedule used in a communications network is disclosed. While the method applies generally to networks that operate on a scheduled communications basis, it is described in the context of a Foundation FIELDBUS. The method includes: scheduling sequences and their associated publications according to their relative priority, per application; minimizing delays between certain function blocks, and between certain function blocks and publications; and grouping certain publications. Accordingly, advantages such as latency reduction, schedule length reduction, and improved communications capacity are gained. | 09-18-2008 |
20080235693 | Methods and apparatus for window-based fair priority scheduling - A system provides a task scheduler to define a priority queue with at least one window and a queue-window key. Each window is an ordered collection of tasks in a task pool of the priority queue and is identified by the queue-window key. The task scheduler sets a task-window key equal to a user-window key when the user-window key is greater than the minimum queue-window key. The task scheduler can further set the task-window key equal to the minimum queue-window key when the user-window key is less than the minimum queue-window key. A maximum task limit per user for each window and a priority increment for the user-window key are further applied to ensure fair scheduling. | 09-25-2008 |
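The two clamping rules for the task-window key in this abstract reduce to taking a maximum; a minimal sketch, assuming numeric keys:

```python
def task_window_key(user_window_key, min_queue_window_key):
    """Set the task-window key per the abstract's two rules:
    the user-window key when it exceeds the minimum queue-window key,
    otherwise the minimum queue-window key itself."""
    return max(user_window_key, min_queue_window_key)
```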
20080235694 | Method of Launching Low-Priority Tasks - A driver is provided to manage launching of tasks at different levels of priority and within the parameters of the firmware interface. The driver includes two anchors for managing the tasks, a dispatcher and an agent. The dispatcher operates at a medium priority level and manages communication from a remote administrator. The agent functions to receive communications from the dispatcher by way of a shared data structure and to launch lower priority level tasks in response to the communication. The shared data structure stores communications received from the dispatcher. Upon placing the communication in the shared data structure, the dispatcher sends a signal to the agent indicating that a communication is in the data structure for reading by the agent. Following reading of the communication in the data structure, the agent launches the lower priority level task and sends a signal to the data structure indicating the status of the task. Accordingly, a higher level task maintains its level of operation and spawns lower level tasks through the dispatcher in conjunction with the agent. | 09-25-2008 |
20080235695 | RESOURCE ALLOCATION SYSTEM FOR JOBS, RESOURCE ALLOCATION METHOD AND RESOURCE ALLOCATION PROGRAM FOR JOBS - A resource allocation system for jobs includes: a timer for notifying switch of priority jobs in a priority period based on a predetermined processor priority allocation time of each job; a dispatcher for taking out a head process from a ready queue which is a queue of a process corresponding to a job selected as a priority job and being executable by an information processing system, for each job, based on the notification, and for allocating it to an instruction execution unit; and the instruction execution unit for executing an instruction of an executing process which is an allocated process. | 09-25-2008 |
20080235696 | ACCESS CONTROL APPARATUS AND ACCESS CONTROL METHOD - The disclosed access control apparatus and method control an I/O device to perform processing of access requests in a predetermined order, including inputting access requests from multiple tasks to cause the I/O device to perform file processing, storing and managing information about file priorities, obtaining a file priority corresponding to an access request, managing a queue having multiple queues for which the processing priorities corresponding to the file priorities are set and causing the access request to be stored in any of the queues corresponding to the file priority, and obtaining the access requests stored in the queues in an order based on the processing priorities set for the queues and sending the access requests to the I/O device. | 09-25-2008 |
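A minimal sketch of the queue-per-file-priority dispatch this abstract describes, assuming lower numbers mean higher processing priority and FIFO order within a queue (both assumptions of this illustration):

```python
def enqueue(queues, request, file_priority):
    """Store the access request in the queue matching its file priority."""
    queues.setdefault(file_priority, []).append(request)

def next_request(queues):
    """Return the next request to send to the I/O device: queues are
    drained in processing-priority order (lower number = higher
    priority here), FIFO within each queue; None if all are empty."""
    for prio in sorted(queues):
        if queues[prio]:
            return queues[prio].pop(0)
    return None
```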
20080235697 | Job scheduler, job scheduling method, and job control program storage medium - To provide a job scheduler, a job scheduling method, and a job control program that are capable of, even with an incapable CPU not equipped with a real-time OS, meeting the basic real-time property that is required in a system. | 09-25-2008 |
20080235698 | METHOD AND APPARATUS FOR ASSIGNING CANDIDATE PROCESSING NODES IN A STREAM-ORIENTED COMPUTER SYSTEM - A method of choosing jobs to run in a stream based distributed computer system includes determining jobs to be run in a distributed stream-oriented system by deciding a priority threshold above which jobs will be accepted, below which jobs will be rejected. Overall importance is maximized relative to the priority threshold based on importance values assigned to all jobs. System constraints are applied to ensure jobs meet set criteria. | 09-25-2008 |
20080244592 | MULTITASK PROCESSING DEVICE AND METHOD - There is provided with a multitask processing device for processing a plurality of tasks by multitask, the tasks being each split into at least two sections, including: a stable set storage configured to store a stable set including one or more section combinations; a program execution state calculator configured to calculate, for each of the tasks, a program execution state including a section where execution is to start when the task is next executed and current sections of other tasks different from the task among the tasks; a distance calculating unit configured to calculate a distance between each of the program execution states and the stable set; and a task execution unit configured to select and execute a next task to be executed next based on calculated distances. | 10-02-2008 |
20080244593 | TASK ROSTER - A task roster. A task roster can include a visual list of component tasks, the component tasks collectively forming a high-level task; a specified sequence in which the component tasks are to be performed; and, one or more visual status indicators, each visual status indicator having a corresponding component task, each visual status indicator further indicating whether the corresponding component task has been performed in the specified sequence. The task roster also can include a component task initiator configured to launch a selected component task in the visual list of component tasks upon a user-selection of the selected component task. | 10-02-2008 |
20080250415 | Priority based throttling for power/performance Quality of Service - A method and apparatus for throttling power and/or performance of processing elements based on a priority of software entities is herein described. Priority aware power management logic receives priority levels of software entities and modifies operating points of processing elements associated with the software entities accordingly. Therefore, in a power savings mode, processing elements executing low priority applications/tasks are reduced to a lower operating point, i.e. lower voltage, lower frequency, throttled instruction issue, throttled memory accesses, and/or less access to shared resources. In addition, utilization logic potentially tracks utilization of a resource per priority level, which allows the power manager to determine operating points based on the effect of each priority level on each other from the perspective of the resources themselves. Moreover, a software entity itself may assign operating points, which the power manager enforces. | 10-09-2008 |
20080256544 | Stateless task dispatch utility - Computer resource management techniques involving receiving notification of an available resource, generating a set of tasks that could be performed by the resource, and dispatching one of the tasks on the resource. Related systems and software are also discussed. Some techniques can be used for automatic software building and testing. | 10-16-2008 |
20080263555 | Task Processing Scheduling Method and Device for Applying the Method - The invention relates to a method for scheduling the processing of tasks and to the associated device, the processing of a task comprising a step for configuring resources required for executing the task and a step for executing the task on the thereby configured resources, the method comprising a selection ( | 10-23-2008 |
20080271027 | Fair share scheduling with hardware multithreading - An embodiment of the invention provides an apparatus and method for fair share scheduling with hardware multithreading. The apparatus and method include the acts of: executing, by a first hardware thread in a processor core, a first software thread belonging to a fair share group; and permitting a second hardware thread in the processor core to execute a second software thread if that second software thread belongs to the fair share group. | 10-30-2008 |
20080271028 | INFORMATION PROCESSING APPARATUS - According to one embodiment, an information processing apparatus, executing a process including a plurality of threads for reproduction of moving image data, includes a storage unit which stores priority information indicating a priority of a process of each of threads upon executing the process of the plurality of threads, and a processing unit which reads the priority information from the storage unit, reads the read priority information as a definition file, and executes the process of each of the threads in accordance with the priority information of the definition file. | 10-30-2008 |
20080271029 | Thread Scheduling with Weak Preemption Policy - Thread scheduling with a weak preemption policy is provided. The scheduler receives requests from newly ready work. The scheduler adds a “preempt value” to the current work's priority so that it is somewhat increased for preemption purposes. The preempt value can be adjusted in order to make it more, or less, difficult for newly ready work to preempt the current work. A “less strict” preemption policy allows current work to complete rather than interrupting the current work and resume it at a later time, thus saving system overhead. Newly ready work that is queued with a better priority than the current work is queued in a favorable position to be executed after the current work is completed but before other work that has been queued with the same priority of the current work. | 10-30-2008 |
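The preempt-value test this abstract describes can be sketched in a few lines; the sketch assumes a higher number means better priority, which the abstract does not specify:

```python
def should_preempt(current_priority, new_priority, preempt_value):
    """Weak preemption: newly ready work preempts only if its priority
    beats the current work's priority *plus* the preempt value.
    Raising preempt_value makes preemption harder; lowering it makes
    preemption easier."""
    return new_priority > current_priority + preempt_value
```

Work that is better than the current work but not better than the boosted value is simply queued ahead of equal-priority work, as the abstract describes, rather than interrupting the current work.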
20080276241 | DISTRIBUTED PRIORITY QUEUE THAT MAINTAINS ITEM LOCALITY - A method of administering a distributed priority queue structure that includes removing a highest priority item from a current root node of a tree structure to create a temporary root node, determining for each subtree connected to the temporary root node a subtree priority comprising the priority of the highest priority data item in the each subtree, determining as the highest priority subtree connected to the temporary root node the subtree connected to the temporary root node having the highest subtree priority, determining whether any of the one or more data items stored at the temporary root node has a higher priority than the highest subtree priority and directing an arrow to the subtree having the highest priority or to the temporary root itself if the priority of the data items stored at temporary root is higher than the priorities of the connected subtrees. | 11-06-2008 |
20080276242 | Method For Dynamic Scheduling In A Distributed Environment - A method and system is provided for assigning programs in a workflow to one or more nodes for execution. Prior to the assignment, a priority of execution of each program is calculated in relation to its dependency upon data received and transmitted data. Based upon the calculated priority and the state of each of the nodes, the programs in the workflow are dynamically assigned to one or more nodes for execution. In addition to the node assignment based upon priority, preemptive execution of the programs in the workflow is determined so that the programs in the workflow may not preemptively be executed at a selected node in response to the determination. | 11-06-2008 |
20080282251 | THREAD DE-EMPHASIS INSTRUCTION FOR MULTITHREADED PROCESSOR - A technique for scheduling execution of threads at a processor is disclosed. The technique includes executing a thread de-emphasis instruction of a thread that de-emphasizes the thread until the number of pending memory transactions, such as cache misses, associated with the thread is at or below a threshold. While the thread is de-emphasized, other threads at the processor that have a higher priority can be executed or assigned system resources. Accordingly, the likelihood of a stall in the processor is reduced. | 11-13-2008 |
20080288945 | METHOD AND SYSTEM FOR ANALYZING INTERRELATED PROGRAMS - A method for analyzing a program having a budget, an implementation schedule and a deployment plan includes: (a) in no particular order: (1) providing a digital representation of the budget including first entries; (2) providing a digital representation of the schedule including second entries having a first relation with the first entries; and (3) providing a digital representation of the deployment plan including third entries having at least one second relation with at least one of the first and second entries; (b) establishing an expression embodying the first and second relations; (c) exercising the expression to alter at least one altered entry of the selected first, second, and third entries; and (d) observing at least one entry of the selected first, second, and third entries other than the at least one altered entry. | 11-20-2008 |
20080288946 | States matrix for workload management simplification - A computer-implemented method, system and article of manufacture for managing workloads in a computer system, comprising monitoring system conditions and operating environment events that impact on the operation of the computer system, using an n-dimensional state matrix to identify at least one state resulting from the monitored system conditions and operating environment events, and initiating an action in response to the identified state. | 11-20-2008 |
20080288947 | SYSTEMS AND METHODS OF DATA STORAGE MANAGEMENT, SUCH AS DYNAMIC DATA STREAM ALLOCATION - A system and method for choosing a stream to transfer data is described. In some cases, the system reviews running data storage operations and chooses a data stream based on the review. In some cases, the system chooses a stream based on the load of data to be transferred. | 11-20-2008 |
20080288948 | SYSTEMS AND METHODS OF DATA STORAGE MANAGEMENT, SUCH AS DYNAMIC DATA STREAM ALLOCATION - A system and method for choosing a stream to transfer data is described. In some cases, the system reviews running data storage operations and chooses a data stream based on the review. In some cases, the system chooses a stream based on the load of data to be transferred. | 11-20-2008 |
20080295104 | Realtime Processing Software Control Device and Method | 11-27-2008 |
20080295105 | Data processing apparatus and method for managing multiple program threads executed by processing circuitry | 11-27-2008 |
20080301687 | SYSTEMS AND METHODS FOR ENHANCING PERFORMANCE OF A COPROCESSOR - Techniques for minimizing coprocessor “starvation,” and for effectively scheduling processing in a coprocessor for greater efficiency and power. A run list is provided allowing a coprocessor to switch from one task to the next, without waiting for CPU intervention. A method called “surface faulting” allows a coprocessor to fault at the beginning of a large task rather than somewhere in the middle of the task. DMA control instructions, namely a “fence,” a “trap” and an “enable/disable context switching,” can be inserted into a processing stream to cause a coprocessor to perform tasks that enhance coprocessor efficiency and power. These instructions can also be used to build high-level synchronization objects. Finally, a “flip” technique is described that can switch a base reference for a display from one location to another, thereby changing the entire display surface. | 12-04-2008 |
20080320481 | Method and Apparatus for Playing Dynamic Content - A method for playing dynamic content includes: allocating and occupying playing resources for the playing of dynamic contents by dynamic content priority; and preempting playing resources occupied by lower-priority dynamic contents so that higher-priority dynamic contents are played back first. The dynamic contents whose playing resources are preempted can be handled in accordance with the preset processing policy. A playing apparatus for playing dynamic content includes a content receiving module, a storage unit, a play scheduling module, a content playing module, and a user configuration module. The present invention supports automatic playing of dynamic contents by priority and in accordance with the policy preset by the user, and can be implemented simply and conveniently. | 12-25-2008 |
20090007123 | Dynamic Application Scheduler in a Polling System - A dynamic scheduling system is provided that comprises a processor, a polling task, a work task, and a scheduler assistant task. The polling task is configured for execution by the processor, wherein the polling task executes during a first CPU time window and sleeps during a second CPU time window. The work task is configured for an execution during the second CPU time window. The scheduler assistant (SA) task has an execution state to indicate to the polling task a status of the execution of the work task. The SA task is configured to run if the work task runs to completion within the second CPU time window. | 01-01-2009 |
20090007124 | Method and mechanism for memory access synchronization - The present invention is a method and mechanism for multiprocessor synchronization. Calling the global memory fence (GMF) service causes an asynchronous memory fence to be executed on the other processors. By guaranteeing that the asynchronous memory fence (AMF), or its equivalent, is executed on the other processors within the window of the global memory fence (GMF) service call, the expensive memory ordering semantics can be removed from the critical path of frequently-executed application code. Therefore, the overall performance is improved on modern processor architectures. | 01-01-2009 |
20090031317 | SCHEDULING THREADS IN MULTI-CORE SYSTEMS - Scheduling of threads in a multi-core system is performed using per-processor queues for each core to hold threads with fixed affinity for each core. Cores are configured to pick the highest priority thread among the global run queue, which holds threads without affinity, and their respective per-processor queue. To select between two threads with same priority on both queues, the threads are assigned sequence numbers based on their time of arrival. The sequence numbers may be weighted for either queue to prioritize one over the other. | 01-29-2009 |
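A toy version of the two-queue selection with weighted arrival sequence numbers might look like the following; the class and method names, and the use of Python's `heapq`, are illustrative assumptions rather than the patent's design.

```python
import heapq
import itertools

class CoreScheduler:
    """Sketch: a core picks the best thread from the global run queue
    (threads without affinity) versus its own per-processor queue.
    Ties on priority are broken by arrival sequence number; a weight
    on the local queue's sequence numbers lets one queue be preferred
    over the other, as the abstract describes."""
    _seq = itertools.count()   # global arrival order across all queues

    def __init__(self, weight=1.0):
        self.global_q, self.local_q = [], []
        self.weight = weight   # scales local-queue sequence numbers

    def add(self, thread, priority, local=False):
        seq = next(self._seq)
        q = self.local_q if local else self.global_q
        # heapq is a min-heap: negate priority so higher priority pops first
        key = seq * self.weight if local else float(seq)
        heapq.heappush(q, (-priority, key, thread))

    def pick(self):
        candidates = [q[0] for q in (self.global_q, self.local_q) if q]
        if not candidates:
            return None
        best = min(candidates)
        q = self.global_q if self.global_q and self.global_q[0] == best \
            else self.local_q
        return heapq.heappop(q)[2]
```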
20090031318 | APPLICATION COMPATIBILITY IN MULTI-CORE SYSTEMS - Scheduling of threads in a multi-core system running various legacy applications along with multi-core compatible applications is configured such that threads from older single thread applications are assigned fixed affinity. Threads from multi-thread/single core applications are scheduled such that one thread at a time is made available to the cores based on the thread priority preventing conflicts and increasing resource efficiency. Threads from multi-core compatible applications are handled regularly. | 01-29-2009 |
20090031319 | TASK SCHEDULING METHOD AND APPARATUS - A method of scheduling execution of a plurality of tasks by a processor, the processor having a processor memory, the processor being arranged to load into the processor memory, during execution of a current task, data for a task that is scheduled for execution after the processor has completed the current task, the method comprising the steps of scheduling a next task for execution by the processor after the processor has completed a current task, and determining whether there is a high priority task to be executed by the processor, if there is a high priority task to be executed by the processor: determining whether the processor has begun loading the data for the next task into the processor memory, and if the processor has not begun loading the data for the next task into the processor memory, scheduling the high priority task, instead of the next task, for execution by the processor after the processor has completed the current task. | 01-29-2009 |
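The decision rule in this abstract, whether a late-arriving high-priority task may displace the already-scheduled next task, depends only on whether the data prefetch for the next task has begun. A one-function sketch:

```python
def schedule_next(next_task, high_priority_task, load_started):
    """If a high-priority task is pending and the processor has NOT yet
    begun loading the next task's data into processor memory, the
    high-priority task takes the next slot; otherwise the already
    prefetched next task keeps it."""
    if high_priority_task is not None and not load_started:
        return high_priority_task
    return next_task
```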
20090037918 | Thread sequencing for multi-threaded processor with instruction cache - Execution of the first thread of a new program is prioritized ahead of older threads for a previously running program. The new program is invoked during the execution of a thread of the previous program. The first thread of the program is prioritized ahead of the remaining threads of the previous program. In an embodiment of the invention, additional threads of the new program are also prioritized ahead of the older threads. A thread's context may include a table of constant values that can be referenced by each program and are shared by multiple threads. Changing the values in a constant table for a new thread is time intensive. To avoid changes to the constant table (and thereby save time), a higher priority status is conferred to the first thread that follows a change to the constant table. | 02-05-2009 |
20090037919 | Information-Theoretic View of the Scheduling Problem in Whole-Body Computer Aided Detection/Diagnosis (CAD) - A method for automatically scheduling tasks in whole-body computer aided detection/diagnosis (CAD), including: (a) receiving a plurality of tasks to be executed by a whole-body CAD system; (b) identifying a task to be executed, wherein the task to be executed has an expected information gain that is greater than that of each of the other tasks; (c) executing the task with the greatest expected information gain and removing the executed task from further analysis; and (d) repeating steps (b) and (c) for the remaining tasks. | 02-05-2009 |
20090049446 | PROVIDING QUALITY OF SERVICE VIA THREAD PRIORITY IN A HYPER-THREADED MICROPROCESSOR - A method and apparatus for providing quality of service in a multi-processing element environment based on priority is herein described. Consumption of resources, such as a reservation station and a pipeline, is biased towards a higher priority processing element. In a reservation station, mask elements are set to provide access for higher priority processing elements to more reservation entries. In a pipeline, bias logic provides a ratio of preference for selection of a high priority processing element. | 02-19-2009 |
20090049447 | METHODS AND SYSTEMS FOR CARE READINESS - Provided are methods and systems for generating a care plan. The methods, which can be implemented as a Parent Care Readiness Program (PCR-P), can use information and resources to improve caregiving readiness for imminent and active care givers. In an aspect, the Parent Care Readiness program can comprise two complementary, automated, comprehensive, evidence-based assessments of the landscape of caregiving tasks, one from the adult child's perspective and one from the parent's, and a tailored intervention program that care givers and care receivers can discuss and implement. | 02-19-2009 |
20090055828 | Profile engine system and method - A system for profile record generation of input records, the system comprising: a record processor which converts the input records into data records suitable for the profile record generation; and a statistics engine for the generation of profile records based on the data records. Furthermore, system optimization can be obtained by use of a task control method that sub-divides the aggregations of profile records into units of work that can be individually performed, the method comprising: partitioning based on a pre-determined partitioning key associated with entities to be profiled, wherein the association between the partitioning key and the entities being profiled is varied in order to optimize the profiling performance. | 02-26-2009 |
20090055829 | METHOD AND APPARATUS FOR FINE GRAIN PERFORMANCE MANAGEMENT OF COMPUTER SYSTEMS - A system and method to control the allocation of processor (or state machine) execution resources to individual tasks executing in computer systems is described. By controlling the allocation of execution resources to all tasks, each task may be provided with throughput and response time guarantees. This control is accomplished through workload metering shaping, which delays the execution of tasks that have used their workload allocation until sufficient time has passed to accumulate credit for execution (credit accumulates over time to perform their allocated work), and workload prioritization, which gives preference to tasks based on configured priorities. | 02-26-2009 |
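Workload metering shaping as described resembles a credit (token-bucket-style) scheme; a hedged sketch, with class and field names that are assumptions rather than the patent's terminology:

```python
class WorkloadMeter:
    """Credit-accrual sketch: a task accumulates execution credit over
    time at its allocated rate and may run only while it has credit,
    which delays tasks that have already used their workload
    allocation until sufficient credit accrues."""

    def __init__(self, rate, burst):
        self.rate = rate      # credit earned per time tick
        self.burst = burst    # maximum credit that can accumulate
        self.credit = burst

    def tick(self):
        """Advance one time unit, accruing credit up to the burst cap."""
        self.credit = min(self.burst, self.credit + self.rate)

    def try_run(self, cost):
        """Run work of the given cost if enough credit is available;
        otherwise the task is delayed (returns False)."""
        if self.credit >= cost:
            self.credit -= cost
            return True
        return False
```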
20090064153 | COMMAND SELECTION METHOD AND ITS APPARATUS, COMMAND THROW METHOD AND ITS APPARATUS - When selecting one command within a processor from a plurality of command queues vested with order of priority, the order of priority assigned to the plurality of command queues is dynamically changed so as to select a command, on a priority basis, from a command queue vested with a higher priority from among the plurality of command queues in accordance with the post-change order of priority. | 03-05-2009 |
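One plausible reading of the dynamically changed order of priority among command queues is a rotate-after-service rule, sketched below; the rotation policy and function name are assumptions of this illustration:

```python
def select_command(queues, order):
    """Pop a command from the first non-empty queue in the current
    priority order, then rotate the order so the serviced queue drops
    to lowest priority. Returns (command, new_order); (None, order)
    if every queue is empty."""
    for i, q_idx in enumerate(order):
        if queues[q_idx]:
            cmd = queues[q_idx].pop(0)
            new_order = order[:i] + order[i + 1:] + [q_idx]
            return cmd, new_order
    return None, order
```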
20090064154 | IMAGE RECONSTRUCTION SYSTEM WITH MULTIPLE PARALLEL RECONSTRUCTION PIPELINES - In a method, system, computer-readable medium and watchdog module to control a number of medical technology processes that are executed in multiple computerized pipelines according to a predetermined organizational structure, a priority is associated with an incoming process, with a high priority and multiple low priorities being provided. A process with a high priority is executed in a priority pipeline among the multiple pipelines. | 03-05-2009 |
20090064155 | TASK MANAGER AND METHOD FOR MANAGING TASKS OF AN INFORMATION SYSTEM - Information about a device may be emotively conveyed to a user of the device. Input indicative of an operating state of the device may be received. The input may be transformed into data representing a simulated emotional state. Data representing an avatar that expresses the simulated emotional state may be generated and displayed. A query from the user regarding the simulated emotional state expressed by the avatar may be received. The query may be responded to. | 03-05-2009 |
20090070765 | XML-BASED CONFIGURATION FOR EVENT PROCESSING NETWORKS - An event server running an event driven application implements an event processing network. The event processing network can include at least one processor to implement a rule and at least one input stream. Priority for parts of the event processing network can be settable by a user. | 03-12-2009 |
20090083745 | Techniques for Maintaining Task Sequencing in a Distributed Computer System - A technique for operating a distributed computer system includes receiving one or more current processing task elements. Each of the one or more current processing task elements is associated with a different task that is currently being processed in a server cluster. A first task element is selected from the one or more current processing task elements, and respective servers in the server cluster are requested to update pending task elements, including the one or more current processing task elements, based on the first task element. | 03-26-2009 |
20090083746 | METHOD FOR JOB MANAGEMENT OF COMPUTER SYSTEM - A method for job management of a computer system, a job management system, and a computer-readable recording medium are provided. When the number of free computing nodes in a cluster of the computer system is smaller than the number of computing nodes required for a first job, the method includes selecting, as a second job, a running job that is lower in priority than the first job and that uses at least as many computing nodes as the deficit caused by the first job; suspending all processes of the second job and executing the first job on the nodes that were used by the second job together with the free nodes; and resuming execution of the second job after execution of the first job is completed. | 03-26-2009 |
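The preemptive selection rule in the abstract above can be sketched briefly. This is a minimal, illustrative Python sketch, not the patented method itself; the `Job` fields and helper name are assumptions.

```python
# Hypothetical sketch of the job-preemption selection rule: when free nodes
# are insufficient for a high-priority job, pick a lower-priority running job
# that occupies at least the missing number of nodes.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    priority: int   # higher value = higher priority (assumption)
    nodes: int      # computing nodes the job occupies or requires

def pick_job_to_suspend(running, first_job, free_nodes):
    """Return a running job to suspend, or None if no preemption is needed."""
    deficit = first_job.nodes - free_nodes
    if deficit <= 0:
        return None  # enough free nodes; the first job runs without preemption
    candidates = [j for j in running
                  if j.priority < first_job.priority and j.nodes >= deficit]
    # Prefer the lowest-priority candidate to minimize disruption.
    return min(candidates, key=lambda j: j.priority) if candidates else None
```

After the first job completes on the freed nodes, the suspended job would be resumed, as the abstract describes.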
20090089786 | SEMICONDUCTOR INTEGRATED CIRCUIT DEVICE FOR REAL-TIME PROCESSING - A technology is provided for efficiently performing processes with limited resources in an LSI where a plurality of real-time applications are processed in parallel. To this end, a mechanism is provided in which a plurality of processes to be executed on a plurality of processing units in the LSI are managed throughout the LSI in a unified manner. For each managed process, a priority is calculated based on the state of progress of the process, and the execution of the process is controlled according to the priority. A resource management unit (IRM) or program is provided that collects information such as process state from each of the processing units executing the processes and calculates a priority for each process. Also provided are a programmable interconnect unit and storage means for controlling the process execution sequence according to the priority. | 04-02-2009 |
20090100431 | DYNAMIC BUSINESS PROCESS PRIORITIZATION BASED ON CONTEXT - Instantiated business processes are dynamically prioritized to an execution priority level based upon a priority relevant context associated with the business process. The business process instance is further executed based upon the execution priority level. The execution priority level for the business process instance may be determined using at least one of a table lookup, a rule or an algorithm to determine the execution priority level. Moreover, the execution priority level may be set based upon available priority levels in a priority band. Still further, detected changes in the priority relevant context may trigger changing the execution priority level based upon the change in the priority relevant context. Resources allocated to implement the business process instance may also be dynamically adjusted based upon changes to the execution priority level of an associated business process instance. | 04-16-2009 |
20090100432 | FORWARD PROGRESS MECHANISM FOR A MULTITHREADED PROCESSOR - A processing device includes a storage component configured to store instructions associated with a corresponding thread of a plurality of threads, and an execution unit configured to fetch and execute instructions. The processing device further includes a period timer comprising an output to provide an indicator in response to a count value of the period timer reaching a predetermined value based on a clock signal. The processing device additionally includes a plurality of thread forward-progress counter components, each configured to adjust a corresponding execution counter value based on an occurrence of a forward-progress indicator while instructions of a corresponding thread are being executed. The processing device further includes a thread select module configured to select threads of the plurality of threads for execution by the execution unit based on a state of the period timer and a state of each of the plurality of thread forward-progress counter components. | 04-16-2009 |
20090100433 | DISK SCHEDULING METHOD AND APPARATUS - The present invention relates to a method and apparatus for scheduling requests having priorities and deadlines for an I/O operation on a disk storage medium. Requests are normally arranged and processed in deadline order, and requests whose process times based on deadlines overlap each other are processed in priority order. Therefore, it is possible to prevent processing of any requests having relatively higher priorities from being delayed due to a process based on deadline order. Further, in order to minimize seek time, the requests may also be processed in the scanning order. Furthermore, in order to minimize a time required for performing request search and arrangement in the scanning order and the deadline order, a deadline queue where requests are arranged in deadline order and a scan order queue where requests are arranged in the scanning order may be separately prepared. | 04-16-2009 |
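The dual-queue idea in the disk scheduling abstract above — serve in scan order to minimize seek time, but fall back to deadline order (ties broken by priority) when a deadline is at risk — can be sketched as follows. This is an illustrative Python sketch under assumed names; the actual patented queues and policies may differ.

```python
# Sketch of a disk scheduler keeping two views of the same requests:
# a deadline queue (earliest deadline first, ties by higher priority)
# and a scan-order queue (ordered by block position to minimize seeks).
import heapq

class DiskScheduler:
    def __init__(self):
        self.deadline_q = []  # entries: (deadline, -priority, block)
        self.scan_q = []      # entries: (block, deadline, priority)

    def add(self, block, deadline, priority):
        heapq.heappush(self.deadline_q, (deadline, -priority, block))
        heapq.heappush(self.scan_q, (block, deadline, priority))

    def next_request(self, now, service_time):
        """Serve in scan order while the most urgent deadline is still safe;
        otherwise serve the earliest-deadline request."""
        deadline, _neg_prio, block = self.deadline_q[0]
        if now + service_time > deadline:            # urgent: honor deadline
            heapq.heappop(self.deadline_q)
            self.scan_q = [r for r in self.scan_q if r[0] != block]
            heapq.heapify(self.scan_q)
            return block
        block2, _d, _p = heapq.heappop(self.scan_q)  # safe: minimize seek
        self.deadline_q = [r for r in self.deadline_q if r[2] != block2]
        heapq.heapify(self.deadline_q)
        return block2
```

Keeping both orderings precomputed, as the abstract suggests, avoids re-sorting the request set on every scheduling decision.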
20090100434 | TRANSACTION MANAGEMENT - A method and transaction processing system for managing transaction processing tasks are provided. The transaction processing system comprises a transaction log, a log management policy, a log manager and a dispatcher. The method comprises maintaining a transaction log of recoverable changes made by transaction processing tasks and storing a log management policy including at least one log threshold. Usage of the log by transaction processing tasks is then monitored to determine when a log threshold is reached. When a log threshold is reached the active task having the oldest log entry of all active tasks is identified and its dispatching priority is increased. This increases the likelihood that the identified task will be dispatched, and should mean that the task will more quickly reach normal completion. | 04-16-2009 |
20090106761 | Programmable Controller with Multiple Processors Using a Scanning Architecture - Operating a programmable controller with a plurality of processors. The programmable controller may utilize a first subset of the plurality of processors for a scanning architecture. The first subset of the plurality of processors may be further subdivided for execution of periodic programs or asynchronous programs. The programmable controller may utilize a second subset of the plurality of processors for a data acquisition architecture. Execution of the different architectures may occur independently and may not introduce significant jitter (e.g., for the scanning architecture) or data loss/response time lag (e.g., for the data acquisition architecture). However, the programmable controller may operate according to any combination of the divisions and/or architectures described herein. | 04-23-2009 |
20090106762 | Scheduling Threads In A Multiprocessor Computer - Methods, systems, and computer program products are provided for scheduling threads in a multiprocessor computer. Embodiments include selecting a thread in a ready queue to be dispatched to a processor and determining whether an interrupt mask flag is set in a thread control block associated with the thread. If the interrupt mask flag is set in the thread control block associated with the thread, embodiments typically include selecting a processor, setting a current processor priority register of the selected processor to least favored, and dispatching the thread from the ready queue to the selected processor. In some embodiments, setting the current processor priority register of the selected processor to least favored is carried out by storing a value associated with the highest interrupt priority in the current processor priority register. | 04-23-2009 |
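The dispatch rule in the abstract above — if a thread's interrupt mask flag is set, mark the chosen processor "least favored" for interrupts before dispatching — can be sketched as below. All names, the flag representation, and the `LEAST_FAVORED` value are assumptions for illustration, not taken from the patent.

```python
# Sketch of dispatching a thread whose control block carries an interrupt
# mask flag: the selected CPU's current processor priority register (CPPR)
# is set to the value of the highest interrupt priority ("least favored").
LEAST_FAVORED = 0xFF  # assumed encoding of the highest interrupt priority

class Processor:
    def __init__(self, pid):
        self.pid = pid
        self.cppr = 0      # current processor priority register
        self.thread = None

def dispatch(ready_queue, processors):
    """Pop the next ready thread (a dict here, standing in for a thread
    control block) and dispatch it to a selected processor."""
    thread = ready_queue.pop(0)
    cpu = min(processors, key=lambda p: p.cppr)   # simple selection policy
    if thread.get("interrupt_mask_flag"):
        cpu.cppr = LEAST_FAVORED  # deflect external interrupts from this CPU
    cpu.thread = thread
    return cpu
```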
20090113437 | TRANSLATING DECLARATIVE MODELS - The present invention extends to methods, systems, and computer program products for translating declarative models. Embodiments of the present invention facilitate processing declarative models to perform various operations on applications, such as, for example, application deployment, application updates, application control such as start and stop, application monitoring by instrumenting the applications to emit events, and so on. Declarative models of applications are processed and realized onto a target environment, after which they can be executed, controlled, and monitored. | 04-30-2009 |
20090113438 | OPTIMIZATION OF JOB DISTRIBUTION ON A MULTI-NODE COMPUTER SYSTEM - A method and apparatus optimizes job and data distribution on a multi-node computing system. A job scheduler distributes jobs and data to compute nodes according to priority and other resource attributes to ensure the most critical work is done on the nodes that are quickest to access and with less possibility of node communication failure. In a tree network configuration, the job scheduler distributes critical jobs and data to compute nodes that are located closest to the I/O nodes. Other resource attributes include network utilization, constant data state, and class routing. | 04-30-2009 |
20090113439 | Method and Apparatus for Processing Data - Methods and apparatuses for processing data are provided. In one embodiment, a data processing operation which is assigned a predefined maximum duration is started. The progress of the data processing operation is checked at a predefined point in time and a priority of the data processing operation is changed on the basis of the progress of the data processing operation. | 04-30-2009 |
20090119671 | REGISTERS FOR DATA TRANSFERS - A system and method for employing registers for data transfer in multiple hardware contexts and programming engines to facilitate high performance data processing. The system and method includes a processor that includes programming engines with registers for transferring data from one of the registers residing in an executing programming engine to a subsequent one of the registers residing in an adjacent programming engine. | 05-07-2009 |
20090125909 | Device, system, and method for multi-resource scheduling - A method, apparatus and system for selecting the highest-priority task for use of a resource from one of first and second expired scheduling arrays, where the first and second expired scheduling arrays may prioritize tasks for using the resource, and where tasks in the first expired scheduling array may be prioritized according to a proportionality mechanism and tasks in the second expired scheduling array may be prioritized according to an importance factor determined, for example, based on user input, and executing the task. Other embodiments are described and claimed. | 05-14-2009 |
20090133026 | METHOD AND SYSTEM TO IDENTIFY CONFLICTS IN SCHEDULING DATA CENTER CHANGES TO ASSETS - An information technology services management product is provided with a change management component that identifies conflicts based on a wide range of information. When a change on a configuration item is scheduled, the change management component identifies, for example, affected business applications, affected service level agreements, resource availability, change schedule, workflow, resource dependencies, and the like. The change management component warns the user if a conflict is found. The user does not have to consult multiple sources of information and make a manual determination concerning conflicts. The change management component may also suggest a best time to schedule a change request based on the information available. The change management component provides a constrained interface such that the user cannot schedule a change request that violates any of the above requirements. The change management component also applies these requirements when changing an already scheduled change request. | 05-21-2009 |
20090133027 | Systems and Methods for Project Management Task Prioritization - A project management task prioritization system is provided to refine the prioritization factors for tasks in a project based on changes to the order of performing the tasks. The initial proposed order for performing the tasks is provided by the system to the person responsible for the task in a graphical format that allows the person to drag and drop the tasks, adjusting the order of the tasks to their preferred order. A neural network comparator is used to compare the task prioritization factors associated with each pair of tasks that are altered in order to determine a relative priority. The neural network system updates the task prioritization factors based on the changes in the order in which the tasks are to be performed. | 05-21-2009 |
20090138880 | Method for organizing a multi-processor computer - The invention relates to computer engineering and can be used for developing new-architecture multiprocessor multithreaded computers. The aim of the invention is to produce a novel method for organizing a computer devoid of the disadvantageous feature of existing multithreaded computers, i.e., the overhead costs due to the reload of thread descriptors. The inventive method encompasses using a distributed representation which does not require loading the thread descriptors into the computer's multi-level virtual memory, thereby providing, together with current synchronizing hardware, a uniform representation of all independent activities in the form of threads, the multi-program control of which is associated with priority pull-down with an accuracy of individual instructions and is carried out entirely by hardware. | 05-28-2009 |
20090144739 | PERSISTENT SCHEDULING TECHNIQUES - Techniques for persistent scheduling are provided. A principal registers a schedule with a network-based scheduling service. The scheduling service determines when a trigger is to be sent to a client associated with the principal for purposes of having that client process a particular action. The trigger is sent when the client is detected as being online; and when the client is offline, the trigger is sent as soon as the client comes online. Furthermore, once a trigger is successfully sent, a current date and time that the trigger was sent is maintained with the schedule for the client. | 06-04-2009 |
20090144740 | Application-based enhancement to inter-user priority services for public safety market - A system and method for an application-based enhancement to the traditional per-user based inter-user priority services is provided. The method includes provisioning a user's profile not only with an assigned inter-user priority, but also with zero, one or more specified and provisioned applications that are considered critical applications requiring special preferential treatment by the access network. The method continues with accessing the inter-user priority profile associated with sessions established for the user. The system then recognizes that a session may have been assigned to at least one provisioned critical application. The system may then provide inter-user priority services operative to provide the specified preferential treatment for at least the critical applications associated with the session when the critical application(s) are activated. In this form, the critical applications are better served, including protection against congestion and availability of resources whenever they are needed. This system may grant preferential treatment on a session and/or application basis so that there is no impact on other general applications when no critical applications are activated. This is especially useful for public safety implementations, where protecting mission-critical communication is a fundamental requirement. | 06-04-2009 |
20090150891 | RESPONSIVE TASK SCHEDULING IN COOPERATIVE MULTI-TASKING ENVIRONMENTS - Task scheduling in cooperative multi-tasking environments is accomplished by a task scheduler that evaluates the relative priority of an executing task and tasks in a queue waiting to be executed. The task scheduler may issue a suspend request to lower priority tasks so that high priority tasks can be executed. Tasks are written or compiled with checks located at opportune locations for suspending and resuming the given task. The tasks under a suspend request continue operation until they reach a check, at which point the task will suspend operation depending on specific criteria. By allowing both the task and the task scheduler to assist in determining the precise timing of the suspension, the multi-tasking environment becomes highly efficient and responsive. | 06-11-2009 |
20090150892 | Interrupt controller for invoking service routines with associated priorities - An interrupt controller efficiently manages execution of tasks by a multiprocessor computing system. The interrupt controller has inputs for receiving service requests for invoking service routines. The service routines have higher priorities than the tasks executed on the processors. Associated with each processor is a register for storing the priority of the task executing on the processor. A comparator coupled to the processors determines which processor is executing the task with the lowest priority among the tasks executing on the processors. For each service request received, a distributor generates an interrupt request for invoking the service routine of the service request on the processor with the lowest priority. That processor's register is set to the higher priority of the service routine in response to the interrupt request. For each processor, the interrupt controller has an output for transmitting the interrupt request to the processor. | 06-11-2009 |
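The comparator-and-distributor behavior in the interrupt controller abstract above reduces to a small routing rule: send the service request to the processor running the lowest-priority task, and raise that processor's priority register to the service routine's priority. A minimal sketch, with assumed function and variable names:

```python
# Illustrative sketch of lowest-priority interrupt routing.
def route_interrupt(task_priorities, service_priority):
    """task_priorities[i] is the priority of the task currently executing
    on processor i (higher value = higher priority, by assumption).
    Returns (target_processor, new_priority_register_value)."""
    # Comparator: find the processor whose task has the lowest priority.
    target = min(range(len(task_priorities)), key=lambda i: task_priorities[i])
    # The chosen processor's register is raised to the service routine's
    # (higher) priority while the routine runs.
    return target, max(task_priorities[target], service_priority)
```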
20090158288 | METHOD AND APPARATUS FOR MANAGING SYSTEM RESOURCES - A computer implemented method, apparatus, and computer usable program product for system management. The process schedules a set of application tasks to form a schedule of tasks in response to receiving the set of application tasks from a registration module. The process then performs a feasibility analysis on the schedule of tasks to identify periods of decreased system activity. Thereafter, the process schedules a set of system management tasks during the periods of decreased system activity to form a prioritized schedule of tasks. | 06-18-2009 |
20090165007 | TASK-LEVEL THREAD SCHEDULING AND RESOURCE ALLOCATION - Task schedulers endeavor to share computing resources, such as the CPU, among many threads. However, the task scheduler may be unable to identify the resources that will be utilized by a thread, and may allocate resources inefficiently due to incorrect predictions of resource utility. Task scheduling may be improved by identifying the rate determining factors for various thread tasks comprising a thread, e.g., a first task that is rate-limited by a communications bus, a second task that is rate-limited by the CPU, and a third task that is rate-limited by a communications network. If the instructions are so identified, the operating system may be able to schedule tasks and to allocate resources based on the resources to be utilized by the threads, which may improve efficiency and computing performance. | 06-25-2009 |
20090165008 | APPARATUS AND METHOD FOR SCHEDULING COMMANDS FROM HOST SYSTEMS - A scheduling apparatus and method thereof are disclosed. The scheduling apparatus includes a command-collecting module, a sorting module and a command-executing module. The command-collecting module collects the commands issued from the host systems. The sorting module sorts the collected commands from the command-collecting module based on a plurality of data addresses. The data addresses within the storage unit are associated with the commands. The command-executing module executes the sorted commands from the sorting module. | 06-25-2009 |
20090165009 | OPTIMAL SCHEDULING FOR CAD ARCHITECTURE - A system and method for optimal scheduling of image processing jobs is provided. Requests for processing originate either from a DICOM service that receives images sent to the system, and forwards those for batch processing, or from an interactive workstation application, which requests interactive CAD processing. Each request is placed onto a queue which is sorted first by priority, and second by the time that the request is added to the queue. Requests for interactive processing from a workstation application are added to the queue with the highest priority, whereas requests for batch processing are added at a low priority. The algorithm service takes the top-most item from the queue and passes the request to the algorithms which it hosts, and when that processing is completed, it sends a message to one or more output queues. | 06-25-2009 |
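The queue discipline in the CAD scheduling abstract above — sort first by priority, second by arrival time, with interactive requests entering at the highest priority and batch requests at a low priority — maps naturally onto a heap keyed by a (priority, arrival) pair. A brief sketch; the priority encodings and names are assumptions:

```python
# Sketch of a priority-then-arrival-time queue: interactive requests
# (smaller priority number, by assumption) always come out before batch
# requests, and equal-priority requests come out in submission order.
import heapq
import itertools

INTERACTIVE, BATCH = 0, 1     # smaller number = served first (assumption)
_arrival = itertools.count()  # monotonically increasing tie-breaker

queue = []

def submit(job, priority):
    heapq.heappush(queue, (priority, next(_arrival), job))

def take_next():
    _priority, _seq, job = heapq.heappop(queue)
    return job
```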
20090172681 | SYSTEMS, METHODS AND APPARATUSES FOR CLOCK ENABLE (CKE) COORDINATION - Embodiments of the invention are generally directed to systems, methods, and apparatuses for clock enable (CKE) coordination. In some embodiments, a memory controller includes logic to predict whether a scheduled request will be issued to a rank. The memory controller may also include logic to predict whether a scheduled request will not be issued to the rank. In some embodiments, the clock enable (CKE) is asserted or de-asserted to a rank based, at least in part, on the predictions. Other embodiments are described and claimed. | 07-02-2009 |
20090172682 | SERIALIZATION IN COMPUTER MANAGEMENT - Processes are programmatically categorized into a plurality of categories, which are prioritized. Serialization is used to control execution of the processes of the various categories. The serialization ensures that processes of higher priority categories are given priority in execution. This includes temporarily preventing processes of lower priority categories from being executed. | 07-02-2009 |
20090172683 | MULTICORE INTERFACE WITH DYNAMIC TASK MANAGEMENT CAPABILITY AND TASK LOADING AND OFFLOADING METHOD THEREOF - A multicore interface with dynamic task management capability and a task loading and offloading method thereof are provided. The method disposes a communication interface between a micro processor unit (MPU) and a digital signal processor (DSP) and dynamically manages tasks assigned by the MPU to the DSP. First, an idle processing unit of the DSP is searched, and then one of a plurality of threads of the task is assigned to the processing unit. Finally, the processing unit is activated to execute the thread. Accordingly, the communication efficiency of the multicore processor can be effectively increased while the hardware cost can be saved. | 07-02-2009 |
20090172684 | SMALL LOW POWER EMBEDDED SYSTEM AND PREEMPTION AVOIDANCE METHOD THEREOF - Provided are a small low power embedded system and a preemption avoidance method thereof. A method for avoiding preemption in a small low power embedded system includes fetching and running a periodic atomic task from a periodic run queue; reducing any one of the periodic atomic tasks or performing the change of a task after changing a field of the run periodic atomic task into a run standby state, according to a result value of the run of the periodic atomic task; fetching a sporadic atomic task from a sporadic run queue; acquiring a system clock and running the fetched sporadic atomic task according to its worst-case run time; and reducing any one of the sporadic atomic tasks or performing the change of an event after changing a field of the run sporadic atomic task into a run standby state, according to a result value of the run of the sporadic atomic task. | 07-02-2009 |
20090172685 | SYSTEM AND METHOD FOR IMPROVED SCHEDULING OF CONTENT TRANSCODING - A method and system for improved scheduling of content transcoding is disclosed. Embodiments are capable of generating and assigning a first transcoding priority value to a piece of content, where the first transcoding priority value is based upon information about the content and at least one semi-static constraint. A second transcoding priority value may also be generated and assigned based upon the first transcoding priority value and at least one dynamic constraint. Transcoding of the content may be scheduled using the first and/or second transcoding priority values, thereby providing scheduling of content transcoding which takes into account longer-term knowledge and/or shorter-term knowledge for better assessment of the demand for transcoding of a given piece of content. Accordingly, embodiments enable transcoding of content with reduced resource load, reduced transcoding cost, and improved quality of service. | 07-02-2009 |
20090172686 | METHOD FOR MANAGING THREAD GROUP OF PROCESS - A method for managing a thread group of a process is provided. First, a group scheduling module is used to receive an execution permission request from a first thread. When detecting that a second thread in the thread group is under execution, the group scheduling module stops the first thread and does not assign the execution permission to the first thread until the second thread is completed; only then does the first thread retrieve a required shared resource and execute its computations. The first thread then releases the shared resource when completing the computations. Next, the group scheduling module retrieves a third thread with the highest priority in a waiting queue and repeats the above process until all the threads are completed. Through this method, when one thread executes a call-back function, the other threads are prevented from taking this chance to use the resource required by the thread. | 07-02-2009 |
20090178045 | Scheduling Memory Usage Of A Workload - Described herein is a method for scheduling memory usage of a workload, the method comprising: receiving the workload, wherein the workload includes a plurality of jobs; determining a memory requirement to execute each of the plurality of jobs; arranging the plurality of jobs in an order of the memory requirements of the plurality of jobs such that the job with the largest memory requirement is at one end of the order and the job with the smallest memory requirement is at the other end of the order; assigning in order a unique priority to each of the plurality of jobs in accordance with the arranged order such that the job with the largest memory requirement is assigned the highest priority for execution and the job with the smallest memory requirement is assigned the lowest priority for execution; and executing the workload by concurrently executing the jobs in the workload in accordance with the arranged order of the plurality of jobs and the unique priority assigned to each of the plurality of jobs. | 07-09-2009 |
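The ordering step in the memory-scheduling abstract above is directly codeable: sort the jobs by memory requirement, then assign a unique priority so the largest-memory job gets the highest priority. A minimal sketch under assumed names (jobs as a name-to-memory mapping, larger priority number = higher priority):

```python
# Sketch of assigning unique priorities by descending memory requirement.
def assign_priorities(jobs):
    """jobs: dict mapping job name -> memory requirement.
    Returns a dict mapping job name -> unique priority, where the job with
    the largest memory requirement receives the highest priority."""
    ordered = sorted(jobs, key=jobs.get, reverse=True)  # largest memory first
    n = len(ordered)
    return {name: n - rank for rank, name in enumerate(ordered)}
```

The workload would then be executed by dispatching jobs concurrently in this order, as the abstract describes.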
20090187912 | Method and apparatus for migrating task in multi-processor system - A method and apparatus for migrating a task in a multi-processor system. The method includes examining whether a second process has been allocated to a second processor, the second process having the same instruction to execute as a first process but different data to process in response to the instruction, the instruction being to execute the task; selecting a method of migrating the first process or a method of migrating a thread included in the first process based on the examining; and migrating the task from a first processor to the second processor using the selected method. Therefore, cost and power required for task migration can be minimized. Consequently, power consumption can be maintained in a low-power environment, such as an embedded system, which, in turn, optimizes the performance of the multi-processor system and prevents physical damage to the circuit of the multi-processor system. | 07-23-2009 |
20090187913 | ORDERING MULTIPLE RESOURCES - A method of ordering multiple resources in a transaction comprising receiving a transaction for a plurality of resources; determining, for each resource, the work embodied by the transaction; ordering the resources according to the determination of the work; committing the transaction; and invoking the resources in the selected order. The step of ordering the resources may comprise specifying the resource to be invoked last. Alternatively, or additionally, the step of ordering the resources may also comprise specifying that each resource carrying out read-only work is to be invoked first. | 07-23-2009 |
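The ordering policies named in the resource-ordering abstract above — read-only resources invoked first, and one designated resource invoked last — can be captured in a few lines. This is a sketch under assumed data shapes (resources as name/read-only pairs), not the patented implementation:

```python
# Sketch of ordering transaction resources: read-only resources first,
# then read-write resources, with an optionally designated resource last.
def order_resources(resources, last=None):
    """resources: list of (name, read_only) pairs.
    Returns the invocation order as a new list."""
    read_only = [r for r in resources if r[1] and r[0] != last]
    read_write = [r for r in resources if not r[1] and r[0] != last]
    tail = [r for r in resources if r[0] == last]   # designated last resource
    return read_only + read_write + tail
```

The transaction would then be committed and the resources invoked in the returned order.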
20090187914 | SYSTEM AND METHOD FOR LOAD SHEDDING IN DATA MINING AND KNOWLEDGE DISCOVERY FROM STREAM DATA - Load shedding schemes for mining data streams. A scoring function is used to rank the importance of stream elements, and those elements with high importance are investigated. In the context of not knowing the exact feature values of a data stream, the use of a Markov model is proposed herein for predicting the feature distribution of a data stream. Based on the predicted feature distribution, one can make classification decisions to maximize the expected benefits. In addition, there is proposed herein the employment of a quality of decision (QoD) metric to measure the level of uncertainty in decisions and to guide load shedding. A load shedding scheme such as presented herein assigns available resources to multiple data streams to maximize the quality of classification decisions. Furthermore, such a load shedding scheme is able to learn and adapt to changing data characteristics in the data streams. | 07-23-2009 |
20090193424 | METHOD OF PROCESSING INSTRUCTIONS IN PIPELINE-BASED PROCESSOR AND CORRESPONDING PROCESSOR - The present invention discloses a method of processing instructions in a pipeline-based central processing unit, wherein the pipeline is partitioned into base pipeline stages and enhanced pipeline stages according to functions, the base pipeline stages being activated all the while, and the enhanced pipeline stages being activated or shutdown according to requirements for performance of a workload. The present invention further discloses a method of processing instructions in a pipeline-based central processing unit, wherein the pipeline is partitioned into base pipeline stages and enhanced pipeline stages according to functions, each pipeline stage being partitioned into a base module and at least one enhanced module, the base module being activated all the while, and the enhanced module being activated or shutdown according to requirements for performance of a workload. | 07-30-2009 |
20090199191 | Notification to Task of Completion of GSM Operations by Initiator Node - In a global shared memory (GSM) environment, a method provides local notification of completion of a global shared memory (GSM) operation processed by a first task executing at a local node of the distributed system. The system includes multiple nodes on which different tasks of a single job execute and perform GSM operations that are received from a second task via a host fabric interface (HFI) and an associated HFI window assigned to the first task. The local task initiates execution of a GSM operation on the local node. The task then monitors for and detects a completion of the execution of the GSM operation on the local node. When the task detects completion of the execution of the GSM operation, the task issues an internal notification to inform the locally-executing tasks of the completion of the GSM operation. | 08-06-2009 |
20090210879 | METHOD FOR DISTRIBUTING COMPUTING TIME IN A COMPUTER SYSTEM - The invention relates to a method for distributing computing time in a computer system on which run a number of partial processes or threads to which an assignment process or scheduler assigns computing time as required, priorities being associated with individual threads and the assignment of computing time being carried out according to the respective priorities. According to said method, the individual threads are respectively associated with a number of time priority levels. A first time priority level contains threads to which computing time is assigned as required at any time. A first scheduler respectively allocates a time slice to the individual time priority levels, and respectively activates one of the time priority levels for the duration of the time slice thereof. A second scheduler monitors the threads of the first time priority level and the threads of the respectively activated time priority level, and assigns computing time to said threads according to the priorities thereof. | 08-20-2009 |
20090217278 | PROJECT MANAGEMENT SYSTEM - A method and apparatus for managing a project are described. According to one embodiment, the method includes the steps of ranking the plurality of tasks to produce a first list; assigning a task cost to each of the plurality of tasks; setting a planned velocity, the planned velocity determining the rate at which task costs are planned to be completed per time segment; and dynamically assigning each of the plurality of tasks to one of the sequence of time segments in the order indicated by the first list based on the planned velocity. In other embodiments, the apparatus includes a machine-readable medium that provides instructions for a processor, which when executed by the processor cause the processor to perform a method of the present invention. | 08-27-2009 |
20090222830 | Methods for Multi-Tasking on Media Players - This invention provides a method for multi-tasking on a media player in a time-slice-circular manner. The method comprises the steps of: dividing each of the different functions of the media player into a plurality of tasks by a controller unit; setting a priority for each of the tasks by the controller unit; checking the priority of said each of the tasks, and changing a state of a task from “READY” to “EXECUTING” according to the priority of the task by the controller unit; and executing the tasks alternately by using time slices associated therewith by the controller unit. Since all the tasks are executed within a short time, from the user's point of view, all the tasks are executed simultaneously. Thus, multi-tasking on the media player is achieved. | 09-03-2009 |
20090222831 | Scheduling network distributed jobs - A method and apparatus for scheduling processing jobs is described. In one embodiment, a scheduler receives a request to process one or more computation jobs. The scheduler generates a size metric corresponding to a size of an executable image of each computation job and a corresponding data set associated with each computation job. The scheduler adjusts a priority of each computation job based on a system configuration setting and schedules the processing of each computation job according to the priority of each computation job. In another embodiment, the scheduler distributes the plurality of computation jobs on one or more processors of a computing system, where the system configuration setting prioritizes a computation job with a smaller size metric over a computation job with a larger size metric. In another embodiment, the scheduler distributes the computation jobs across a network of computing systems with one or more computation jobs distributed over one or more computing systems, where the system configuration setting prioritizes a computation job with a smaller size metric over a computation job with a larger size metric. | 09-03-2009 |
20090235264 | DISTRIBUTED SYSTEM - The allocation of hardware resources to distribution applications is enabled without using the effective task priority available only in field devices. The distribution system makes a plurality of field devices connected with each other through a network (N) operate a plurality of distribution applications (distribution APs) in parallel. The distribution system is provided with an importance adjustment unit. | 09-17-2009 |
20090241119 | Interrupt and Exception Handling for Multi-Streaming Digital Processors - A multi-streaming processor has a plurality of streams for streaming one or more instruction threads, a set of functional resources for processing instructions from streams, and interrupt handler logic. The logic detects and maps interrupts and exceptions to one or more specific streams. In some embodiments, one interrupt or exception may be mapped to two or more streams, and in others two or more interrupts or exceptions may be mapped to one stream. Mapping may be static and determined at processor design, programmable, with data stored and amendable, or conditional and dynamic, the interrupt logic executing an algorithm sensitive to variables to determine the mapping. Interrupts may be external interrupts generated by devices external to the processor, software (internal) interrupts generated by active streams, or conditional interrupts based on variables. After interrupts are acknowledged, streams to which interrupts or exceptions are mapped are vectored to appropriate service routines. In a synchronous method, no vectoring occurs until all streams to which an interrupt is mapped acknowledge the interrupt. | 09-24-2009 |
20090241120 | SYSTEM AND METHOD FOR CONTROLLING PRIORITY IN SCA MULTI-COMPONENT AND MULTI-PORT ENVIRONMENT - A system for controlling priority in a SCA-based application having a plurality of components wherein each of the components has a plurality of ports, includes: a priority component scheduler, interworking with the plurality of components wherein component priority order of the components is arranged therein; and a priority port scheduler that is provided in each of the components including the plurality of the ports which are associated with connections between the components, wherein port priority order of the ports included in each of the components is arranged therein. The priority component scheduler may be generated by using domain profiles in which component priority values of the components are set and the priority port scheduler may be generated by using domain profiles in which port priority values of the ports included in each of the components are set. Further, the domain profiles may be XML files. | 09-24-2009 |
20090249349 | Power-Efficient Thread Priority Enablement - A mechanism for controlling instruction fetch and dispatch thread priority settings in a thread switch control register for reducing the occurrence of balance flushes and dispatch flushes for increased power performance of a simultaneous multi-threading data processing system. To achieve a target power efficiency mode of a processor, the illustrative embodiments receive an instruction or command from a higher-level system control to set a current power consumption of the processor. The illustrative embodiments determine a target power efficiency mode for the processor. Once the target power mode is determined, the illustrative embodiments update thread priority settings in a thread switch control register for an executing thread to control balance flush speculation and dispatch flush speculation to achieve the target power efficiency mode. | 10-01-2009 |
20090254913 | Information Processing System - An information processing system is provided to alleviate excessive load on a master node, thereby allowing the master node to efficiently perform the process of assigning jobs to nodes. | 10-08-2009 |
20090254914 | OPTIMIZED USAGE OF COLLECTOR RESOURCES FOR PERFORMANCE DATA COLLECTION THROUGH EVEN TASK ASSIGNMENT - A method of balancing computer resources on a network of computers is provided, employing a two-tier network architecture of at least one High Level Collector as a scheduler/load-balancing server and a plurality of Low Level Collectors which gather task data and execute instructions. Tasks are assigned priority and weight scores and sorted prior to assignment to Low Level Collectors. Also provided is a computer readable medium including instructions, wherein execution of the instructions by at least one computing device balances computer resources on a network of computers. | 10-08-2009 |
20090254915 | SYSTEM AND METHOD FOR PROVIDING FAULT RESILIENT PROCESSING IN AN IMPLANTABLE MEDICAL DEVICE - A system and method for providing fault resilient processing in an implantable medical device is provided. A processor and memory store are provided in an implantable medical device. Separate times on the processor are scheduled to a plurality of processes. Separate memory spaces in the memory store are managed by exclusively associating one such separate memory space with each of the processes. Data is selectively validated prior to exchange from one of the processes to another of the processes during execution in the separate processor times. | 10-08-2009 |
20090260013 | Computer Processors With Plural, Pipelined Hardware Threads Of Execution - Computer processors and methods of operation of computer processors that include a plurality of pipelined hardware threads of execution, each thread including a plurality of computer program instructions; an instruction decoder that determines dependencies and latencies among instructions of a thread; and an instruction dispatcher that arbitrates, in the presence of resource contention and in accordance with the dependencies and latencies, priorities for dispatch of instructions from the plurality of threads of execution. | 10-15-2009 |
20090265712 | Auto-Configuring Workload Management System - A multi-partition computer system provides a configuration inspector for inspecting partitions to determine their identities and configuration information. The system also includes a policy controller for automatically setting workload-management policies at least in part as a function of the configuration information in response to a command. | 10-22-2009 |
20090271792 | METHOD AND APPARATUS FOR ALERT PRIORITIZATION ON HIGH VALUE END POINTS - A method and system for prioritizing alerts on end points include an aggregator agent that monitors a plurality of end point agents and receives a signal indicating an out of band operating tolerance from an end point. The aggregator agent locally determines the priority of the received signal based on a rules engine local to the aggregator agent. The aggregator agent transmits the priority of said signal and information associated with said signal to a remote host computer for appropriate handling. | 10-29-2009 |
20090271793 | Mechanism for priority inheritance for read/write locks - In one embodiment, a mechanism for priority inheritance for read/write locks (RW locks) is disclosed. In one embodiment, a method includes setting a maximum number of read/write locks (RW locks) allowed to be held for read by one or more tasks, maintaining an array in each of the one or more tasks to track the RW locks held for read, linking a RW lock with the array of each of the tasks that own the RW lock, and boosting a priority of each of the tasks that own the RW lock according to a priority inheritance algorithm implemented by the RW lock. | 10-29-2009 |
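The priority-inheritance idea in the entry above can be sketched briefly. The cap value, class names, and the rule for restoring the base priority are illustrative assumptions, not details from the patent: each task tracks read-held RW locks in a bounded array, each lock links back to its readers, and a waiting writer's higher priority is inherited by every current read owner.

```python
# Illustrative sketch, not the patented mechanism: bounded per-task array of
# read-held RW locks, plus writer-priority inheritance for current readers.
MAX_READ_LOCKS = 4  # assumed cap on the per-task read-lock array

class Task:
    def __init__(self, name, priority):
        self.name = name
        self.base_priority = priority
        self.priority = priority
        self.read_locks = []  # array tracking RW locks held for read

class RWLock:
    def __init__(self):
        self.readers = []  # tasks currently owning this lock for read

    def read_lock(self, task):
        if len(task.read_locks) >= MAX_READ_LOCKS:
            raise RuntimeError("maximum number of read-held RW locks reached")
        self.readers.append(task)
        task.read_locks.append(self)  # link the lock with the task's array

    def boost_for_writer(self, writer_priority):
        # Priority inheritance: every read owner inherits a waiting writer's
        # higher priority so it can run soon and release the lock.
        for t in self.readers:
            t.priority = max(t.priority, writer_priority)

    def read_unlock(self, task):
        self.readers.remove(task)
        task.read_locks.remove(self)
        if not task.read_locks:  # simplified: restore base when no locks held
            task.priority = task.base_priority
```

A real implementation would restore priorities per lock rather than only when the array empties; the simplification here keeps the sketch short.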
20090271794 | Global avoidance of hang states in multi-node computing system - Systems, methods, and other embodiments associated with avoiding resource blockages and hang states are described. One example computer-implemented method for a clustered computing system includes determining that a first process is waiting for a resource and is in a blocked state. The resource that the first process is waiting for is identified. A blocking process that is holding the resource is then identified. A priority of the blocking process is compared with a priority of the first process. If the priority of the blocking process is lower than the priority of the first process, the priority of the blocking process is increased. In this manner, the blocking process can be scheduled for execution sooner and thus release the resource. | 10-29-2009 |
20090271795 | Method and apparatus for scheduling the processing of commands for execution by cryptographic algorithm cores in a programmable network processor - A method and apparatus for scheduling the processing of commands by a plurality of cryptographic algorithm cores in a network processor. | 10-29-2009 |
20090271796 | INFORMATION PROCESSING SYSTEM AND TASK EXECUTION CONTROL METHOD - An information processing system includes a master processor and a slave processor. The master processor operates in a multitasking environment capable of executing request source tasks for making processing requests to the slave processor in parallel by task scheduling based on execution priorities of the tasks. The slave processor operates in a multitasking environment capable of executing a communication processing task and child tasks created by the communication processing task for executing processing requested by the processing requests in parallel by task scheduling. The processing requests contain priority information associated with the execution priorities of the request source tasks in the master processor. The slave processor activates the communication processing task in common for the processing requests from the different request source tasks. The communication processing task creates the child tasks with execution priorities allocated corresponding to the execution priorities of the request source tasks based on the priority information. | 10-29-2009 |
20090276781 | SYSTEM AND METHOD FOR MULTI-LEVEL PREEMPTION SCHEDULING IN HIGH PERFORMANCE PROCESSING - A computing system configured to handle preemption events in an environment having jobs with high and low priorities. The system includes a job queue configured to receive job requests from users, the job queue storing the jobs in an order based on the priority of the jobs, and indicating whether a job is a high priority job or a low priority job. The system also includes a plurality of node clusters, each node cluster including a plurality of nodes and a scheduler coupled to the job queue and to the plurality of node clusters and configured to assign jobs from the job queue to the plurality of node clusters. The scheduler is configured to preempt a first low priority job running in a first node cluster with a high priority job that appears in the job queue after the low priority job has started and, in the event that a second low priority job from the job queue may run on a portion of the plurality of nodes in the first node cluster during a remaining processing time for the high priority job, backfill the second low priority job into the portion of the plurality of nodes and, in the event a second high priority job is received in the job queue and may run on the portion of the plurality of nodes, return the second low priority job to the job queue. | 11-05-2009 |
20090276782 | RESOURCE MANAGEMENT METHODS AND SYSTEMS - Resource management methods and systems are provided. First, it is determined whether a resource is currently being used. When the resource is currently being used by a first program, a release notification is transmitted to the first program to release the resource. | 11-05-2009 |
20090282414 | Prioritized Resource Access Management - Middleware may dynamically restrict or otherwise allocate computer resources in response to changing demand and based on prioritized user access levels. Users associated with a relatively low priority may have their resource access delayed in response to high demand, e.g., processor usage. Users having a higher priority may experience uninterrupted access during the same period and until demand subsides. | 11-12-2009 |
20090288089 | METHOD FOR PRIORITIZED EVENT PROCESSING IN AN EVENT DISPATCHING SYSTEM - A method for dynamically prioritizing event processing in an event dispatching system includes steps of: organizing input/output requests in a plurality of activity sets ordered from most active to least active, wherein a highest priority level is associated with the most active activity set and the lowest priority level is associated with the least active activity set; organizing event descriptors corresponding to the input/output requests into event descriptor sets; creating an event descriptor cache; duplicating the event descriptor of the input/output request found to be most active into the event descriptor cache; monitoring the event descriptor cache more frequently than the event descriptor set; and invoking an event dispatching routine from the event descriptor cache. | 11-19-2009 |
20090288090 | PRIORITY CONTROL PROGRAM, APPARATUS AND METHOD - A disclosed priority control program recorded in a computer-readable medium causes a computer to execute, in job allocation for computational resources, a first step of lowering a job allocation priority of a user based on an estimated utilization amount of a job associated with the user, the job allocation priority indicating a degree of priority of the user in obtaining an allocation of the computational resource, and the estimated utilization amount being an amount of the computational resources estimated to be used for the job and being submitted to and recorded in a memory device on a job-to-job basis; and a second step of increasing the job allocation priority over time at a restoration rate which corresponds to a user-specific amount of the computational resources available for the user per unit time, the user-specific amount being recorded in the memory device on a user-to-user basis. | 11-19-2009 |
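The two steps in the entry above lend themselves to a small numeric sketch. The linear recovery formula and the cap at the base value are assumptions for illustration, not from the patent: a user's job allocation priority drops by the job's estimated resource amount on submission and then recovers over time at a per-user restoration rate.

```python
# Minimal sketch (assumed formulas): priority falls by the estimated
# utilization amount on job submission and recovers linearly over time
# at a user-specific restoration rate, capped at the base priority.
class UserPriority:
    def __init__(self, base, restore_rate):
        self.base = base
        self.priority = base
        self.restore_rate = restore_rate  # resource units restored per unit time

    def submit_job(self, estimated_amount):
        # First step: lower the allocation priority by the estimated usage.
        self.priority -= estimated_amount

    def tick(self, dt=1):
        # Second step: raise the priority over time, never above the base.
        self.priority = min(self.base, self.priority + self.restore_rate * dt)
```

A user who submits large jobs thus waits longer to regain standing than one who submits small jobs, which is the fairness effect the abstract describes.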
20090293061 | Structural Power Reduction in Multithreaded Processor - A circuit arrangement and method utilize a plurality of execution units having different power and performance characteristics and capabilities within a multithreaded processor core, and selectively route instructions having different performance requirements to different execution units based upon those performance requirements. As such, instructions that have high performance requirements, such as instructions associated with primary tasks or time sensitive tasks, can be routed to a higher performance execution unit to maximize performance when executing those instructions, while instructions that have low performance requirements, such as instructions associated with background tasks or non-time sensitive tasks, can be routed to a reduced power execution unit to reduce the power consumption (and associated heat generation) associated with executing those instructions. | 11-26-2009 |
20090300631 | DATA PROCESSING SYSTEM AND METHOD FOR CACHE REPLACEMENT - A data processing system is provided with at least one processing unit. | 12-03-2009 |
20090300632 | WORK REQUEST CONTROL SYSTEM - A work request control system for receiving work requests from input devices provides a priority queuing mechanism for performance of tasks by a finite pool of heterogeneous resources. An input receives work requests from input devices and an attribute mechanism receives the work requests and determines the values of each of multiple attributes for each work request. A queue mechanism calculates using the multiple attributes and by considering each request as a multi-dimensional eigenvector the relative distance of each eigenvector in relation to a reference eigenvector and asserts the work requests in a priority order determined by the relative distance of each eigenvector. | 12-03-2009 |
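The vector-distance queuing in the entry above can be sketched concretely. The abstract calls each request a "multi-dimensional eigenvector"; the sketch below simply treats it as an attribute vector, and the Euclidean metric is an illustrative assumption rather than a detail from the patent: requests are ordered by their distance to a reference vector.

```python
# Rough sketch, assuming Euclidean distance to a reference attribute vector
# determines the priority order. Names and metric are illustrative.
import math

def distance(v, ref):
    """Euclidean distance between two equal-length attribute vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v, ref)))

def prioritize(requests, ref):
    """Sort work requests so the one closest to the reference vector comes first."""
    return sorted(requests, key=lambda r: distance(r["attrs"], ref))
```

The reference vector would encode an ideal or baseline request; because Python's sort is stable, requests at equal distance keep their arrival order.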
20090300633 | Method and System for Scheduling and Controlling Backups in a Computer System - A method, system, and article to manage a backup procedure of one or more backup tasks in a computing system. A backup window within which the backup tasks are to be executed is defined, and the backup tasks within the backup window are scheduled. The process of the backup procedure is controlled during execution. The process of controlling the backup procedure includes calculating the prospective duration of all actually running and all future backup tasks, and cancelling low priority backup tasks in case a higher priority backup task is projected to continue beyond an end time. | 12-03-2009 |
20090307700 | Multithreaded processor and a mechanism and a method for executing one hard real-time task in a multithreaded processor - The invention relates to a mechanism for executing one Hard Real-Time (HRT) task in a multithreaded processor comprising means for determining the slack time of the HRT task; means for starting the execution of the HRT task; means for verifying if the HRT task requires using a resource that is being used by at least one Non Hard Real-Time (NHRT) task; means for determining the delay caused by the NHRT task; means for subtracting the determined delay from the slack time of the HRT task; means for verifying if the new value of the slack time is lower than a critical threshold; and means for stopping the NHRT tasks. | 12-10-2009 |
20090313631 | AUTONOMIC WORKLOAD PLANNING - A method of automatically optimizing workload scheduling. Target values for workload characteristics and constraint specifications are received. Generation of a first execution plan is initiated. Initial constraint values conforming to the constraint specifications are selected. Each constraint value constrains tasks included in the workload. The first execution plan is executed, thereby determining measurements of workload characteristics. Contributions indicating differences between workload characteristic measurements and target values are determined and stored. Generation of a next execution plan is initiated. Modified constraint values conforming to the constraint specifications are selected. Changes in the workload characteristics based on the modified constraint values are evaluated. An optimal or acceptable sub-optimal solution in a space of solutions defined by the constraint specifications is determined, resulting in new values for the constraints. After replacing the initial values with the new values, the next execution plan is generated and executed. | 12-17-2009 |
20090320032 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR PREVENTING STARVATIONS OF TASKS IN A MULTIPLE PROCESSING ENTITY SYSTEM - A system, computer program and a method for preventing starvations of tasks in a multiple-processing entity system, the method includes: examining, during each scheduling iteration, an eligibility of each task data structure out of a group of data structures to be moved from a sorted tasks queue to a running tasks queue; updating a value, during each scheduling iteration, of a queue starvation watermark value of each task data structure that is not eligible to move to a running tasks queue, until a queue starvation watermark value of a certain task data structure out of the group reaches a queue starvation watermark threshold; and generating a task starvation indication if during an additional number of scheduling iterations, the certain task data structure is still prevented from being moved to a running tasks queue, wherein the additional number is responsive to a task starvation watermark. | 12-24-2009 |
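The watermark mechanism in the entry above reduces to a counter scheme, sketched below. The threshold values, the outcome labels, and the eligibility callback are assumptions for illustration only: an ineligible task's watermark is bumped each scheduling iteration, and once the threshold is crossed the task is allowed an additional number of iterations before a starvation indication is raised.

```python
# Illustrative counter-based sketch of the starvation-watermark scheme.
# Thresholds, labels, and the eligibility callback are assumed.
def run_iterations(eligible_fn, iterations,
                   watermark_threshold=3, extra_iterations=2):
    """Simulate scheduling iterations for one task data structure.

    eligible_fn(i) -> True if the task may move to the running queue
    on iteration i. Returns "moved", "starvation", or "pending".
    """
    watermark = 0
    over_threshold_for = 0
    for i in range(iterations):
        if eligible_fn(i):
            return "moved"          # task moved to the running tasks queue
        watermark += 1              # bump the queue starvation watermark
        if watermark >= watermark_threshold:
            over_threshold_for += 1
            if over_threshold_for > extra_iterations:
                return "starvation"  # still blocked after the extra iterations
    return "pending"
```

The scheduler would react to a "starvation" indication by forcing the task's promotion or boosting its sort key; that reaction is outside the sketch.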
20090320033 | DATA STORAGE RESOURCE ALLOCATION BY EMPLOYING DYNAMIC METHODS AND BLACKLISTING RESOURCE REQUEST POOLS - A resource allocation system begins with an ordered plan for matching requests to resources that is sorted by priority. The resource allocation system optimizes the plan by determining those requests in the plan that will fail if performed. The resource allocation system removes or defers the determined requests. In addition, when a request that is performed fails, the resource allocation system may remove requests that require similar resources from the plan. Moreover, when resources are released by a request, the resource allocation system may place the resources in a temporary holding area until the resource allocation returns to the top of the ordered plan so that lower priority requests that are lower in the plan do not take resources that are needed by waiting higher priority requests higher in the plan. | 12-24-2009 |
20090320034 | DATA PROCESSING APPARATUS - A data processing apparatus has a memory element array. | 12-24-2009 |
20100005470 | METHOD AND SYSTEM FOR PERFORMING DMA IN A MULTI-CORE SYSTEM-ON-CHIP USING DEADLINE-BASED SCHEDULING - A direct memory access (DMA) engine schedules data transfer requests of a system-on-chip data processing system according to both an assigned transfer priority and the deadline for completing a transfer. Transfer priority is based on a hardness representing the penalty for missing a deadline. Priorities are also assigned to zero-deadline transfer requests in which there is a penalty no matter how early the transfer completes. If desired, transfer requests may be scheduled in timeslices according to priority in order to bound the latency of lower priority requests, with the highest priority hard real-time transfers wherein the penalty for missing a deadline is severe are given the largest timeslice. Service requests for preparing a next data transfer are posted while a current transaction is in progress for maximum efficiency. Current transfers may be preempted whenever a higher urgency request is received. | 01-07-2010 |
20100011363 | CONTROLLING A COMPUTER SYSTEM HAVING A PROCESSOR INCLUDING A PLURALITY OF CORES - Controlling a computer system having at least one processor including a plurality of cores includes establishing a core max value that sets a maximum number of the plurality of cores operating at a predetermined time period based on an operating condition, determining a core run value that is associated with a number of the plurality of cores of the at least one processor operating at the predetermined time period, and stopping at least one of the plurality of cores in the event the core run value exceeds the core max value at the predetermined time period. | 01-14-2010 |
20100017806 | FINE GRAIN OS SCHEDULING - The invention relates to a method of enabling multiple operating systems to run concurrently on the same computer, the method comprising: scheduling a plurality of tasks for execution by at least first and second operating systems, wherein each task has one of a plurality of priorities; setting the priority of each operating system in accordance with the priority of the next task scheduled for execution by the respective operating system; and providing a common program arranged to compare the priorities of all operating systems and to pass control to the operating system having the highest priority. Accordingly, the invention resides in the idea that different operating systems can be run more efficiently on a single CPU by changing the priority of each operating system over time. In other words, each operating system has a flexible priority. | 01-21-2010 |
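The flexible-priority idea in the entry above admits a one-function sketch. The data layout (a queue of task priorities per OS) and the numeric convention are assumptions for illustration: each operating system's effective priority is the priority of its next scheduled task, and a common program passes control to the OS whose effective priority is highest.

```python
# Minimal sketch of the common program: each OS inherits the priority of
# its next scheduled task; control goes to the highest. Layout is assumed.
def next_os(os_queues):
    """os_queues: dict mapping OS name -> list of task priorities, next first.

    Returns the name of the OS to run next, or None if nothing is pending.
    """
    # An OS with an empty queue has no next task and is ignored.
    candidates = {name: q[0] for name, q in os_queues.items() if q}
    return max(candidates, key=candidates.get) if candidates else None
```

Because the comparison is re-run whenever queues change, an OS's priority varies over time exactly as the abstract describes.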
20100037228 | THREAD CONTROLLER FOR SIMULTANEOUS PROCESS OF DATA TRANSFER SESSIONS IN A PERSONAL TOKEN - The invention relates to a personal token running a series of applications, wherein said personal token includes a thread controller which transmits data from the applications to an external device in a cyclic way, a cycle being constituted by a series of data transfers from the applications to the external device, a cycle comprising a respective number of data transfers dedicated to each respective application which is different according to the respective application, the number of data transfers for a respective application in a cycle corresponding to a priority level of the application as taken into account by the thread controller. | 02-11-2010 |
20100037229 | Method and Device for Determining a Target State - In a method for determining a target state in a system having multiple components, system states of different priorities being selectable in the system as a function of an availability of the components, the following steps are provided: ascertaining whether a highest-priority system state is selectable; determining the highest-priority system state as the target state if the highest-priority system state is selectable; and ascertaining whether a next-higher-priority system state is selectable if the highest-priority system state is not selectable, and determining the next-higher-priority system state as the target state if said state is selectable. | 02-11-2010 |
20100037230 | METHOD FOR EXECUTING A PROGRAM RELATING TO SEVERAL SERVICES, AND THE CORRESPONDING ELECTRONIC SYSTEM AND DEVICE - The invention relates to a method for executing at least one program pertaining to at least one service included in a device having at least one memory space intended to be allocated for executing at least one of the services, and at least two access points for accessing services accessible from a network external to the device. The device associates a centralizing service with at least two access points and allocates a memory space to a service for receiving a request to connect to one of the services. The centralizing service is executed, making it possible to await reception of a connection request. In the absence thereof, only the centralizing service has the use of an allocated memory space. The invention also relates to a corresponding electronic device and system. | 02-11-2010 |
20100043003 | SPEEDY EVENT PROCESSING - A method for event positioning includes categorizing events into event groups based on a priority level, buffering the events in each event group into a group event queue, and determining an optimized position for events within each queue based, at least in part, on a processing time and an expected response time for each event in the group event queue. | 02-18-2010 |
20100043004 | METHOD AND SYSTEM FOR COMPUTER SYSTEM DIAGNOSTIC SCHEDULING USING SERVICE LEVEL OBJECTIVES - A system and method for automatically scheduling health diagnostics within a computer system is disclosed. In one embodiment, a method for automatically scheduling health diagnostics within a computer system using service level objectives (SLOs) includes reviewing the SLOs associated with each managed server, invoking each managed server for diagnosing computer system based on the associated SLOs, receiving diagnostic status data and computer system health data from each managed server, and analyzing the received diagnostic status data and computer system health data and implementing any needed one or more corrective actions based on the analysis and a predetermined configuration corrective action criteria. | 02-18-2010 |
20100050178 | METHOD AND APPARATUS TO IMPLEMENT SOFTWARE TO HARDWARE THREAD PRIORITY - The invention relates to a method and apparatus for execution scheduling of a program thread of an application program and executing the scheduled program thread on a data processing system. The method includes: providing an application program thread priority to a thread execution scheduler; selecting for execution the program thread from a plurality of program threads inserted into the thread execution queue, wherein the program thread is selected for execution using a round-robin selection scheme, and wherein the round-robin selection scheme selects the program thread based on an execution priority associated with the program thread; placing the program thread in a data processing execution queue within the data processing system; and removing the program thread from the thread execution queue after a successful execution of the program thread by the data processing system. | 02-25-2010 |
20100064290 | COMPUTER-READABLE RECORDING MEDIUM STORING A CONTROL PROGRAM, INFORMATION PROCESSING SYSTEM, AND INFORMATION PROCESSING METHOD - A computer-readable recording medium stores a control program that causes a computer to execute a process that includes: an obtaining procedure for obtaining work procedure manual information about a plurality of ordered works and one or more unordered works associated with a range of a predetermined order; an input procedure for receiving an input of a first work; a recognizing procedure for recognizing whether the first work matches a second work that is initially-ordered in unexecuted ordered works among the plurality of ordered works or a third work associated with a range including the order of the second work among the one or more unordered works; and a control procedure for allowing execution of the first work if the first work matches the second work or the third work and denying execution of the first work if the first work does not match any of the second and third works. | 03-11-2010 |
20100070977 | CONTROL OF THE RUNTIME BEHAVIOR OF PROCESSES - A method for controlling runtime behavior of processes of an automation system is provided. A priority is assigned to each of the processes, wherein an operating system of the automation system assigns runtime to the processes as a function of their priority. A scheduling service monitors starting and ending of all processes, wherein the highest priority available in the operating system is assigned to the scheduling service. Metadata is assigned to at least one process, the data including at least one rule on the priority of the process. The scheduling service analyzes the metadata and registers the process for monitoring when starting a process to which metadata is assigned, wherein the scheduling service monitors the registered processes for compliance with the at least one rule per process, and wherein the scheduling service modifies the priorities of the registered processes, the at least one rule of which is in non-compliance, according to the rule. | 03-18-2010 |
20100077399 | Methods and Systems for Allocating Interrupts In A Multithreaded Processor - A multithreaded processor capable of allocating interrupts is described. In one embodiment, the multithreaded processor includes an interrupt module and threads for executing tasks. The interrupt module can identify a priority for each thread based on a task priority for tasks being executed by the threads and assign an interrupt to a thread based at least on its priority. | 03-25-2010 |
20100083264 | Processing Batch Database Workload While Avoiding Overload - Processing batch database workload while avoiding overload. A method for efficiently processing a database workload in a computer system comprises receiving the workload, which comprises a batch of queries directed toward the database. Each query within the batch of queries is assigned a priority. Resources of the computer system are assigned in accordance with the priority. The batch of queries is executed in unison within the computer system in accordance with the priority of each query thereby resolving a conflict within the batch of queries for the resources of the computer system, hence efficiently processing the database workload and avoiding overload of the computer system. | 04-01-2010 |
20100083265 | SYSTEMS AND METHODS FOR SCHEDULING ASYNCHRONOUS TASKS TO RESIDUAL CHANNEL SPACE | 04-01-2010 |
20100083266 | METHOD AND APPARATUS FOR ACCESSING A SHARED DATA STRUCTURE IN PARALLEL BY MULTIPLE THREADS - A method of accessing a shared data structure in parallel by multiple threads in a parallel application program is disclosed, in which a lock of the shared data structure is granted to one thread of the multiple threads, an operation of the thread which acquires the lock is performed on the shared data structure, then an operation of each thread of the multiple threads which does not acquire the lock is buffered, and finally the buffered operations are performed on the shared data structure when another thread of the multiple threads subsequently acquires the lock. By using this method, the operations of other threads which do not acquire the lock of the shared data structure can be buffered automatically when the shared data structure is locked by one thread, and all the buffered operations can be performed when another thread acquires the lock. Therefore, when the shared data structure is modified, the occurrences of element shifts in the shared data structure can be greatly reduced and the access performance of the multiple threads can be improved. A corresponding apparatus and program product are also disclosed. | 04-01-2010 |
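The buffering scheme this abstract describes can be sketched as follows; the class, its drain policy, and the second buffer lock are illustrative assumptions, not the patented implementation:

```python
import threading

class BufferedSharedList:
    """Illustrative sketch: a thread that fails to take the lock buffers
    its operation; the next lock holder applies the buffered backlog."""

    def __init__(self):
        self._lock = threading.Lock()       # guards the shared list
        self._buf_lock = threading.Lock()   # guards the pending buffer
        self._data = []
        self._pending = []

    def append(self, value):
        if self._lock.acquire(blocking=False):
            try:
                self._drain()               # apply any buffered backlog first
                self._data.append(value)
            finally:
                self._lock.release()
        else:
            with self._buf_lock:            # lock is busy: buffer the operation
                self._pending.append(value)

    def _drain(self):
        with self._buf_lock:
            self._data.extend(self._pending)
            self._pending.clear()

    def snapshot(self):
        with self._lock:
            self._drain()
            return list(self._data)
```

Because writers that miss the lock only append to a side buffer, contended modifications do not repeatedly shift elements inside the shared structure itself.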
20100083267 | Multi-thread processor and its hardware thread scheduling method - A multi-thread processor in accordance with an exemplary aspect of the present invention includes a plurality of hardware threads each of which generates an independent instruction flow, a first thread scheduler that outputs a first thread selection signal designating a hardware thread to be executed in the next execution cycle, a first selector that outputs an instruction generated by the selected hardware thread according to the first thread selection signal, and an execution pipeline that executes an instruction output from the first selector, wherein whenever a hardware thread is executed in the execution pipeline, the first thread scheduler updates the priority rank of the executed hardware thread and outputs the first thread selection signal in accordance with the updated priority rank. | 04-01-2010 |
20100088706 | User Tolerance Based Scheduling Method for Aperiodic Real-Time Tasks - An apparatus comprising at least one processor configured to implement a method comprising analyzing a plurality of tasks, determining a privilege level for each of the tasks, determining a schedule for each of the tasks, and scheduling the tasks for execution based on the privilege level and the schedule of each task. Included is a memory comprising instructions for determining a privilege level for each of a plurality of tasks, wherein the privilege levels comprise periodic real-time, aperiodic real-time, and non-real time, determining an execution time for each of the tasks, and scheduling the tasks for execution on a processor based on the privilege level and the execution time of each task. | 04-08-2010 |
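One plausible reading of the scheduling rule above is: order by privilege level first, then by execution time within a level. The rank values and shortest-job-first tie-break are assumptions for illustration:

```python
# Assumed ordering of privilege levels: lower rank is scheduled first.
RANK = {"periodic_rt": 0, "aperiodic_rt": 1, "non_rt": 2}

def schedule(tasks):
    """Order tasks by privilege level, breaking ties by execution time
    (shortest-job-first within a level)."""
    return sorted(tasks, key=lambda t: (RANK[t["level"]], t["exec_time"]))

jobs = [
    {"name": "log",    "level": "non_rt",       "exec_time": 5},
    {"name": "sensor", "level": "periodic_rt",  "exec_time": 2},
    {"name": "alarm",  "level": "aperiodic_rt", "exec_time": 1},
    {"name": "ctrl",   "level": "periodic_rt",  "exec_time": 1},
]
order = [t["name"] for t in schedule(jobs)]   # ctrl, sensor, alarm, log
```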
20100095299 | MIXED WORKLOAD SCHEDULER - A mixed workload scheduler and operating method efficiently handle diverse queries ranging from short less-intensive queries to long resource-intensive queries. A scheduler is configured for scheduling mixed workloads and comprises an analyzer and a schedule controller. The analyzer detects execution time and wait time of a plurality of queries and balances average stretch and maximum stretch of scheduled queries wherein query stretch is defined as a ratio of a sum of wait time and execution time to execution time of a query. The schedule controller modifies scheduling of queries according to service level differentiation. | 04-15-2010 |
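The stretch metric the abstract defines is simple to state in code; the example queries are invented to show why balancing average and maximum stretch matters:

```python
def stretch(wait, exec_time):
    """Query stretch as defined in the abstract:
    (wait time + execution time) / execution time."""
    return (wait + exec_time) / exec_time

# A short query that waited long stretches badly; a long one barely at all.
queries = [(9.0, 1.0), (2.0, 10.0)]            # (wait, exec) pairs
stretches = [stretch(w, e) for w, e in queries]
avg_stretch = sum(stretches) / len(stretches)  # 5.6
max_stretch = max(stretches)                   # 10.0
```

A scheduler minimizing only average stretch would favor the long query; bounding maximum stretch protects the short one.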
20100100883 | SYSTEM AND METHOD FOR SCHEDULING TASKS IN PROCESSING FRAMES - Methods and systems are provided for allocating available service capacity to a plurality of tasks in a data processing system having a plurality of processing channels, where each processing channel is utilized in accordance with a time division multiplex processing scheme. A method can include receiving in the data processing system the plurality of tasks to be allocated to the available service capacity and determining a task from among an unassigned set of the plurality of tasks having a requirement for available service capacity which is greatest. The method can also include identifying at least one of the plurality of processing channels that has an available service capacity greater than or equal to the requirement and selectively assigning the task to the processing channel having a remaining service capacity which least exceeds the requirement. | 04-22-2010 |
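The assignment rule above is essentially largest-demand-first with best-fit placement. A minimal sketch, with hypothetical task and channel names:

```python
def assign(tasks, channels):
    """Largest-demand-first, best-fit: each task goes to the channel whose
    remaining capacity least exceeds the task's requirement."""
    remaining = dict(channels)                 # channel -> spare capacity
    placement = {}
    for name, need in sorted(tasks.items(), key=lambda kv: -kv[1]):
        fits = {ch: cap for ch, cap in remaining.items() if cap >= need}
        if not fits:
            continue                           # cannot serve this task now
        best = min(fits, key=fits.get)         # tightest fit
        placement[name] = best
        remaining[best] -= need
    return placement
```

For example, with tasks `{"a": 5, "b": 3}` and channels `{"c1": 6, "c2": 10}`, task `a` lands on `c1` (the tighter fit) and `b` on `c2`.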
20100107168 | Scheduling for Real-Time Garbage Collection - Techniques are disclosed for schedule management. By way of example, a method for managing performance of tasks in threads associated with at least one processor comprises the following steps. One or more units of a first task type are executed. A count of the one or more units of the first task type executed is maintained. The count represents one or more credits accumulated by the processor for executing the one or more units of a first task type. One or more units of a second task type are executed. During execution of the one or more units of a second task type, a request to execute at least one further unit of the first task type is received. The amount of credits in the count is checked. When it is determined that there is sufficient credit in the count, the request to execute the at least one further unit of the first task type is forgone, and execution of the one or more units of the second task type continues. When it is determined that there is insufficient credit in the count, the at least one further unit of the first task type is executed. The first task type may be an overhead task type such as a garbage collection task type, and the second task type may be an application task type. | 04-29-2010 |
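The credit mechanism above can be sketched with the first task type read as garbage collection and the second as application work; the class and its one-credit-per-unit accounting are assumptions, not the patent's scheme:

```python
class CreditScheduler:
    """Sketch of the credit scheme: executed GC units bank credits; a later
    GC request is forgone while credit remains, else a unit runs now."""

    def __init__(self):
        self.credits = 0
        self.gc_done = 0

    def run_gc_unit(self):          # proactive overhead work earns credit
        self.gc_done += 1
        self.credits += 1

    def on_gc_request(self):        # arrives while application work runs
        if self.credits > 0:
            self.credits -= 1       # sufficient credit: forgo the request
            return "deferred"
        self.gc_done += 1           # insufficient credit: do the unit now
        return "executed"
```

Banked credits let application work continue uninterrupted as long as the overhead budget is already paid.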
20100107169 | PERIODICAL TASK EXECUTION APPARATUS, PERIODICAL TASK EXECUTION METHOD, AND STORAGE MEDIUM - A periodical task execution apparatus executes one or more periodical tasks to be executed in a predetermined sequence, including a comparison section configured to compare, when an activation request for any one of the one or more periodical tasks is made, priority of a task | 04-29-2010 |
20100107170 | GROUP WORK SORTING IN A PARALLEL COMPUTING SYSTEM - A “group work sorting” technique is used in a parallel computing system that executes multiple items of work across multiple parallel processing units, where each parallel processing unit processes one or more of the work items according to their positions in a prioritized work queue that corresponds to the parallel processing unit. When implementing the technique, one or more of the parallel processing units receives a new work item to be placed into a first work queue that corresponds to the parallel processing unit and receives data that indicates where one or more other parallel processing units would prefer to place the new work item in the prioritized work queues that correspond to the other parallel processing units. The parallel processing unit uses the received data as a guide in placing the new work item into the first work queue. | 04-29-2010 |
20100115522 | MECHANISM TO CONTROL HARDWARE MULTI-THREADED PRIORITY BY SYSTEM CALL - A method, a system and a computer program product for controlling the hardware priority of hardware threads in a data processing system. A Thread Priority Control (TPC) utility assigns a primary level and one or more secondary levels of hardware priority to a hardware thread. When a hardware thread initiates execution in the absence of a system call, the TPC utility enables execution based on the primary level. When the hardware thread initiates execution within a system call, the TPC utility dynamically adjusts execution from the primary level to the secondary level associated with the system call. The TPC utility adjusts hardware priority levels in order to: (a) raise the hardware priority of one hardware thread relative to another; (b) reduce energy consumed by the hardware thread; and (c) fulfill requirements of time critical hardware sections. | 05-06-2010 |
20100115523 | METHOD AND APPARATUS FOR ALLOCATING TASKS AND RESOURCES FOR A PROJECT LIFECYCLE - The present invention relates to the allocation of resources to address scope items against an iteration of a project based on a rule set described by a decision matrix and threshold values. Rather than changing work item start and end dates based on resource availability, the present invention adds, modifies, and removes content from a collection of scope items and allocates them to resources based on the skills required, the priority, estimated work and target iteration of the scope items. | 05-06-2010 |
20100115524 | SYSTEM AND METHOD FOR THREAD PROCESSING ROBOT SOFTWARE COMPONENTS - An apparatus for thread processing robot software components includes a data port unit for storing input data in a buffer and then processing the data in a periodic execution mode or in a dedicated execution mode; an event port unit for processing an input event in a passive execution mode; and a method port unit for processing an input method call in the passive execution mode by calling a user-defined method corresponding to the method call. In the periodic execution mode, the data is processed by using an execution thread according to a period of a corresponding component. In the dedicated execution mode, a dedicated thread for the data is created and the data is processed by using the dedicated thread. | 05-06-2010 |
20100115525 | METHOD FOR DYNAMICALLY ENABLING THE EXPANSION OF A COMPUTER OPERATING SYSTEM - A method for scheduling tasks in a computer operating system comprises a background task creating at least one registered service. The background task provides an execution presence and a data presence to a registered service and ranks the registered services according to the requirements of each registered service. The background task also allocates an execution presence and a data presence according to each of the registered services such that each of the registered services is given an opportunity to be scheduled in the dedicated pre-assigned time slice. | 05-06-2010 |
20100122260 | Preventing Delay in Execution Time of Instruction Executed by Exclusively Using External Resource - Disclosed are computer systems, a plurality of methods and a computer program for preventing a delay in execution time of one or more instructions. The computer system includes: a lock unit for executing an instruction to acquire exclusive-use of the external resource and an instruction to release the exclusive-use of the external resource in the one or more threads; a counter unit for increasing or decreasing a value of a corresponding one of counters respectively associated with the threads; and a controller for controlling an execution order of the instructions to be executed by exclusively using the external resource and instructions that cause a delay in the execution time of the instructions to be executed by exclusively using the external resource. | 05-13-2010 |
20100125848 | MECHANISMS TO DETECT PRIORITY INVERSION - A method, computer program product, and device are provided for detecting and identifying priority inversion. A higher priority thread and a lower priority thread are received. A debugging application for debugging is executed. The lower priority thread requests and holds a resource. A break point is hit by the lower priority thread. The lower priority thread is preempted by the higher priority thread, and debugging stops until the higher priority thread completes. The higher priority thread requests the resource being held by the lower priority thread. It is determined whether priority inversion occurs. | 05-20-2010 |
20100125849 | Idle Task Monitor - A system and method are provided for determining processor usable idle time in a system employing a software instruction processor. The method establishes an idle task with a lowest processor priority for a processor executing application software instructions, and uses the processor to execute an idle task. The method ceases to execute the idle task in response to the processor executing application software instructions. The amount of periodic idle task execution is determined and stored in a tangible memory medium. For example, idle time amounts can be determined per a unit of time, i.e. a percentage per second. In one aspect, the method generates an idle task report. The report can be a periodic report expressing the duration of idle task execution per time period, or a course of execution report expressing idle task start times, idle task stop times, and durations between the corresponding start and stop times. | 05-20-2010 |
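The idle-time accounting the abstract describes reduces to summing the intervals the lowest-priority idle task ran within each period; the function name and interval format are illustrative assumptions:

```python
def idle_percentage(trace, period=1.0):
    """Processor-usable idle time per period, from (start, stop) intervals
    during which the lowest-priority idle task actually ran."""
    idle = sum(stop - start for start, stop in trace)
    return 100.0 * idle / period
```

A trace of `[(0.0, 0.2), (0.5, 0.8)]` over a one-second period yields 50% idle, i.e. half the period was available for additional work.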
20100125850 | Method and Systems for Processing Critical Control System Functions - A method for processing critical control system functions is described. The method includes determining a level of criticality of at least one data packet and directing critical data packets to at least one of a critical computational job queue and a critical memory portion. The method also includes directing non-critical data packets to at least one of a non-critical computational job queue and a non-critical memory portion and executing control system functions corresponding to critical data packets stored in the critical computational job queue. The method also includes executing control system functions corresponding to non-critical data packets stored in the non-critical computational job queue when no critical control system functions are stored in the critical computational job queue. | 05-20-2010 |
20100131955 | Highly distributed parallel processing on multi-core device - There is provided a highly distributed multi-core system with an adaptive scheduler. By resolving data dependencies in a given list of parallel tasks and selecting a subset of tasks to execute based on provided software priorities, applications can be executed in a highly distributed manner across several types of slave processing cores. Moreover, by overriding provided priorities as necessary to adapt to hardware or other system requirements, the task scheduler may provide for low-level hardware optimizations that enable the timely completion of time-sensitive workloads, which may be of particular interest for real-time applications. Through this modularization of software development and hardware optimization, the conventional demand on application programmers to micromanage multi-core processing for optimal performance is thus avoided, thereby streamlining development and providing a higher quality end product. | 05-27-2010 |
20100146511 | POLICY BASED DATA PROCESSING METHOD AND SYSTEM - Provided is a policy-based data processing system and method. A pattern analyzer of the data processing system generates a pattern handler based on a pattern, schedules the generated pattern handler based on a policy to filter and group data, generates a processing function corresponding to a process type for each data type of event data as an object module, and invokes it through a pattern handler. | 06-10-2010 |
20100146512 | Mechanisms for Priority Control in Resource Allocation - Mechanisms for priority control in resource allocation are provided. With these mechanisms, when a unit makes a request to a token manager, the unit identifies the priority of its request as well as the resource which it desires to access and the unit's resource access group (RAG). This information is used to set a value of a storage device associated with the resource, priority, and RAG identified in the request. When the token manager generates and grants a token to the RAG, the token is in turn granted to a unit within the RAG based on a priority of the pending requests identified in the storage devices associated with the resource and RAG. Priority pointers are utilized to provide a round-robin fairness scheme between high and low priority requests within the RAG for the resource. | 06-10-2010 |
20100162255 | DEVICE FOR RECONFIGURING A TASK PROCESSING CONTEXT - The present invention pertains to the field of onboard flight management systems embedded in aircraft. The invention relates to a reconfiguration device ( | 06-24-2010 |
20100169890 | URGENCY BASED SCHEDULING - The present invention relates to a method of scheduling for multi-function radars. Specifically, the present invention relates to an efficient urgency-based scheduling method. | 07-01-2010 |
20100175067 | METHOD FOR PROCESSING APPLICATION COMMANDS FROM PHYSICAL CHANNELS USING A PORTABLE ELECTRONIC DEVICE AND CORRESPONDING DEVICE AND SYSTEM - The invention relates to a method for processing at least two application commands from at least two physical communication channels respectively using a portable electronic device. The method includes receiving each application command from one of the physical communication channels, determining a priority level associated with each application command, comparing priority levels and identifying the application command with the highest priority among the application commands and processing of the application command with highest priority. The invention also relates to the portable electronic device and an electronic system including a host device cooperating with such a portable electronic device. | 07-08-2010 |
20100180278 | Resource management apparatus and computer program product - Provided is a resource management apparatus for determining allocation of a resource to be consumed or supplied by each of a plurality of applications within a predetermined unit time in a bidding process. The resource management apparatus includes a bid value calculating unit configured to calculate a bid value representing a hypothetical price of the resource, a CPU price adjusting unit configured to adjust the bid value supplied by an application, which has a smaller resource consumption amount than another application, to be greater than the bid value of the other application, and a bid managing unit configured to allocate the resource to each of the plurality of applications taking the adjusted bid value into account. | 07-15-2010 |
20100180279 | FIELD CONTROL DEVICE AND FIELD CONTROL METHOD - A field control device is provided. The field control device includes: a task executing unit configured to selectively and sequentially execute a control task relating to a field control and other tasks in a same control period; and a priority switching unit configured to switch a relative priority of the control task relative to the other tasks in the control period, wherein the priority is a priority of an execution sequence of tasks in the task executing unit. The priority switching unit is configured to: i) set the priority higher than a certain priority, before the control task is started; and ii) set the priority lower than the certain priority, after the control task is ended. | 07-15-2010 |
20100186016 | DYNAMIC PROCESS PRIORITY DETECTION AND MITIGATION - Described herein are techniques for dynamically monitoring and rebalancing priority levels of processes running on a computing node. Runaway processes and starved processes can be proactively detected and prevented, thereby making such a node perform significantly better and more responsively than otherwise. | 07-22-2010 |
20100192153 | SELECTING EXECUTING REQUESTS TO PREEMPT - Requests that are executing when an application is determined to be in an overload condition are preempted. To select the executing requests to preempt, a value for each executing request is determined. Then, executing requests are selected for preemption based on the values. | 07-29-2010 |
20100192154 | SEPARATION KERNEL WITH MEMORY ALLOCATION, REMOTE PROCEDURE CALL AND EXCEPTION HANDLING MECHANISMS - A computer-implemented system ( | 07-29-2010 |
20100199282 | LOW BURDEN SYSTEM FOR ALLOCATING COMPUTATIONAL RESOURCES IN A REAL TIME CONTROL ENVIRONMENT - A low processing overhead resource manager for a control system uses control system state as a proxy for processing resource capacity, making judgments about execution of asynchronous services based on empirically derived data linked to the states. | 08-05-2010 |
20100199283 | DATA PROCESSING UNIT - When a CPU is processing a first task by using an accelerator for use in image processing, if a request for allocating the accelerator to a process of a second task is issued, the CPU sets an interruption flag when the process of the second task is prioritized over a process of the first task, and the accelerator is allowed to be used for the process of the second task when a state in which the interruption flag is set is detected at a timing predetermined in accordance with a process stage of the accelerator for the first task. Since the timing of detecting the set interruption flag is determined in accordance with a progress state of the process of the task to be interrupted, task switching can be made at a timing of reducing overhead for save and return for the process of the task to be interrupted. | 08-05-2010 |
20100199284 | INFORMATION PROCESSING APPARATUS, SELF-TESTING METHOD, AND STORAGE MEDIUM - An information processing apparatus includes: a storage unit; testing units; read units that respectively read priority information, class information, and progress information from the storage unit; and an assignment unit that assigns an unexecuted testing process to a testing unit according to the read information, and that rewrites the progress information according to assignment of the unexecuted testing process. The testing units execute testing processes of the information processing apparatus. The priority information indicates a priority defined according to dependency among the testing processes in executing the testing processes. The class information associates a class with each testing process and indicates a range of the testing unit(s) to execute the associated testing process. The progress information indicates which testing process is uncompleted. | 08-05-2010 |
20100205607 | METHOD AND SYSTEM FOR SCHEDULING TASKS IN A MULTI PROCESSOR COMPUTING SYSTEM - A multi processor computing system managing tasks based on the health index of the plurality of processors and the priority of tasks to be scheduled. The method comprises receiving the tasks to be scheduled on the computing system; preparing a queue of the tasks based on a scheduling algorithm; computing a health index value for each processor of the computing system; and scheduling the tasks on processors based on the health index value of the processors. A task from a processor with a lower health index may be moved to an available processor with a higher health index. | 08-12-2010 |
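A minimal sketch of the pairing step, assuming (since the abstract does not specify) that the highest-priority tasks are handed to the healthiest processors in round-robin order:

```python
def schedule_by_health(tasks, health):
    """Hand out queued tasks (highest priority first) across processors
    ranked by descending health index."""
    ranked = sorted(health, key=health.get, reverse=True)
    queue = sorted(tasks, key=lambda t: t["priority"], reverse=True)
    return {t["name"]: ranked[i % len(ranked)] for i, t in enumerate(queue)}
```

With health indexes `{"p0": 0.9, "p1": 0.4}`, the priority-9 task goes to `p0` and the priority-5 task to `p1`.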
20100211954 | PRACTICAL CONTENTION-FREE DISTRIBUTED WEIGHTED FAIR-SHARE SCHEDULER - Embodiments of the invention provide a method, system and computer program product for scheduling tasks in a computer system. In an embodiment, the method comprises receiving a multitude of sets of tasks, and placing the tasks in one or more task queues. The tasks are taken from the one or more task queues and placed in a priority queue according to a first rule. The tasks in the priority queue are assigned to a multitude of working threads according to a second rule based, in part, on share values given to the tasks. In an embodiment, the tasks of each of the sets are placed in a respective one task queue; and all of the tasks in the priority queue from each of the task queues, are assigned as a group to one of the working threads. | 08-19-2010 |
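One classical way to realize share-based ordering in a priority queue, as the abstract's second rule suggests, is stride scheduling; this sketch substitutes that well-known technique and does not claim to be the patented rules:

```python
import heapq

def fair_share(shares, slots):
    """Stride-style weighted fair sharing: the group with the smallest
    virtual time is served next, and its clock advances by 1/share, so
    service counts converge to the share ratios."""
    heap = [(0.0, name) for name in sorted(shares)]
    heapq.heapify(heap)
    served = {name: 0 for name in shares}
    for _ in range(slots):
        vtime, name = heapq.heappop(heap)
        served[name] += 1
        heapq.heappush(heap, (vtime + 1.0 / shares[name], name))
    return served
```

With shares 3:1 over eight slots, the two groups receive six and two slots respectively, matching their share values.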
20100211955 | CONTROLLING 32/64-BIT PARALLEL THREAD EXECUTION WITHIN A MICROSOFT OPERATING SYSTEM UTILITY PROGRAM - A method of programming operating system (O/S) utility C and C++ programs within the Microsoft professional development 32/64-bit parallel threads environment includes providing a computer unit, which can be a 32/64-bit Microsoft PC O/S or a 32/64-bit Microsoft Server O/S, and a Microsoft development tool, namely the Microsoft Visual Studio Development Environment for C and C++, for either the 32-bit O/S or the 64-bit O/S. | 08-19-2010 |
20100218191 | Apparatus and Method for Processing Management Requests - Embodiments of the present invention provide a method of processing a management request, comprising determining a priority level of the management request based upon one or more predetermined priority criteria. In some embodiments, the management requests are based on a Common Information Model (CIM) and control or monitor operation of an entity. | 08-26-2010 |
20100229173 | Managing Latency Introduced by Virtualization - A component manages and minimizes latency introduced by virtualization. The virtualization component determines that a currently scheduled guest process has executed functionality responsive to which the virtualization component is to execute a virtualization based operation, wherein the virtualization based operation is one that is not visible to the guest operating system. The virtualization component causes the guest operating system to de-schedule the currently scheduled guest process and schedule at least one separate guest process. The virtualization component then executes the virtualization based operation concurrently with the execution of the at least one separate guest process. Responsive to completing the execution of the virtualization based operation, the virtualization component causes the guest operating system to re-schedule the de-scheduled guest process. | 09-09-2010 |
20100229174 | Synchronizing Resources in a Computer System - Synchronizing processes in a computer system includes creating a predictability model for a process. The predictability model establishes a predicted time slot for a resource that will be needed by the process. The method further requires establishing a predictive request for the resource at the predicted time slot. The predictive request establishes a place holder associated with the process. In addition, the method requires accessing another resource needed by the process for a period of time before the predicted time slot, submitting a request for the resource at the predicted time slot, and processing the request for the process at the resource. | 09-09-2010 |
20100235842 | WORKFLOW PROCESSING SYSTEM, AND METHOD FOR CONTROLLING SAME - According to the present invention, any deficiency caused by the use of a resource, which is in a different state from that assumed upon workflow registration, can be prevented. The workflow processing method of the present invention acquires and holds a resource or feature quantity, which is required upon workflow execution, so as to employ it upon workflow execution. In this manner, after execution of the workflow, the present invention can avoid the workflow execution result which is not intended by a user who has registered the workflow. | 09-16-2010 |
20100242041 | Real Time Multithreaded Scheduler and Scheduling Method - In a particular embodiment, a method is disclosed that includes receiving an interrupt at a first thread, the first thread including a lowest priority thread of a plurality of executing threads at a processor at a first time. The method also includes identifying a second thread, the second thread including a lowest priority thread of a plurality of executing threads at a processor at a second time. The method further includes directing a subsequent interrupt to the second thread. | 09-23-2010 |
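The steering rule above amounts to re-selecting the lowest-priority executing thread each time an interrupt must be placed; the thread names and priority values here are invented for illustration:

```python
def pick_interrupt_target(threads):
    """Direct the interrupt at the lowest-priority executing thread;
    re-evaluated per interrupt because thread priorities change."""
    return min(threads, key=threads.get)

threads = {"t0": 7, "t1": 2, "t2": 5}       # thread -> current priority
first = pick_interrupt_target(threads)      # 't1'
threads[first] = 9                          # t1's priority was later raised
second = pick_interrupt_target(threads)     # now 't2'
```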
20100251250 | LOCK-FREE SCHEDULER WITH PRIORITY SUPPORT - Techniques for implementing a lock-free scheduler with ordering support are described herein. In addition to the foregoing, other aspects are described in the claims, drawings, and text forming a part of the present disclosure. It can be appreciated by one of skill in the art that one or more various aspects of the disclosure may include but are not limited to circuitry and/or programming for effecting the herein-referenced aspects of the present disclosure; the circuitry and/or programming can be virtually any combination of hardware, software, and/or firmware configured to effect the herein-referenced aspects depending upon the design choices of the system designer. | 09-30-2010 |
20100251251 | APPARATUS AND METHOD FOR CPU LOAD CONTROL IN MULTITASKING ENVIRONMENT - An apparatus and a method for a Central Processing Unit (CPU) load control in a portable terminal capable of multitasking are provided. The method includes determining, by an application, an expected CPU load from a load table, requesting, by the application, a determination whether the expected CPU load is acceptable by providing the expected CPU load to a CPU load manager, providing, by the CPU load manager, a response including a result indicating whether the expected CPU load is acceptable or not to the application and executing, by the CPU, the application based on the result. | 09-30-2010 |
20100269115 | Managing Threads in a Wake-and-Go Engine - A wake-and-go mechanism is provided for a data processing system. The wake-and-go mechanism detects a thread running on a first processing unit within a plurality of processing units that is waiting for an event that modifies a data value associated with a target address. The wake-and-go mechanism creates a wake-and-go instance for the thread by populating a wake-and-go storage array with the target address. The operating system places the thread in a sleep state. Responsive to detecting the event that modifies the data value associated with the target address, the wake-and-go mechanism assigns the wake-and-go instance to a second processing unit within the plurality of processing units. The operating system on the second processing unit places the thread in a non-sleep state. | 10-21-2010 |
20100269116 | SCHEDULING AND/OR ORGANIZING TASK EXECUTION FOR A TARGET COMPUTING PLATFORM - Techniques are generally described relating to methods, apparatuses and articles of manufactures for scheduling and/or organizing execution of tasks on a computing platform. In various embodiments, the method may include identifying successively one or more critical time intervals, and scheduling and/or organizing task execution for each of the one or more identified critical time intervals. In various embodiments, one or more tasks to be executed may be scheduled to execute based in part on their execution completion deadlines. In various embodiments, organizing one or more tasks to execute may include selecting a virtual operating mode of the platform using multiple operating speeds lying on a convexity energy-speed envelope of the platform. Intra-task delay caused by switching operating mode may be considered. Other embodiments may also be described and/or claimed. | 10-21-2010 |
20100269117 | Method for Monitoring System Resources and Associated Electronic Device - A method, for monitoring resources of a system for performing a first task and a second task, includes calculating a first completion count of the first task; calculating a second completion count of the second task; and determining whether the resources of the system are exhausted according to the first completion count and the second completion count. | 10-21-2010 |
20100275211 | Method and apparatus for scheduling the issue of instructions in a multithreaded microprocessor - There is provided a method to dynamically determine which instructions from a plurality of available instructions to issue in each clock cycle in a multithreaded processor capable of issuing a plurality of instructions in each clock cycle, comprising the steps of: determining a highest priority instruction from the plurality of available instructions; determining the compatibility of the highest priority instruction with each of the remaining available instructions; and issuing the highest priority instruction together with other instructions compatible with the highest priority instruction in the same clock cycle; wherein the highest priority instruction cannot be a speculative instruction. The effect of this is that speculative instructions are only ever issued together with at least one non-speculative instruction. | 10-28-2010 |
20100281485 | Method For Changing Over A System Having Multiple Execution Units - A system having multiple execution units and a method for its changeover are provided. The system having multiple execution units has at least two execution units, and may be changed over between a performance operating mode, in which the execution units execute different programs, and a comparison operating mode, in which the execution units execute the same program. The system has a scheduler, which is called by an execution unit to ascertain the next program to be executed. The remaining execution units are prompted to also call the scheduler if the program ascertained by the first called scheduler is to be executed in a comparison operating mode. A changeover unit changes over the system having multiple execution units from the performance operating mode into the comparison operating mode if the program to be executed ascertained by the last called scheduler is to be executed in the comparison operating mode, this ascertained program to be executed being executed as the program having the highest priority by all execution units after the changeover of the system into the comparison operating mode. | 11-04-2010 |
20100287558 | THROTTLING OF AN ITERATIVE PROCESS IN A COMPUTER SYSTEM - Throttling of an iterative process in a computer system is disclosed. Embodiments of the present invention focus on non-productive iterations of an iterative process in a computer system. The number of productive iterations of the iterative process during a current timeframe is determined while the iterative process is executing. A count of the number of process starts for the iterative process during the current timeframe is stored. The count can be normalized to obtain a number of units of work handled during the current timeframe. A throttling schedule can be calculated, and the throttling schedule can be stored in the computer system. The throttling schedule can then be used to determine a delay time between iterations of the iterative process for a new timeframe. A formula can be used to calculate the throttling schedule. The throttling schedule can be overridden in accordance with a service level agreement (SLA), as well as for other reasons. | 11-11-2010 |
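Entry 20100287558 above states that a formula can be used to calculate the throttling schedule but does not specify one. As an illustrative sketch only — the function name and the particular formula here are assumptions, not the patent's — one plausible rule scales the inter-iteration delay for the next timeframe by the fraction of non-productive process starts observed in the current timeframe:

```python
def throttle_delay(total_starts, productive_iterations, max_delay_s=5.0):
    """Compute a per-iteration delay for the next timeframe.

    Hypothetical formula (the abstract does not specify one): the delay
    grows with the fraction of process starts in the current timeframe
    that produced no productive iteration.
    """
    if total_starts == 0:
        return 0.0
    wasted_fraction = max(0.0, 1.0 - productive_iterations / total_starts)
    return max_delay_s * wasted_fraction

# A timeframe with 100 starts but only 10 productive iterations yields a
# long delay; a fully productive timeframe yields no delay at all.
slow = throttle_delay(100, 10)    # mostly wasted starts -> near max delay
fast = throttle_delay(100, 100)   # fully productive -> no throttling
```

An SLA override, as the abstract mentions, would simply clamp or replace the returned delay for processes covered by the agreement.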
20100287559 | ENERGY-AWARE COMPUTING ENVIRONMENT SCHEDULER - A method includes receiving a process request, identifying a current state of a device in which the process request is to be executed, calculating a power consumption associated with an execution of the process request, and assigning an urgency for the process request, where the urgency corresponds to a time-variant parameter to indicate a measure of necessity for the execution of the process request. The method further includes determining whether the execution of the process request can be delayed to a future time or not based on the current state, the power consumption, and the urgency, and causing the execution of the process request, or causing a delay of the execution of the process request to the future time, based on a result of the determining. | 11-11-2010 |
20100299670 | SELECTIVE I/O PRIORITIZATION BY SYSTEM PROCESS/THREAD - Systems, methods, and apparatus to identify and prioritize application processes in one or more subsystems. Some embodiments identify applications and processes associated with each application executing on a system, apply one or more priority rules to the identified applications and processes to generate priority information, and transmit the priority information to a subsystem. The subsystem then matches received requests with the priority information and services the processes according to the priority information. | 11-25-2010 |
20100306778 | LOCALITY-BASED SCHEDULING IN CONTINUATION-BASED RUNTIMES - A computer system establishes an execution environment for executing activities in a continuation based runtime including instantiating an activity scheduler configured to perform the following: scheduling activities for execution in the CBR. The activity scheduler resolves the scheduled activity's arguments and variables prior to invoking the scheduled activity using the activity's unique context. The activity scheduler also determines, based on the activity's unique context, whether the scheduled activity comprises a work item that is to be queued at the top of the execution stack and, based on the determination, queues the work item to the execution stack. The computer system executes the work items of the scheduled activity as queued in the execution stack of the established execution environment in the CBR. | 12-02-2010 |
20100306779 | WORKFLOW MANAGEMENT SYSTEM AND METHOD - Systems and methods improve the equitable distribution of the processing capacity of a computing device processing work items retrieved from multiple queues in a workflow system. A retrieval priority is determined for each of the plurality of queues and work items are retrieved from each of the multiple queues according to the retrieval priority. The retrieved work items are then stored in a central data structure. Multiple processing components process the work items stored in the central data structure. The number of processing components is selectively adjusted to maximize efficiency. | 12-02-2010 |
20100325633 | Searching Regular Expressions With Virtualized Massively Parallel Programmable Hardware - Logic and state information suitable for execution on a programmable hardware device may be generated from a task, such as evaluating a regular expression against a corpus. Hardware capacity requirements of the logic and state information on the programmable hardware device may be estimated. Once estimated, a plurality of the logic and state information generated from a plurality of tasks may be distributed into sets such that the logic and state information of each set fits within the hardware capacity of the programmable hardware device. The tasks within each set may be configured to execute in parallel on the programmable hardware device. Sets may then be executed in series, permitting virtualization of the resources. | 12-23-2010 |
20100325634 | Method of Deciding Migration Method of Virtual Server and Management Server Thereof - Occupancy amount of physical resource of a virtual server (VS) is calculated based on maximum physical resource amount indicating performance of a physical server (PS), the occupied virtual resource coefficient indicating relation of physical resource amount used by the VS to the physical resource amount allocated to the VS and the allocated physical resource coefficient indicating relation of the allocated physical resource to the maximum physical resource amount of the PS, and change value of the occupied physical resource amount from a predetermined occupied physical resource amount is calculated based on the calculated occupancy amount and the predetermined occupied physical resource amount. The migration time required of the VS is calculated based on the calculated change value, variation ratio indicating degree of influence exerted by change of the occupied virtual resource coefficient of the VS on the required migration time and reference execution time set based on the predetermined occupied physical resource amount. | 12-23-2010 |
20100325635 | Method for correct-by-construction development of real-time-systems - Methods and implementations for constructing a real-time system are disclosed. The real-time system includes at least one module, each module having at least one mode. According to an embodiment, a method comprises: defining a mode period for each mode for a repeated execution of the respective mode by the corresponding module; for each mode, defining one or more synchronous tasks to be executed by the real-time system, whereby each synchronous task is associated with a logical execution time during which the task execution has to be completed; defining an integer number of time-slots for the mode period of each mode; assigning to each task at least one time slot during which the task is to be executed. | 12-23-2010 |
20100333098 | DYNAMIC TAG ALLOCATION IN A MULTITHREADED OUT-OF-ORDER PROCESSOR - Various techniques for dynamically allocating instruction tags and using those tags are disclosed. These techniques may apply to processors supporting out-of-order execution and to architectures that support multiple threads. A group of instructions may be assigned a tag value from a pool of available tag values. A tag value may be usable to determine the program order of a group of instructions relative to other instructions in a thread. After the group of instructions has been (or is about to be) committed, the tag value may be freed so that it can be re-used on a second group of instructions. Tag values are dynamically allocated between threads; accordingly, a particular tag value or range of tag values is not dedicated to a particular thread. | 12-30-2010 |
20100333099 | MESSAGE SELECTION FOR INTER-THREAD COMMUNICATION IN A MULTITHREADED PROCESSOR - A method and circuit arrangement process a workload in a multithreaded processor that includes a plurality of hardware threads. Each thread receives at least one message carrying data to process the workload through a respective inbox from among a plurality of inboxes. A plurality of messages are received at a first inbox among the plurality of inboxes, wherein the first inbox is associated with a first thread among the plurality of hardware threads, and wherein each message is associated with a priority. From the plurality of received messages, a first message is selected to process in the first thread based on that first message being associated with the highest priority among the received messages. A second message is selected to process in the first thread based on that second message being associated with the earliest time stamp among the received messages and in response to processing the first message. | 12-30-2010 |
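The two-stage selection rule in entry 20100333099 — the first message is chosen by highest priority, and the follow-on message by earliest timestamp — can be sketched in a few lines. This is an illustrative model only; the message fields and function names are assumptions, not the patent's hardware inbox design:

```python
def select_first(messages):
    """Pick the message with the highest priority among those received."""
    return max(messages, key=lambda m: m["priority"])

def select_next(messages):
    """After the first message is processed, pick the message with the
    earliest timestamp among those remaining."""
    return min(messages, key=lambda m: m["timestamp"])

# A hypothetical inbox for one hardware thread.
inbox = [
    {"id": "a", "priority": 1, "timestamp": 30},
    {"id": "b", "priority": 5, "timestamp": 20},
    {"id": "c", "priority": 3, "timestamp": 10},
]

first = select_first(inbox)   # "b": highest priority wins the first slot
inbox.remove(first)
second = select_next(inbox)   # "c": earliest timestamp among the rest
```

The switch from priority order to timestamp order after the first selection keeps urgent messages responsive while preventing starvation of older, lower-priority messages.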
20100333100 | VIRTUAL MACHINE CONTROL DEVICE, VIRTUAL MACHINE CONTROL METHOD, AND VIRTUAL MACHINE CONTROL PROGRAM - In a case where a task execution unit ( | 12-30-2010 |
20100333101 | VIRTUALISED RECEIVE SIDE SCALING - A method for receiving packet data by means of a data processing system having a plurality of processing cores and supporting a network interface device and a set of at least two software domains, each software domain carrying a plurality of data flows and each supporting at least two delivery channels, the method comprising: receiving at the network interface device packet data that is part of a particular data flow; selecting in dependence on one or more characteristics of the packet data a delivery channel of a particular one of the software domains, said delivery channel being associated with a particular one of the processing cores of the system; and mapping the incoming packet data into said selected delivery channel such that receive processing of the packet is performed by the same processing core that performed receive processing for preceding packets of that data flow. | 12-30-2010 |
20100333102 | Distributed Real-Time Operating System - A distributed control system and methods of operating such a control system are disclosed. In one embodiment, the distributed control system is operated in a manner in which interrupts are at least temporarily inhibited from being processed to avoid excessive delays in the processing of non-interrupt tasks. In another embodiment, the distributed control system is operated in a manner in which tasks are queued based upon relative timing constraints that they have been assigned. In a further embodiment, application programs that are executed on the distributed control system are operated in accordance with high-level and/or low-level requirements allocated to resources of the distributed control system. | 12-30-2010 |
20110004882 | Method and system for scheduling a thread in a multiprocessor system - A method for scheduling a thread on a plurality of processors that includes obtaining a first state of a first processor in the plurality of processors and a second state of a second processor in the plurality of processors, wherein the thread is last executed on the first processor, and wherein the first state of the first processor includes the state of a cache of the first processor, obtaining a first estimated instruction rate to execute the thread on the first processor using an estimated instruction rate function and the first state, obtaining a first estimated global throughput for executing the thread on the first processor using the first estimated instruction rate and the second state, obtaining a second estimated global throughput for executing the thread on the second processor using the second state, comparing the first estimated global throughput with the second estimated global throughput to obtain a comparison result, and executing the thread, based on the comparison result, on one selected from a group consisting of the first processor and the second processor, wherein the thread performs an operation on one of the plurality of processors. | 01-06-2011 |
20110004883 | Method and System for Job Scheduling - Logical processors/hardware contexts are assigned to different jobs/threads in a multithreaded/multicore environment. There are provided a number of different sorting algorithms, from which one is periodically selected on the basis of whether the present algorithm is giving satisfactory results or not. The period is preferably a super-context interval. The different sorting algorithms preferably include a software/OS priority. A second sorting algorithm may include sorting according to hardware performance measurements. The judgement of satisfactory performance is preferably based on the difference between a desired number of time quantum attributed per super-context switch interval to each job/thread and a real number of time quantum attributed per super-context switch interval to each job/thread. | 01-06-2011 |
20110010721 | Managing Virtualized Accelerators Using Admission Control, Load Balancing and Scheduling - A system and method is shown that includes an admission control module that resides in a management/driver domain, the admission control module to admit a domain that is part of a plurality of domains, into the computer system based upon one of a plurality of accelerators satisfying a resource request of the domain. The system and method also includes a load balancer module, which resides in the management/driver domain, the load balancer to balance at least one load from the plurality of domains across the plurality of accelerators. Further, the system and method also includes a scheduler module that resides in the management/driver domain, the scheduler to multiplex multiple requests from the plurality of domains to one of the plurality of accelerators. | 01-13-2011 |
20110010722 | MEMORY SWAP MANAGEMENT METHOD AND APPARATUS, AND STORAGE MEDIUM - A memory swap management method that can preferentially place in a primary storage device a process that has a high possibility of being executed next, thereby shortening the time to start executing the next process. A planned execution sequence of jobs is stored when there are a plurality of jobs waiting to be executed. A process as a swap-out candidate and a process as a swap-in candidate are determined based on the execution sequence and types of processes stored in the primary storage device. According to the determination, the process as the swap-out candidate is swapped out from the primary storage device to a secondary storage device, and the process as the swap-in candidate is swapped in from the secondary storage device into an area of the primary storage device freed as a result of the swap-out. | 01-13-2011 |
20110029978 | DYNAMIC MITIGATION OF THREAD HOGS ON A THREADED PROCESSOR - Systems and methods for efficient thread arbitration in a processor. A processor comprises a multi-threaded resource. The resource may include an array of entries which may be allocated by threads. A thread arbitration table corresponding to a given thread stores a high and a low threshold value in each table entry. A thread history shift register (HSR) indexes the table, wherein each bit of the HSR indicates whether the given thread is a thread hog. When the given thread has more allocated entries in the array than the high threshold of the table entry, the given thread is stalled from further allocating array entries. Similarly, when the given thread has fewer allocated entries in the array than the low threshold of the selected table entry, the given thread is permitted to allocate entries. In this manner, threads that hog dynamic resources can be mitigated such that more resources are available to other threads that are not thread hogs. This can result in a significant increase in overall processor performance. | 02-03-2011 |
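The high/low threshold pair in entry 20110029978 forms a hysteresis: a thread is stalled once its allocations exceed the high threshold and released only once they fall below the low one. A minimal sketch of that decision rule follows; the behavior between the two thresholds is not spelled out in the abstract, so holding the previous state there is an assumption:

```python
def update_stall(allocated, high, low, currently_stalled):
    """Hysteresis rule for thread-hog mitigation.

    allocated         -- entries currently held by the thread
    high, low         -- thresholds from the thread arbitration table
    currently_stalled -- whether the thread is presently stalled
    """
    if allocated > high:
        return True            # over the high threshold: stall allocation
    if allocated < low:
        return False           # under the low threshold: allow allocation
    return currently_stalled   # between thresholds: keep prior state (assumed)
```

The gap between the thresholds prevents a thread hovering near a single limit from oscillating between stalled and running every cycle.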
20110029979 | Systems and Methods for Task Execution on a Managed Node - Systems and methods for executing tasks on a managed node remotely coupled to a management node are provided. A management controller of the management node may be configured to determine at least one execution policy for a task, schedule the task for execution, receive system information data from the managed node, based at least on the received system information, determine if the received system information complies with the at least one execution policy, and if the received information complies with the at least one execution policy, forward the task from the management controller to the managed node for execution. | 02-03-2011 |
20110029980 | LOW DEPTH PROGRAMMABLE PRIORITY ENCODERS - An apparatus having a plurality of first circuits, second circuits, third circuits and fourth circuits is disclosed. The first circuits may be configured to generate a plurality of first signals in response to (i) a priority signal and (ii) a request signal. The second circuits may be configured to generate a plurality of second signals in response to the first signals. The third circuits may be configured to generate a plurality of enable signals in response to the second signals. The fourth circuits may be configured to generate collectively an output signal in response to (i) the enable signals and (ii) the request signal. A combination of the first circuits, the second circuits, the third circuits and the fourth circuits generally establishes a programmable priority encoder. The second signals may be generated independent of the enable signals. | 02-03-2011 |
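Entry 20110029980 describes its programmable priority encoder structurally, circuit by circuit. As behavioral intuition only — not the patent's logic design — one common form of a programmable priority encoder grants the first asserted request at or after a programmed priority position, wrapping around:

```python
def priority_encode(request, priority, width=8):
    """Model of a programmable priority encoder (illustrative, assumed
    round-robin-style behavior).

    request  -- bitmask of pending request lines
    priority -- bit index searched first (the programmable input)
    Returns the granted bit index, or None if no request is asserted.
    """
    for offset in range(width):
        idx = (priority + offset) % width
        if request & (1 << idx):
            return idx
    return None

# Requests pending on bits 1 and 5; the search starts at bit 3, so the
# grant wraps forward to bit 5 rather than the lower-numbered bit 1.
grant = priority_encode(0b00100010, 3)
```

Feeding the last grant (plus one) back in as the next priority input turns such an encoder into a fair round-robin arbiter.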
20110035751 | Soft Real-Time Load Balancer - The present disclosure is based on a multi-core or multi-processor virtualized environment that comprises both time-sensitive and non-time-sensitive tasks. The present disclosure describes techniques that use a plurality of criteria to choose a processing resource that is to execute tasks. The present disclosure further describes techniques to re-schedule queued tasks from one processing resource to another processing resource, based on a number of criteria. Through load balancing techniques, the present invention both (i) favors the processing of soft real-time tasks arising from media servers and applications, and (ii) prevents “starvation” of the non-real-time general computing applications that co-exist with the media applications in a virtualized environment. These techniques, in the aggregate, favor the processing of soft real-time tasks while also reserving resources for non-real-time tasks. These techniques manage multiple processing resources to balance the competing demands of soft real-time tasks and of non-real-time tasks. | 02-10-2011 |
20110035752 | Dynamic Techniques for Optimizing Soft Real-Time Task Performance in Virtual Machines - Methods are disclosed that dynamically improve soft real-time task performance in virtualized computing environments under the management of an enhanced hypervisor comprising a credit scheduler. The enhanced hypervisor analyzes the on-going performance of the domains of interest and of the virtualized data-processing system. Based on the performance metrics disclosed herein, some of the governing parameters of the credit scheduler are adjusted. Adjustments are typically performed cyclically, wherein the performance metrics of an execution cycle are analyzed and, if need be, adjustments are applied in a later execution cycle. In alternative embodiments, some of the analysis and tuning functions are in a separate application that resides outside the hypervisor. The performance metrics disclosed herein include: a “total-time” metric; a “timeslice” metric; a number of “latency” metrics; and a “count” metric. In contrast to prior art, the present invention enables on-going monitoring of a virtualized data-processing system accompanied by dynamic adjustments based on objective metrics. | 02-10-2011 |
20110041134 | PLUGGABLE COMPONENT INTERFACE - A system, method, and computer program product are provided for initiating an application in communication with a database management system via a bridge. Application memory is allocated to the application from a shared memory space within the database management system. | 02-17-2011 |
20110041135 | DATA PROCESSOR AND DATA PROCESSING METHOD - A data processing method has a device control thread for each peripheral device capable of an independent operation, a CPU processing thread for each data processing that is performed by a CPU, a control thread equipped with a processing part for constructing an application. The control thread checks an output from the thread related with each processing part, performs with a higher priority from the processing part in which output data of the preprocessing part as a configuration of the application exists and that is near termination, and instructs execution of the each device control thread and the CPU processing thread, and data input/output. Each of device control thread and CPU processing thread processes the data according to the instructions, and sends a processing result and a notification to the control thread. | 02-17-2011 |
20110055840 | METHOD FOR MANAGING THE SHARED RESOURCES OF A COMPUTER SYSTEM, A MODULE FOR SUPERVISING THE IMPLEMENTATION OF SAME AND A COMPUTER SYSTEM HAVING ONE SUCH MODULE - The disclosure aims to solve the general problem of managing the system with multiple resources of different types. In particular, the disclosure is intended for the sharing of resources between multiple applications that can be executed on a computer platform, for situations involving the addition of new resources that were not initially provided. In order to achieve these objectives, conflicts are avoided between shared resources starting at the application, with access rights being allocated for each application, while an opening is maintained for the addition of new applications and resources. More specifically, according to this method for managing the resources of a computer system, that are shared between multiple applications, allocation rules are provided during the execution of the applications and the rules generate access rights for each application in relation to each shared resource in the form of successive steps. The steps are controlled for each shared resource by a specific control module and, with each command, a decision criteria module parameterization step checks the rule for allocating access rights, whereby the decision criteria can be shared between at least parts of the control modules. | 03-03-2011 |
20110055841 | ACCESS CONTROL APPARATUS, ACCESS CONTROL PROGRAM, AND ACCESS CONTROL METHOD - When a new program is set to start processing using a resource such as a memory, and the resource has been allocated to another program, which is currently running, an access control apparatus | 03-03-2011 |
20110061056 | PORTABLE DEVICE AND METHOD FOR PROVIDING SHORTCUTS IN THE PORTABLE DEVICE - A method and a portable device provide shortcuts in an operating system of the portable device. The method displays the shortcuts in a user interface of the operating system on a display unit of the portable device when a first process is operating in the portable device. An application menu corresponding to the shortcut is displayed when the shortcut is activated, where the application menu comprises a list of a plurality of applications. The first process is executed as a background process when one of the applications is selected on the user interface as a second process, and the second process is executed as a foreground process. | 03-10-2011 |
20110067032 | METHOD AND SYSTEM FOR RESOURCE MANAGEMENT USING FUZZY LOGIC TIMELINE FILLING - In one or more embodiments, a method and system for scheduling resources is provided. The method includes receiving, in a processor, a plurality of concurrent processing requests. Each concurrent processing request is associated with at least one device configured to perform one or more different tasks at a given time. The at least one device has a predefined processing capacity. If one or more of the plurality of concurrent processing requests exceeds the predefined capacity of the at least one device at the given time, the processor determines a priority score for each concurrent processing request based, at least in part, on a time value associated with each concurrent processing request and whether any one of the concurrent processing requests is currently being processed at the given time. Responsive to the determined priority score at the given time, a highest priority processing request is executed for the at least one device. | 03-17-2011 |
20110072435 | PRIORITY CONTROL APPARATUS AND PRIORITY CONTROL METHOD - A priority control apparatus according to the present invention includes: an OS execution unit which executes first tasks that run on a first OS and second tasks that run on a second OS; a task priority obtainment unit which obtains the priority of an execution task which is a first task being executed by the OS execution unit and the priority of a requested task which is a second task whose execution is being requested to the OS execution unit; and a priority changing unit which, in the case where the priority of the requested task is higher than the priority of the execution task, changes the priorities of the first tasks to be lower than the priority of the requested task and higher than the next lower priority to the requested task among the second tasks, while maintaining the relative order of the priorities among the first tasks. | 03-24-2011 |
20110078691 | STRUCTURED TASK HIERARCHY FOR A PARALLEL RUNTIME - The present invention extends to methods, systems, and computer program products for a structured task hierarchy for a parallel runtime. The parallel execution runtime environment permits flexible spawning and attachment of tasks to one another to form a task hierarchy. Parent tasks can be prevented from completing until any attached child sub-tasks complete. Exceptions can be aggregated in an exception array such that any aggregated exceptions for a task are available when the task completes. A shield mode is provided to prevent tasks from attaching to another task as child tasks. | 03-31-2011 |
20110078692 | COALESCING MEMORY BARRIER OPERATIONS ACROSS MULTIPLE PARALLEL THREADS - One embodiment of the present invention sets forth a technique for coalescing memory barrier operations across multiple parallel threads. Memory barrier requests from a given parallel thread processing unit are coalesced to reduce the impact to the rest of the system. Additionally, memory barrier requests may specify a level of a set of threads with respect to which the memory transactions are committed. For example, a first type of memory barrier instruction may commit the memory transactions to a level of a set of cooperating threads that share an L1 (level one) cache. A second type of memory barrier instruction may commit the memory transactions to a level of a set of threads sharing a global memory. Finally, a third type of memory barrier instruction may commit the memory transactions to a system level of all threads sharing all system memories. The latency required to execute the memory barrier instruction varies based on the type of memory barrier instruction. | 03-31-2011 |
20110078693 | METHOD FOR REDUCING THE WAITING TIME WHEN WORK STEPS ARE EXECUTED FOR THE FIRST TIME - A method and a medical computer system for executing the method are disclosed for reducing the waiting time for at least one user of the computer system when they first execute at least one work step in the computer system. The method includes pre-starting a process which is not yet assigned to a user, and loading the services into the process, which the applications initiated by a user to execute the at least one work step are very likely to call without the user already being assigned to the process. | 03-31-2011 |
20110078694 | CONTROL APPARATUS, CONTROL SYSTEM AND COMPUTER PROGRAM - A system management layer changes a current program with a program (door lock failure diagnosis judgment program, security judgment program, door lock judgment program, keyless entry judgment program or the like) to be executed by an application layer, in accordance with an operation mode of on-vehicle equipment. Priorities of programs are previously stored for each operation mode, and a priority judgment program contributes to judge the priority of operation request based on the operation mode. Thus, plural programs of each hierarchal layer are categorized into groups per operation mode, although complicating in the single hierarchal layer. Therefore, it is possible to prevent the priority judgment processing from becoming complicated for the operation request output by each computer program. | 03-31-2011 |
20110088037 | SINGLE-STACK REAL-TIME OPERATING SYSTEM FOR EMBEDDED SYSTEMS - A real time operating system (RTOS) for embedded controllers having limited memory includes a continuations library, a wide range of macros that hide continuation point management, nested blocking functions, and a communications stack. The RTOS executes at least a first and second task and uses a plurality of task priorities. The tasks share only a single stack. The task scheduler switches control to the highest-priority task. The continuations library provides macros to automatically manage the continuation points. The yield function sets a first continuation point in the first task and yields control to the task scheduler, whereupon the task scheduler switches to the second task and wherein at a later time the task scheduler switches control back to the first task at the first continuation point. The nested blocking function invokes other blocking functions from within its body and yields control to the task scheduler. | 04-14-2011 |
20110093858 | Semi-automated reciprocal scheduling - Schedules which include reciprocal events, such as schedules for youth hockey leagues, can be created using a system in which users can invite one another to schedule games based on information selected through an interface and reciprocal dates which are automatically identified by a suitably programmed computer. Information related to games and schedules can be stored in a database which can be accessed and modified by different users depending on their roles and the permissions associated with those roles. | 04-21-2011 |
20110093859 | MULTIPROCESSOR SYSTEM, MULTIPLE THREADS PROCESSING METHOD AND PROGRAM - Conventionally, when the amount of data to be processed increases for only a part of the threads, the processing efficiency of the whole transaction degrades. A multiprocessor system of the invention includes a plurality of processors executing multiple threads to process data; and a means which, based on the amount of data to be processed by each thread, determines a condition that the order in which the plurality of processors execute the threads should satisfy, and starts executing each thread so that the condition is satisfied. | 04-21-2011 |
20110099552 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR SCHEDULING PROCESSOR ENTITY TASKS IN A MULTIPLE-PROCESSING ENTITY SYSTEM - A system, a computer program product, and a method for scheduling processor entity tasks in a multiple-processing entity system are provided. The method includes: receiving task data structures from multiple processing entities, wherein a task data structure represents a task to be executed by a processing entity; and scheduling an execution of the tasks by a multiple purpose entity. | 04-28-2011 |
20110107342 | PROCESS SCHEDULER EMPLOYING ORDERING FUNCTION TO SCHEDULE THREADS RUNNING IN MULTIPLE ADAPTIVE PARTITIONS - A system is set forth that includes a processor, one or more memory storage units, and software code stored in the one or more memory storage units. The software code is executable by the processor to generate a plurality of adaptive partitions that are each associated with one or more process threads. Each of the plurality of adaptive partitions has one or more corresponding scheduling attributes that are assigned to it. The software code further includes a scheduling system that is executable by the processor for selectively allocating the processor to run the process threads based on a comparison between ordering function values for each adaptive partition. The ordering function value for each adaptive partition is calculated using one or more of the scheduling attributes of the corresponding adaptive partition. The scheduling attributes that may be used to calculate the ordering function value include, for example, 1) the process budget, such as a guaranteed time budget, of the adaptive partition, 2) the critical budget, if any, of the adaptive partition, 3) the rate at which the process threads of an adaptive partition consume processor time, or the like. For each adaptive partition that is associated with a critical thread, a critical ordering function value also may be calculated. The scheduling system may compare the ordering function value with the critical ordering function value of the adaptive partition to determine the proper manner of billing the adaptive partition for the processor allocation used to run its associated critical threads. Methods of implementing various aspects of such a system are also set forth. | 05-05-2011 |
20110113431 | Method and apparatus for scheduling tasks to control hardware devices - In a method of scheduling tasks for controlling hardware devices, a specified task having the execution right in a current time slice is terminated by depriving the execution right therefrom, when a time during which the execution right continues reaches the activation time given to the specified task. An identification process is performed when each reference cycle has been completed or each task has been terminated. In the identification process, i) when there remain time-guaranteed tasks which have not been terminated in the current time slice, a time-guaranteed task whose priority is maximum among the remaining tasks is identified, and ii) when there remain no un-terminated time-guaranteed tasks in the current slice, of remaining non-time-guaranteed tasks which are not terminated yet in the current time slice, a non-time-guaranteed task whose priority is maximum is identified. The execution right is assigned to the identified task through the identification process. | 05-12-2011 |
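The identification step in the entry above has a simple two-tier structure: prefer the highest-priority unterminated time-guaranteed task, and fall back to non-guaranteed tasks only when no guaranteed work remains. A hedged sketch, with field names chosen for illustration:

```python
# Illustrative sketch of the identification process: pick the
# highest-priority unterminated time-guaranteed task; only when none
# remain, pick the highest-priority unterminated non-guaranteed task.

def identify_next(tasks):
    """tasks: dicts with 'priority' (higher = more urgent), 'guaranteed'
    (bool) and 'terminated' (bool). Returns the task that should receive
    the execution right, or None if every task has terminated."""
    remaining = [t for t in tasks if not t["terminated"]]
    guaranteed = [t for t in remaining if t["guaranteed"]]
    pool = guaranteed if guaranteed else remaining
    return max(pool, key=lambda t: t["priority"]) if pool else None

tasks = [
    {"name": "A", "priority": 5, "guaranteed": False, "terminated": False},
    {"name": "B", "priority": 3, "guaranteed": True,  "terminated": False},
    {"name": "C", "priority": 9, "guaranteed": True,  "terminated": True},
]
# "B" wins despite its lower priority: guaranteed tasks take precedence.
print(identify_next(tasks)["name"])
```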
20110113432 | COMPRESSED STORAGE MANAGEMENT - Compressed storage management includes assigning a selection priority and a priority level to multiple data units stored in an uncompressed portion of a storage resource. The management can further include compressing data units and storing the compressed data units in a compressed portion of the storage resource. The data units in the compressed portion are stored in regions, which each store data units having the same selection priority or the same selection priority level. | 05-12-2011 |
20110119674 | SCHEDULING METHOD, SCHEDULING APPARATUS AND MULTIPROCESSOR SYSTEM - A thread status managing unit organizes a plurality of threads into groups and manages the status of the thread groups. A ready queue queues thread groups in a ready state or a running state in the order of priority and, within the same priority level, in the FIFO order. An assignment list generating unit sequentially retrieves the thread groups from the ready queue and appends a retrieved thread group to a thread assignment list only when all threads belonging to the retrieved thread group are assignable to the respective processors at the same time. A thread assigning unit assigns all threads belonging to the thread groups stored in the thread assignment list to the respective processors. | 05-19-2011 |
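The all-or-nothing group admission described above can be sketched compactly: walk the ready queue in priority/FIFO order and admit a group only if every one of its threads can be placed on a free processor simultaneously. Names and data shapes below are illustrative assumptions.

```python
# Sketch of gang-style group assignment: a thread group joins the
# assignment list only if all of its threads fit on free processors
# at the same time; otherwise it is skipped for this pass.

from collections import deque

def build_assignment_list(ready_queue, num_processors):
    """ready_queue: deque of (priority, [thread, ...]) groups already
    ordered by priority and FIFO. Returns the groups assigned this pass."""
    free = num_processors
    assigned = []
    for prio, threads in list(ready_queue):
        if len(threads) <= free:       # every thread assignable simultaneously?
            assigned.append((prio, threads))
            free -= len(threads)
    return assigned

rq = deque([(1, ["a1", "a2"]), (2, ["b1", "b2", "b3"]), (3, ["c1"])])
# With 3 processors: group a (2 threads) fits, group b (3 threads) does
# not fit in the remaining slot, group c (1 thread) does.
print(build_assignment_list(rq, 3))
```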
20110126204 | SCALABLE THREAD LOCKING WITH CUSTOMIZABLE SPINNING - Embodiments described herein are directed to dynamically controlling the number of spins for a selected processing thread among a plurality of processing threads. A computer system tracks both the number of waiting processing threads and each thread's turn, wherein a selected thread's turn comprises the total number of waiting processing threads after the selected thread's arrival at the processor. Next, the computer system determines, based on the selected thread's turn, the number of spins that are to occur before the selected thread checks for an available thread lock. The computer system also, based on the selected thread's turn, changes the number of spins, such that the number of spins for the selected thread is a function of the number of waiting processing threads and processors in the computer system. | 05-26-2011 |
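The policy above makes a waiter's spin count a function of its turn and the processor count. A minimal sketch of one such function; the scaling constants are illustrative assumptions, not values from the patent:

```python
# Sketch of a turn-based spin policy: later arrivals (larger turn) spin
# longer between lock checks, scaled by how many waiters share each CPU,
# which spreads contention on the lock word across the waiters.

def spins_before_check(turn, num_processors, base=100):
    """Return how many spins a waiter performs before re-checking the lock.
    turn: number of threads that were already waiting when this one arrived."""
    waiters_per_cpu = max(1, turn // num_processors)
    return base * turn * waiters_per_cpu

assert spins_before_check(1, 4) == 100    # first waiter checks often
assert spins_before_check(8, 4) == 1600   # eighth waiter backs off more
```

Any monotone function of the turn gives the same qualitative behavior; the point is that the check frequency adapts to observed contention rather than being fixed.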
20110126205 | SYSTEM AND A METHOD FOR PROCESSING SYSTEM CALLS IN A COMPUTERIZED SYSTEM THAT IMPLEMENTS A KERNEL - A computer implementing a kernel, the computer including: (a) a processor that is configured to run processes in kernel mode and to run other processes not in kernel mode, wherein the processor is configured to run in the kernel mode the following processes: (i) selecting a rule out of a group of rules that is stored in a kernel memory of the computer, in response to system call information that pertains to a system call made to a kernel entity of the kernel; (ii) assigning a priority to the system call in response to the rule selected; and (iii) selectively enabling transmission of the system call to a hardware device of the computerized system, in response to the priority assigned to the system call; (b) a memory that includes the kernel memory; and (c) the hardware device that is configured to execute the system call, wherein execution of the system call by the hardware device results in modifying a state of the hardware device. | 05-26-2011 |
20110126206 | OPERATIONS MANAGEMENT APPARATUS OF INFORMATION-PROCESSING SYSTEM - Information processing equipment and power/cooling facilities are managed together for power savings without degrading system processing performance. An operations management apparatus | 05-26-2011 |
20110154345 | Multicore Processor Including Two or More Collision Domain Networks - Implementations and techniques for multicore processors having a domain interconnection network configured to associate a first collision domain network with a second collision domain network in communication are generally disclosed. | 06-23-2011 |
20110154346 | TASK SCHEDULER FOR COOPERATIVE TASKS AND THREADS FOR MULTIPROCESSORS AND MULTICORE SYSTEMS - In a computer system with a multi-core processor, the execution of tasks is scheduled in that a first queue for new tasks and a second queue for suspended tasks are related to a first core, and a third queue for new tasks and a fourth queue for suspended tasks are related to a second core. The tasks have instructions, the new tasks are tasks where none of the instructions have been executed by any of the cores, and the suspended tasks are tasks where at least one of the instructions has been executed by any of the cores. New tasks are popped from the first queue to the first core; and if the first queue is empty, tasks are popped to the first core in the following preferential order: suspended tasks from the second queue, new tasks from the third queue, and suspended tasks from the fourth queue. | 06-23-2011 |
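The per-core queue discipline above reduces to a fixed preference order over four queues. A hedged sketch, with the helper and queue names being illustrative assumptions:

```python
# Sketch of the preferential pop order for one core: its own new-task
# queue first, then its own suspended tasks, then work taken from the
# other core's new-task queue, and finally the other core's suspended
# tasks.

from collections import deque

def next_task(own_new, own_suspended, other_new, other_suspended):
    """Pop the next task for a core, preferring its own queues and
    preferring new work before stealing from the other core."""
    for q in (own_new, own_suspended, other_new, other_suspended):
        if q:
            return q.popleft()
    return None  # nothing runnable anywhere

q1_new, q1_susp = deque(["n1"]), deque(["s1"])
q2_new, q2_susp = deque(), deque(["s2"])
order = [next_task(q1_new, q1_susp, q2_new, q2_susp) for _ in range(4)]
print(order)  # ['n1', 's1', 's2', None]
```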
20110154347 | Interrupt and Exception Handling for Multi-Streaming Digital Processors - A multi-streaming processor has a plurality of streams for streaming one or more instruction threads, a set of functional resources for processing instructions from streams, and interrupt handler logic. The logic detects and maps interrupts and exceptions to one or more specific streams. In some embodiments, one interrupt or exception may be mapped to two or more streams, and in others two or more interrupts or exceptions may be mapped to one stream. Mapping may be static and determined at processor design, programmable, with data stored and amendable, or conditional and dynamic, the interrupt logic executing an algorithm sensitive to variables to determine the mapping. Interrupts may be external interrupts generated by devices external to the processor, software (internal) interrupts generated by active streams, or conditional interrupts based on variables. After interrupts are acknowledged, streams to which interrupts or exceptions are mapped are vectored to appropriate service routines. | 06-23-2011 |
20110161969 | Consolidating CPU - Cache - Memory Access Usage Metrics - A computer system is provided with a processing chip having one or more processor cores, with the processing chip in communication with an operating system having kernel space and user space. Each processor core has multiple core threads to share resources of the core, with each thread managed by the operating system to function as an independent logical processor within the core. A logical extended map of the processor core is created and supported, with the map including each of the core threads indicating usage of the operating system, including user space and kernel space, and cache, memory, and non-memory. An operating system scheduling manager is provided to schedule a routine on the processor core by allocating the routine to different core threads based upon thread availability as demonstrated in the map, and thread priority. | 06-30-2011 |
20110161970 | METHOD TO REDUCE QUEUE SYNCHRONIZATION OF MULTIPLE WORK ITEMS IN A SYSTEM WITH HIGH MEMORY LATENCY BETWEEN COMPUTE NODES - Disclosed are a method, a system and a computer program product of operating a data processing system that can include or be coupled to multiple processor cores. The multiple processor cores can be coupled to a memory that can include multiple priority queues associated with multiple respective priorities and store multiple work items. Work items stored in the multiple priority queues can be associated with a bit mask which is associated with a respective priority queue and can be routed to respective groups of one or more processors based on the associated bit mask. In one or more embodiments, at least two groups of processor cores can include at least one processor core that is common to both of the at least two groups of processor cores. | 06-30-2011 |
20110161971 | Method and Data Processing Device for Processing Requests - Disclosed are a data processing device, a method and a computer program product for processing requests in the data processing device. The data processing device includes at least one processor and at least one memory. The at least one memory includes a set of data including information for processing requests received from at least one client and computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the data processing device at least to perform: notify, prior to processing a request, a client making the request to optionally update data associated with the request; and process the request based on the updated data, if the data is updated by the client. | 06-30-2011 |
20110167427 | COMPUTING SYSTEM, METHOD AND COMPUTER-READABLE MEDIUM PREVENTING STARVATION - A computing system, method and computer-readable medium are provided. To prevent a starvation phenomenon from occurring in priority-based task scheduling, a plurality of tasks may be divided into a priority-based group and other groups. The groups to which the tasks belong may be changed. | 07-07-2011 |
20110173625 | Wake-and-Go Mechanism with Prioritization of Threads - A hardware private array is a thread state storage that is embedded within the processor or within logic associated with a bus or wake-and-go logic. The hardware private array and/or wake-and-go array may have a limited storage area. Therefore, each thread may have an associated priority. If there is insufficient space in the hardware private array, then the wake-and-go mechanism may compare the priority of the thread to the priorities of the threads already stored in the hardware private array and wake-and-go array. If the thread has a higher priority than at least one thread already stored in the hardware private array and wake-and-go array, then the wake-and-go mechanism may remove a lowest priority thread, meaning the thread is removed from hardware private array and wake-and-go array and converted to a flee model. | 07-14-2011 |
20110173626 | EFFICIENT MAINTENANCE OF JOB PRIORITIZATION FOR PROFIT MAXIMIZATION IN CLOUD SERVICE DELIVERY INFRASTRUCTURES - Systems and methods are disclosed for efficient maintenance of job prioritization for profit maximization in cloud-based service delivery infrastructures with multi-step cost structure support by breaking multiple steps in the SLA of a job into corresponding cost steps; generating a segmented cost function for each cost step; creating a cost-based-scheduling (CBS)-priority value associated with a validity period for each segment based on the segmented cost function; and choosing the job with the highest CBS priority value. | 07-14-2011 |
20110173627 | INFORMATION-PROCESSING DEVICE AND PROGRAM - When executing plural application programs in parallel, a control unit assigns a small storage area to each application program so that a part of a function implemented by execution of each application program is provided. When providing a service of high value to a user, a control unit assigns a large storage area to any one of the application programs so that a full function that is implemented by execution of the application program is provided. | 07-14-2011 |
20110185362 | System and method for integrating software schedulers and hardware interrupts for a deterministic system - The problem addressed by this invention is the lack of determinism in mass market operating systems. This invention provides a mechanism for mass market operating systems running on mass market hardware to be extended to create a truly deterministic, responsive environment. | 07-28-2011 |
20110185363 | TASK SWITCHING APPARATUS, METHOD AND PROGRAM - A method of assigning task management blocks for first type tasks to time slot information on a one-by-one basis, assigning a plurality of task management blocks for second type tasks to time slot information, selecting a task management block according to a priority classification when switching to the time slot of the time slot information, and switching to the time slot except the time slot information. Additionally, a task switching apparatus selects the task management block assigned to the time slot and executes the task. | 07-28-2011 |
20110202924 | Asynchronous Task Execution - Techniques for asynchronous task execution are described. In an implementation, tasks may be initiated and executed asynchronously, thereby allowing a plurality of calls to be made in parallel. Each task may be associated with a respective timeout that triggers an end to execution of the task. If a timeout for a low priority task expires without completing both the low priority task and a relatively higher priority task, then the low priority task may use the relatively higher priority task to extend execution time of the low priority task in order to allow additional time to perform the low priority task. | 08-18-2011 |
20110209154 | THREAD SPECULATIVE EXECUTION AND ASYNCHRONOUS CONFLICT EVENTS - In an embodiment, asynchronous conflict events are received during a previous rollback period. Each of the asynchronous conflict events represent conflicts encountered by speculative execution of a first plurality of work units and may be received out-of-order. During a current rollback period, a first work unit is determined whose speculative execution raised one of the asynchronous conflict events, and the first work unit is older than all other of the first plurality of work units. A second plurality of work units are determined, whose ages are equal to or older than the first work unit, wherein each of the second plurality of work units are assigned to respective executing threads. Rollbacks of the second plurality of work units are performed. After the rollbacks of the second plurality of work units are performed, speculative executions of the second plurality of work units are initiated in age order, from oldest to youngest. | 08-25-2011 |
20110209155 | SPECULATIVE THREAD EXECUTION WITH HARDWARE TRANSACTIONAL MEMORY - In an embodiment, if a self thread has more than one conflict, a transaction of the self thread is aborted and restarted. If the self thread has only one conflict and an enemy thread of the self thread has more than one conflict, the transaction of the self thread is committed. If the self thread only conflicts with the enemy thread and the enemy thread only conflicts with the self thread and the self thread has a key that has a higher priority than a key of the enemy thread, the transaction of the self thread is committed. If the self thread only conflicts with the enemy thread, the enemy thread only conflicts with the self thread, and the self thread has a key that has a lower priority than the key of the enemy thread, the transaction of the self thread is aborted. | 08-25-2011 |
20110219380 | MARSHALING RESULTS OF NESTED TASKS - The present invention extends to methods, systems, and computer program products for marshaling results of nested tasks. Unwrap methods are used to reduce the level of task nesting and ensure that appropriate results are marshaled between tasks. A proxy task is used to represent the aggregate asynchronous operation of a wrapping task and a wrapped task. The proxy task has a completion state that is at least indicative of the completion state of the aggregate asynchronous operation. The completion state of the aggregate asynchronous operation is determined and set from one or more of the completion state of the wrapping task and the wrapped task. The completion state of the proxy task can be conveyed to calling logic to indicate the completion state of the aggregate asynchronous operation to the calling logic. | 09-08-2011 |
20110225590 | SYSTEM AND METHOD OF EXECUTING THREADS AT A PROCESSOR - A method and system for executing a plurality of threads are described. The method may include mapping a thread specified priority value associated with a dormant thread to a thread quantized priority value associated with the dormant thread if the dormant thread becomes ready to run. The method may further include adding the dormant thread to a ready to run queue and updating the thread quantized priority value. A thread quantum value associated with the dormant thread may also be updated, or the quantum value and the quantized priority value may both be updated. | 09-15-2011 |
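The mapping step in the entry above collapses a wide, application-specified priority range into a small number of scheduler levels. One plausible quantization, with the range and level count being illustrative assumptions:

```python
# Sketch of mapping a specified priority to a quantized priority: a
# [0, 255] application range is bucketed into 8 scheduler levels, so
# many distinct specified values share one quantized level.

def quantize_priority(specified, spec_max=255, levels=8):
    """Map a specified priority in [0, spec_max] to one of `levels` buckets."""
    if not 0 <= specified <= spec_max:
        raise ValueError("specified priority out of range")
    return specified * levels // (spec_max + 1)

assert quantize_priority(0) == 0      # lowest specified -> lowest level
assert quantize_priority(128) == 4    # midpoint lands mid-range
assert quantize_priority(255) == 7    # highest specified -> top level
```

The coarser quantized scale keeps the ready-to-run queue cheap to index while still preserving the ordering of the specified priorities.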
20110225591 | HYPERVISOR, COMPUTER SYSTEM, AND VIRTUAL PROCESSOR SCHEDULING METHOD - A hypervisor calculates the total number of processor cycles (the number of processor cycles of one or more physical processors) in a first length of time based on the sum of the operating frequencies of the respective physical processors and the first length of time for each first length of time (for example, a scheduling initialization cycle T | 09-15-2011 |
20110231853 | METHOD AND APPARATUS FOR MANAGING REALLOCATION OF SYSTEM RESOURCES - A capability is provided for reallocating, to a first borrower that is requesting resources, resources presently allocated to a second borrower. A method for allocating a resource of a system includes receiving a request for a system resource allocation from a first borrower, determining a request priority of the first borrower based on a present resource allocation associated with the first borrower, determining a hold priority of a second borrower based on a present resource allocation associated with the second borrower, and determining, using the first borrower request priority and the second borrower hold priority, whether to reallocate any of the second borrower resource allocation to the first borrower. | 09-22-2011 |
20110231854 | Method and Infrastructure for Optimizing the Utilization of Computer System's Resources - The present invention optimizes the utilization of computer system resources by considering predefined performance targets of multithreaded applications using the resources. The performance and utilization information for a set of multithreaded applications is provided. Using the performance and utilization information, the invention determines overutilized resources. Using the performance information, the invention also identifies threads and corresponding applications using an overutilized resource. The priority of the identified threads using said overutilized resource is adjusted to maximize the number of applications meeting their performance targets. The adjustments of priorities are executed via a channel that provides the performance and utilization information. | 09-22-2011 |
20110231855 | APPARATUS AND METHOD FOR CONTROLLING PRIORITY - A priority control apparatus includes a job operation information storage unit that stores, as job operation information on a per job operation basis for a plurality of job operations, a process and an object used by the process, with the process mapped to the object, each job operation being executed by a plurality of processes; a delay determiner that determines a first job operation that is delayed from among the plurality of job operations; and a priority controller that identifies a second job operation sharing an object used in the first job operation by referencing the job operation information storage unit, identifies a process, using an object not used in the first job operation, from among the processes executing the identified second job operation, and lowers the priority at which the identified process is to be executed. | 09-22-2011 |
20110231856 | System and method for dynamically managing tasks for data parallel processing on multi-core system - A dynamic task management system and method for data parallel processing on a multi-core system are provided. The dynamic task management system may generate a registration signal for a task to be parallel processed, may generate a dynamic management signal used to dynamically manage at least one task, in response to the generated registration signal, and may control the at least one task to be created or cancelled in at least one core in response to the generated dynamic management signal. | 09-22-2011 |
20110239219 | PROTECTING SHARED RESOURCES USING SHARED MEMORY AND SOCKETS - Shared memory and sockets are used to protect shared resources in an environment where multiple operating systems execute concurrently on the same hardware. Rather than using spinlocks for serializing access to the shared resources, when a thread is unable to acquire a shared resource because that resource is already held by another thread, the thread that was unable to acquire the resource creates a socket with which it will wait to be notified that the shared resource has been released. The sockets may be network sockets or in-memory sockets that are accessible across the multiple operating systems; if sockets are not available in a particular implementation, communication technology that provides analogous services between operating systems may be used instead. In an optional aspect, fault tolerance is provided to address socket failures, in which case one or more threads may fall back (at least temporarily) to using spinlocks. As another option, a locking service may execute on each operating system to provide a programming interface through which threads can invoke operations for holding and releasing the lock. | 09-29-2011 |
20110239220 | FINE GRAIN PERFORMANCE RESOURCE MANAGEMENT OF COMPUTER SYSTEMS - Execution of a plurality of tasks by a processor system are monitored. Based on this monitoring, tasks requiring adjustment of performance resources are identified by calculating at least one of a progress error or a progress limit error for each task. Thereafter, performance resources of the processor system allocated to each identified task are adjusted. Such adjustment can comprise: adjusting a clock rate of at least one processor in the processor system executing the task, adjusting an amount of cache and/or buffers to be utilized by the task, and/or adjusting an amount of input/output (I/O) bandwidth to be utilized by the task. Related systems, apparatus, methods and articles are also described. | 09-29-2011 |
20110239221 | Method and Apparatus for Assigning Thread Priority in a Processor or the Like - In a multi-threaded processor, thread priority variables are set up in memory. The actual assignment of thread priority is based on the expiration of a thread precedence counter. To further augment the effectiveness of the thread precedence counters, starting counters are associated with each thread that serve as a multiplier for the value to be used in the thread precedence counter. The values in the starting counters are manipulated so as to prevent one thread from getting undue priority to the resources of the multi-threaded processor. | 09-29-2011 |
20110246995 | CACHE-AWARE THREAD SCHEDULING IN MULTI-THREADED SYSTEMS - The disclosed embodiments provide a system that facilitates scheduling threads in a multi-threaded processor with multiple processor cores. During operation, the system executes a first thread in a processor core that is associated with a shared cache. During this execution, the system measures one or more metrics to characterize the first thread. Then, the system uses the characterization of the first thread and a characterization for a second thread to predict a performance impact that would occur if the second thread were to simultaneously execute in a second processor core that is also associated with the cache. If the predicted performance impact indicates that executing the second thread on the second processor core will improve performance for the multi-threaded processor, the system executes the second thread on the second processor core. | 10-06-2011 |
20110246996 | DYNAMIC PRIORITY QUEUING - Techniques are provided for dynamically re-ordering operation requests that have previously been submitted to a queue management unit. After the queue management unit has placed multiple requests in a queue to be executed in an order that is based on priorities that were assigned to the operations, the entity that requested the operations (the “requester”) sends one or more priority-change messages. The one or more priority-change messages include requests to perform operations that have already been queued. For at least one of the operations, the priority assigned to the operation in the subsequent request is different from the priority that was assigned to the same operation when that operation was initially queued for execution. Based on the change in priority, the operation whose priority has changed is placed at a different location in the queue, relative to the other operations in the queue that were requested by the same requester. | 10-06-2011 |
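The re-ordering described above can be sketched with a standard lazy-deletion priority queue: a priority-change message simply enqueues a superseding entry for the same operation, and stale entries are skipped at pop time. The class and method names are illustrative assumptions.

```python
# Sketch of dynamic priority queuing: a priority-change message after
# submission moves an already-queued operation relative to the
# requester's other pending operations (lazy deletion over a heap).

import heapq
import itertools

class DynamicQueue:
    def __init__(self):
        self._heap = []                  # (priority, seq, op_id); lower runs first
        self._entry = {}                 # op_id -> the live heap entry
        self._seq = itertools.count()    # FIFO tie-break within a priority

    def submit(self, op_id, priority):
        entry = (priority, next(self._seq), op_id)
        self._entry[op_id] = entry
        heapq.heappush(self._heap, entry)

    def change_priority(self, op_id, priority):
        self.submit(op_id, priority)     # newer entry supersedes the old one

    def pop(self):
        while self._heap:
            entry = heapq.heappop(self._heap)
            if self._entry.get(entry[2]) is entry:   # skip superseded entries
                del self._entry[entry[2]]
                return entry[2]
        return None

q = DynamicQueue()
q.submit("read", 5)
q.submit("write", 3)
q.change_priority("read", 1)   # priority-change message: now ahead of "write"
print(q.pop(), q.pop())        # read write
```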
20110246997 | ROUTING AND DELIVERY OF DATA FOR ELECTRONIC DESIGN AUTOMATION WORKLOADS IN GEOGRAPHICALLY DISTRIBUTED CLOUDS - Electronic design automation (EDA) libraries are delivered using a geographically distributed private cloud including EDA design centers and EDA library stores. EDA projects associated with an EDA library are determined by matching information describing the EDA library with information describing the projects. A set of design centers hosting the projects is determined. A data delivery model is determined for transmitting the EDA library to the design centers. The EDA library is scheduled for delivery to the design centers based on a deadline associated with a project stage that requires the EDA library. Network links with specialized hardware for transmitting data are determined in the private cloud by measuring their deterioration in performance on increase of data transmission load. These links are used for delivering EDA libraries expected to be used urgently for a stage of an EDA project. | 10-06-2011 |
20110246998 | METHOD FOR REORGANIZING TASKS FOR OPTIMIZATION OF RESOURCES - A method of reorganizing a plurality of tasks for optimization of resources and execution time in an environment is described. In one embodiment, the method includes mapping each task to obtain a qualitative and quantitative assessment of each functional element and variable within the time frame for execution of each task, representing the data obtained from the mapping as a matrix of dimensions N×N, wherein N represents the total number of tasks, and reorganizing the tasks for execution in accordance with the data represented in the matrix, wherein reorganizing the tasks provides for both static and dynamic methodologies. Advantageously, the present invention determines the optimal number of resources required to achieve a practical overall task completion time and can be adapted to non-computer applications. | 10-06-2011 |
20110246999 | METHOD AND APPARATUS FOR ASSIGNING CANDIDATE PROCESSING NODES IN A STREAM-ORIENTED COMPUTER SYSTEM - A method of choosing jobs to run in a stream based distributed computer system includes determining jobs to be run in a distributed stream-oriented system by deciding a priority threshold above which jobs will be accepted and below which jobs will be rejected. Overall importance is maximized relative to the priority threshold based on importance values assigned to all jobs. System constraints are applied to ensure jobs meet set criteria. | 10-06-2011 |
20110252428 | Virtual Queue Processing Circuit and Task Processor - A queue control circuit controls the placement and retrieval of a plurality of tasks in a plurality of types of virtual queues. State registers are associated with respective tasks. Each of the state registers stores a task priority order, a queue ID of a virtual queue, and the order of placement in the virtual queue. Upon receipt of a normal placement command ENQ_TL, the queue control circuit establishes, in the state register for the placed task, QID of the virtual queue as the destination of placement and an order value indicating the end of the queue. When a reverse placement command ENQ_TP is received, QID of the destination virtual queue and an order value indicating the start of the queue are established. When a retrieval command DEQ is received, QID is cleared in the destination virtual queue. | 10-13-2011 |
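The state-register model in the entry above stores, per task, a queue ID and an order value; normal placement (ENQ_TL) appends at the tail, reverse placement (ENQ_TP) inserts at the head, and retrieval (DEQ) clears the task's QID. A software sketch under those assumptions (the hardware circuit would update all state registers in parallel rather than scanning them):

```python
# Sketch of virtual-queue control via per-task state registers: each
# task's register holds [queue_id, order]. ENQ_TL appends at the tail,
# ENQ_TP jumps to the head, DEQ retrieves the task with the smallest
# order value in the requested virtual queue and clears its QID.

class QueueControl:
    def __init__(self):
        self.reg = {}  # task -> [qid, order], the per-task state register

    def enq_tl(self, task, qid):   # normal placement: end of queue
        orders = [o for q, o in self.reg.values() if q == qid]
        self.reg[task] = [qid, (max(orders) + 1) if orders else 0]

    def enq_tp(self, task, qid):   # reverse placement: start of queue
        orders = [o for q, o in self.reg.values() if q == qid]
        self.reg[task] = [qid, (min(orders) - 1) if orders else 0]

    def deq(self, qid):            # retrieve the task at the start of the queue
        in_q = [(o, t) for t, (q, o) in self.reg.items() if q == qid]
        if not in_q:
            return None
        _, task = min(in_q)
        del self.reg[task]         # QID cleared: task leaves the virtual queue
        return task

qc = QueueControl()
qc.enq_tl("t1", 1)
qc.enq_tl("t2", 1)
qc.enq_tp("t3", 1)                       # jumps to the head of virtual queue 1
print(qc.deq(1), qc.deq(1), qc.deq(1))   # t3 t1 t2
```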
20110252429 | Opportunistic Multitasking - Services for a personal electronic device are provided through which a form of background processing or multitasking is supported. The disclosed services permit user applications to take advantage of background processing without significant negative consequences to a user's experience of the foreground process or the personal electronic device's power resources. To effect the disclosed multitasking, one or more of a number of operational restrictions may be enforced. By way of example, thread priority levels may be overlapped between the foreground and background states. In addition, system resource availability may be restricted based on whether a process is receiving user input. In some instances, an application may be suspended rather than being placed into the background state. Implementation of the disclosed services may be substantially transparent to the executing user applications and, in some cases, may be performed without the user application's explicit cooperation. | 10-13-2011 |
20110258632 | Dynamically Migrating Channels - In one embodiment, the present invention includes a method of determining a relative priority between a first agent and a second agent, and assigning the first agent to a first channel and the second agent to a second channel according to the relative priority. Depending on the currently programmed status of the channels, information stored in at least one of the channels may be dynamically migrated to another channel based on the assignments. Other embodiments are described and claimed. | 10-20-2011 |
20110265089 | Executing Processes Using A Profile - A management entity for managing the execution priority of processes in a computing system, the management entity being configured to, in response to activation of a pre-stored profile defining execution priorities for each of a plurality of processes, cause those processes to be executed by the computing system in accordance with the respective priorities defined in the active profile. | 10-27-2011 |
20110265090 | MULTIPLE CORE DATA PROCESSOR WITH USAGE MONITORING - A data processor with a plurality of processor cores. Accumulated usage information of each of the plurality of processor cores is stored in a storage device within the data processor, wherein the accumulated usage information is indicative of accumulated usage of each processor core of the plurality of processor cores. Accumulated usage information for a core of the plurality of processor cores is updated in response to a determined use of the core. | 10-27-2011 |
20110265091 | SYSTEM AND METHOD FOR NORMALIZING JOB PROPERTIES - This disclosure provides a system and method for normalizing job properties. In one embodiment, a job manager is operable to identify a property of a job, with the job being associated with an operating environment. The job manager is further operable to normalize the property of the job and present the normalized property of the job to a user. | 10-27-2011 |
20110276972 | MEMORY-CONTROLLER-PARALLELISM-AWARE SCHEDULING FOR MULTIPLE MEMORY CONTROLLERS - Some embodiments of a processing system implement a memory-controller-parallelism-aware scheduling technique. In at least one embodiment of the invention, a method of operating a processing system includes scheduling a memory request requested by a thread of a plurality of threads executing on at least one processor according to thread priority information associated with the plurality of threads. The thread priority information is based on a maximum of a plurality of local memory bandwidth usage indicators for each thread of the plurality of threads. Each of the plurality of local memory bandwidth usage indicators for each thread corresponds to a respective memory controller of a plurality of memory controllers. | 11-10-2011 |
20110276973 | METHOD AND APPARATUS FOR SCHEDULING FOR MULTIPLE MEMORY CONTROLLERS - In at least one embodiment, a method includes locally scheduling a memory request requested by a thread of a plurality of threads executing on at least one processor. The memory request is locally scheduled according to a quality-of-service priority of the thread. The quality-of-service priority of the thread is based on a quality of service indicator for the thread and system-wide memory bandwidth usage information for the thread. In at least one embodiment, the method includes determining the system-wide memory bandwidth usage information for the thread based on local memory bandwidth usage information associated with the thread periodically collected from a plurality of memory controllers during a timeframe. In at least one embodiment, the method includes at each mini-timeframe of the timeframe accumulating the system-wide memory bandwidth usage information for the thread and updating the quality-of-service priority based on the accumulated system-wide memory bandwidth usage information for the thread. | 11-10-2011 |
20110276974 | SCHEDULING FOR MULTIPLE MEMORY CONTROLLERS - Some embodiments of a multi processor system implement a virtual-time-based quality-of-service scheduling technique. In at least one embodiment of the invention, a method includes scheduling a memory request to a memory from a memory request queue in response to expiration of a virtual finish time of the memory request. The virtual finish time is based on a share of system memory bandwidth associated with the memory request. The method includes scheduling the memory request to the memory from the memory request queue before the expiration of the virtual finish time of the memory request if a virtual finish time of each other memory request in the memory request queue has not expired and based on at least one other scheduling rule. | 11-10-2011 |
20110276975 | AUDIO DEVICE - An audio device is provided that is arranged for communication of data and signalling with a controller, signalling from the device to the controller being made in discrete time slots, the device comprising: a plurality of nodes, each assigned a priority value and each having one or more unsolicited response sources capable of generating an unsolicited response for transmission to the controller, wherein unsolicited responses generated from a particular node are assigned the priority value of that node; and unsolicited response management means operable to hold unsolicited responses generated by the plurality of nodes that are awaiting transmission to the controller, wherein when two or more unsolicited responses are awaiting transmission to the controller in the unsolicited response management means, the device is arranged to transmit the unsolicited response with the highest assigned priority value first, in the next free time slot. | 11-10-2011 |
20110276976 | EXECUTION ORDER DECISION DEVICE - An execution sequence decision device is capable of efficiently and appropriately determining the execution sequence of processing modules even in a case where the modules have a closed circuit in their input/output dependencies. A dependence evaluation sub-unit and an anti-dependence evaluation sub-unit evaluate the dependence and anti-dependence of each processing module in a processing module group. A priority evaluation sub-unit determines the priority of each processing module in the processing module group based on the dependence and anti-dependence. An execution order allocation sub-unit allocates the top of the execution sequence to the one processing module that has the highest priority obtained by the priority evaluation sub-unit. An execution sequence allocation unit causes the respective sub-units to repeatedly execute the above-mentioned process every time the position of one processing module in the execution sequence is determined, and thereby sequentially allocates execution-sequence positions to the respective processing modules. | 11-10-2011 |
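The repeated allocate-and-re-evaluate loop described in the abstract above can be sketched as follows. This is a minimal Python illustration under assumed semantics: a module's priority here is simply the number of remaining modules that consume its output (dependence) minus the number of remaining modules it consumes from (anti-dependence). The scoring rule is a stand-in for the patent's evaluation sub-units, not the actual method.

```python
# Decide an execution sequence by repeatedly granting the next slot to the
# highest-priority remaining module, re-evaluating priorities each round.
def decide_order(modules, deps):
    """deps maps each module to the set of modules whose output it consumes."""
    order = []
    remaining = set(modules)
    while remaining:
        def priority(m):
            consumers = sum(1 for n in remaining if m in deps.get(n, set()))
            producers = len(deps.get(m, set()) & remaining)
            return consumers - producers  # dependence minus anti-dependence
        # sorting first makes tie-breaking deterministic
        best = max(sorted(remaining), key=priority)
        order.append(best)
        remaining.remove(best)
    return order

# A closed dependency circuit (A <-> B) is still resolved to some order,
# which is the property the abstract emphasizes.
order = decide_order(["A", "B", "C"], {"A": {"B"}, "B": {"A"}, "C": {"A"}})
```

Unlike a plain topological sort, this loop never deadlocks on a cycle: some module always has a maximal score, so cyclic module groups still receive a total order.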
20110283286 | Methods and systems for dynamically adjusting performance states of a processor - A method for dynamically adjusting performance states of a processor includes executing a workload associated with a workload mode and determining a primary thread among all processor threads executing the workload. The method also includes calculating and setting a performance state (P state) of the processor based on the workload mode. | 11-17-2011 |
20110283287 | METHOD FOR ALLOCATING PRIORITY TO RESOURCE AND METHOD AND APPARATUS FOR OPERATING RESOURCE USING THE SAME - Disclosed are a method for allocating priority to resources, and a method and apparatus for operating resources using the same. The method for allocating priority to resources includes: selecting a resource block including at least one unit; determining a priority level of the selected resource block by reflecting a retrieval rate (or recovery rate) including a retrieval frequency and a retrieval period of the selected resource block; and allotting the determined priority level to the selected resource block. | 11-17-2011 |
20110283288 | PROCESSOR AND PROGRAM EXECUTION METHOD CAPABLE OF EFFICIENT PROGRAM EXECUTION - A processor for sequentially executing a plurality of programs using a plurality of register value groups stored in a memory that correspond one-to-one with the programs. The processor includes a plurality of register groups; a select/switch unit operable to select one of the plurality of register groups as an execution target register group on which a program execution is based, and to switch the selection target every time a first predetermined period elapses; a restoring unit operable to restore, every time the switching is performed, one of the register value groups into one of the register groups that is not selected as the execution target register group; a saving unit operable to save, prior to the restoring, register values in the register group targeted for restoring, by overwriting a register value group in the memory that corresponds to the register values; and a program execution unit operable to execute, every time the switching is performed, a program corresponding to a register value group in the execution target register group. | 11-17-2011 |
20110302586 | MULTITHREAD APPLICATION-AWARE MEMORY SCHEDULING SCHEME FOR MULTI-CORE PROCESSORS - A device may include a memory controller that identifies a multithread application, and adjusts a memory scheduling scheme for the multithread application based on the identification of the multithread application. | 12-08-2011 |
20110302587 | INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD - A system-level management unit generates a system processing and makes a processing request to a task allocation unit of a user-level management unit. The task allocation unit schedules the system processing according to a procedure of an introduced user-level scheduling. A processing unit assigned to execute the system processing sends a notification of acceptability of the system processing to a main processing unit, by halting an application task at an appropriate time or when the processing of the current task is completed. When the notification is received within the time limit for execution, the system-level management unit has the processing unit start the system processing. | 12-08-2011 |
20110302588 | Assigning Priorities to Threads of Execution - Systems and processes may be implemented to receive threads of execution and assign priorities to the threads of execution. Threads of execution may include nonvolatile memory input/output threads, other input/output threads, and/or other non-input/output threads. A lower priority may be assigned to nonvolatile memory input/output threads than other input/output threads. An algorithm may determine an order of execution of the threads of execution. An order of execution may be at least partially based on assigned priorities. | 12-08-2011 |
20110314475 | RESOURCE ACCESS CONTROL - Various embodiments can control access to a computing resource (e.g., a memory resource) by detecting that a high priority activity is accessing the resource and preventing a lower priority activity from accessing the resource. The lower priority activity can be allowed access to the resource after the high priority activity is finished accessing the resource. Various embodiments enable memory operations to be mapped to account for changes in data ordering that can occur when a lower priority activity is suppressed. For example, when an activity requests that data be written to a logical memory region, a mapping is created that maps the logical memory region to a physical memory region. The data can then be written to the physical memory region. | 12-22-2011 |
20110314476 | BROADCAST RECEIVING APPARATUS AND SCHEDULING METHOD THEREOF - A broadcast receiving apparatus and scheduling method thereof are provided. The broadcast receiving apparatus includes: a communication interface which performs an input-output operation of the broadcast receiving apparatus in response to a request for an input-output event from at least one of a plurality of operating systems; and a controller which processes the requested input-output event according to a priority given to the operating system that requested the input-output event. | 12-22-2011 |
20110314477 | FAIR SHARE SCHEDULING BASED ON AN INDIVIDUAL USER'S RESOURCE USAGE AND THE TRACKING OF THAT USAGE - Fair share scheduling to divide the total amount of available resource into a finite number of shares and allocate a portion of the shares to an individual user or group of users as a way to specify the resource proportion entitled by the user or group of users. The scheduling priority of jobs for a user or group of users depends on a customizable expression of allocated and used shares by that individual user or group of users. The usage by the user or group of users is accumulated and an exponential decay function is applied thereto in order to keep track of historic resource usage for a user or group of users by one piece of data and an update timestamp. | 12-22-2011 |
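The abstract above describes tracking historic usage with a single accumulated figure, an update timestamp, and an exponential decay. A minimal Python sketch of that bookkeeping, assuming a hypothetical half-life constant (the patent leaves the decay expression customizable):

```python
# Track a user's historic resource usage as one decayed number plus a
# timestamp; older usage counts exponentially less toward scheduling priority.
class UsageRecord:
    HALF_LIFE = 3600.0  # seconds; illustrative decay constant, an assumption

    def __init__(self, now=0.0):
        self.decayed_usage = 0.0
        self.last_update = now

    def add_usage(self, amount, now):
        # decay the historic figure forward to 'now', then add the new usage
        elapsed = now - self.last_update
        decay = 0.5 ** (elapsed / self.HALF_LIFE)
        self.decayed_usage = self.decayed_usage * decay + amount
        self.last_update = now

r = UsageRecord()
r.add_usage(100.0, now=0.0)
r.add_usage(0.0, now=3600.0)   # one half-life later, the 100 has decayed to 50
```

Keeping only the pair (decayed figure, timestamp) — rather than a full usage log — is what lets the scheme scale to many users: each update folds the entire history into one multiply-and-add.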
20110321052 | MULTI-PRIORITY COMMAND PROCESSING AMONG MICROCONTROLLERS - A method, system and computer program product for serially transmitting processor commands of different execution priority. A front-end processor, for example, serially receives processor commands. A low-priority queue coupled to the front-end processor stores low-priority commands, and a high-priority queue coupled to the front-end processor stores high-priority commands. A controller enables transmission of commands from either the low-priority queue or the high-priority queue for execution. | 12-29-2011 |
20110321053 | MULTIPLE LEVEL LINKED LRU PRIORITY - A method that includes providing LRU selection logic which controllably pass requests for access to computer system resources to a shared resource via a first level and a second level, determining whether a request in a request group is active, presenting the request to LRU selection logic at the first level, when it is determined that the request is active, determining whether the request is a LRU request of the request group at the first level, forwarding the request to the second level when it is determined that the request is the LRU request of the request group, comparing the request to an LRU request from each of the request groups at the second level to determine whether the request is a LRU request of the plurality of request groups, and selecting the LRU request of the plurality of request groups to access the shared resource. | 12-29-2011 |
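The two-level LRU selection above can be illustrated with a small Python sketch. Under assumed semantics, each request group forwards its least-recently-used active request to a second level, which then grants the shared resource to the LRU request among the group winners; last-used timestamps stand in for the patent's LRU selection logic.

```python
# Two-level LRU arbitration over groups of resource requests.
def select_lru(groups):
    """groups: list of lists of (request_id, last_used_time, active)."""
    winners = []
    for group in groups:
        # first level: the LRU *active* request of this group, if any
        active = [(t, rid) for rid, t, ok in group if ok]
        if active:
            winners.append(min(active))
    # second level: the LRU request among all group winners gets access
    return min(winners)[1] if winners else None

groups = [
    [("a0", 5, True), ("a1", 2, True)],       # a1 is this group's LRU request
    [("b0", 1, False), ("b1", 7, True)],      # b0 is inactive, so b1 wins here
]
```

Splitting the comparison into per-group and cross-group stages keeps each comparator small, which is the structural point of the linked-level design.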
20110321054 | SYSTEMS AND METHODS FOR MANAGED SERVICE DELIVERY IN 4G WIRELESS NETWORKS - Systems and methods for managed service delivery at the edge in 4G wireless networks for: dynamic QoS (Quality of Service) provisioning and prioritization of sessions based on the task (current, future) of the workflow instance; predicting the current and future network requirements based on the current and future tasks of all business process sessions and prepare session QoS accordingly; providing an audit trail of business process execution; and reporting on business process execution. | 12-29-2011 |
20120005683 | Data Processing Workload Control - Data processing workload control in a data center is provided, where the data center includes computers whose operations consume power and a workload controller composed of automated computing machinery that controls the overall data processing workload in the data center. The data processing workload is composed of a plurality of specific data processing jobs, including scheduling, by the workload controller in dependence upon power performance information, the data processing jobs for execution upon the computers in the data center, the power performance information including power consumption at a plurality of power-conserving states for each computer in the data center that executes data processing jobs and dispatching by the workload controller the data processing jobs as scheduled for execution on computers in the data center. | 01-05-2012 |
20120005684 | PRIORITY ROLLBACK PROTOCOL - Mechanisms for enforcing limits to resource access are provided. In some embodiments, synchronization tools are used to reduce the worst case execution time of selected processing sequences. In one example, instructions from a first processing sequence are rolled back using rollback information stored in a data structure if a higher priority processing sequence seeks access to the resource. | 01-05-2012 |
20120011515 | Resource Consumption Template Processing Model - In one embodiment, a method determines a task to execute in a computer processing system. A resource consumption template from a plurality of resource consumption templates is determined for the task. The plurality of resource consumption templates have different priorities. A computer processing system determines resources for the task based on the determined resource consumption template. Also, the computer processing system processes the task using the allocated resources. The processing of the task is prioritized based on the priority of the resource consumption template. | 01-12-2012 |
20120023499 | DETERMINING WHETHER A GIVEN DIAGRAM IS A CONCEPTUAL MODEL - Systems and methods for scheduling events in a virtualized computing environment are provided. In one embodiment, the method comprises scheduling one or more events in a first event queue implemented in a computing environment, in response to determining that the number of events in the first event queue is greater than a first threshold value, wherein the first event queue comprises a first set of events received for the purpose of scheduling, wherein said first set of events remains unscheduled; mapping the one or more events in the first event queue to one or more server resources in a virtualized computing environment; and receiving a second set of events included in a second event queue, wherein one or more events in the second set of events are defined as having a higher priority than one or more events in the first event queue that have or have not yet been scheduled. | 01-26-2012 |
20120023500 | DYNAMICALLY ADJUSTING PRIORITY - A method to dynamically adjust priority may include providing a boost, by a processing device, to an element relative to at least one other element in response to a boost feature associated with the element being activated. Providing the boost to the element may include providing a predetermined longer duration of use of a shared use resource to the element relative to the at least one other element based on a boost setting associated with the element. The boost results in adjusting a priority of the element by allowing the element to complete a task in a shorter time period. | 01-26-2012 |
20120023501 | HIGHLY SCALABLE SLA-AWARE SCHEDULING FOR CLOUD SERVICES - An efficient cost-based scheduling method called incremental cost-based scheduling, iCBS, maps each job, based on its arrival time and SLA function, to a fixed point in the dual space of linear functions. Due to this mapping, in the dual space, the job will not change their locations over time. Instead, at the time of selecting the next job with the highest priority to execute, a line with appropriate angle in the query space is used to locate the current job with the highest CBS score in logarithmic time. Because only those points that are located on the convex hull in the dual space can be chosen, a dynamic convex hull maintaining method incrementally maintains the job with the highest CBS score over time. | 01-26-2012 |
20120023502 | ESTABLISHING THREAD PRIORITY IN A PROCESSOR OR THE LIKE - In a multi-threaded processor, one or more variables are set up in memory (e.g., a register) to indicate which of a plurality of executable threads has a higher priority. Once the variable is set, several embodiments are presented for granting higher priority processing to the designated thread. For example, more instructions from the higher priority thread may be executed as compared to the lower priority thread. Also, a higher priority thread may be given comparatively more access to a given resource, such as memory or a bus. | 01-26-2012 |
20120030682 | Dynamic Priority Assessment of Multimedia for Allocation of Recording and Delivery Resources - Techniques are provided to allocate resources used for recording multimedia or to retrieve recorded content and deliver it to a recipient. A request associated with multimedia for access to resources is received. A context associated with the multimedia is determined. Resources for the multimedia are allocated based on the context. | 02-02-2012 |
20120036512 | ENHANCED SHORTEST-JOB-FIRST MEMORY REQUEST SCHEDULING - In at least one embodiment of the invention, a method includes scheduling a memory request associated with a thread executing on a processing system. The scheduling is based on a job length of the thread and a priority step function of job length. The thread is one of a plurality of threads executing on the processing system. In at least one embodiment of the method, the priority step function is a function of ⌈x/2^n⌉ for x ≤ m and P(x) = m/2 | 02-09-2012 |
20120042318 | AUTOMATIC PLANNING OF SERVICE REQUESTS - A method, system, and computer usable program product for automatic planning of service requests are provided in the illustrative embodiments. At an application executing in a computer, information is located in a ticket corresponding to the service request, the information being usable for categorizing the ticket. Using the information, a set of records is selected from a ticket history repository, the set of records including data representing a set of tickets processed before the ticket. A second ticket in the set of tickets includes information corresponding to the information in the ticket being processed. A category of the second ticket is selected as a suggested category for the ticket. A priority associated with the suggested category is identified. The suggested category and the priority are recommended for the ticket. | 02-16-2012 |
20120047509 | Systems and Methods for Improving Performance of Computer Systems - Priorities of an application and/or processes associated with an application executing on a computer is determined according to user-specific usage patterns of the application and stored for subsequent use, analysis and distribution. | 02-23-2012 |
20120047510 | IMAGE FORMING DEVICE - An image forming device includes a priority task startup detection unit to detect that startup of a priority task is completed, a job acceptance unit configured to change a status to a job acceptable status and accept a job when it is detected that the startup of the priority task is completed, a first startup control unit to start the non-priority task when a predetermined time has elapsed since it is detected that the startup of the priority task is completed, and a second startup control unit to start the non-priority task if a job is accepted between the time it is detected that the startup of the priority task is completed and the time the predetermined time has elapsed, and if all processing of all jobs, including the accepted job, is terminated. | 02-23-2012 |
20120060163 | METHODS AND APPARATUS ASSOCIATED WITH DYNAMIC ACCESS CONTROL BASED ON A TASK/TROUBLE TICKET - In some embodiments, an apparatus includes a memory, a processing device, a task division module implemented within at least one of the memory or the processing device, and a dynamic authentication module implemented within at least one of the memory or the processing device. The task division module is operable to receive a request associated with a task to be performed and to divide the task into multiple subtasks. The dynamic authentication module is operable to provide an access right to an operator from a set of operators assigned a subtask from the multiple subtasks. The access right for the operator is an access right to complete the subtask assigned to that operator from the set of operators. | 03-08-2012 |
20120060164 | METHOD FOR REGISTERING AND SCHEDULING EXECUTION DEMANDS - A method for registering and scheduling execution demands comprises the steps of: providing an execution demand register having a plurality of execution demand registering flags describing whether an identical number of jobs have registered execution demands or not, and the priorities thereof; providing a lookup device, and using all possible values of the execution demand registering flags as addresses to respectively store therein a job sequence permutation, an initial position, and a registering number corresponding to the job sequence permutation; when a job has to be executed successively, setting the value of the execution demand registering flag corresponding to the job; and in scheduling, using the value of the execution demand registering flag of the updated execution demand register as a lookup address to acquire the initial position and registering number from the lookup device, and finding the job sequence permutation according to the acquired initial position and registering number to complete scheduling. | 03-08-2012 |
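The register-and-lookup scheme above amounts to using the flag register's value as an index into a precomputed table of job sequences. A simplified Python sketch, with assumed encodings: one bit per job, and a table entry per possible register value (the patent's initial-position/registering-number indirection is collapsed into a direct table for clarity).

```python
# Precomputed lookup device: for every possible value of the execution
# demand register, the job sequence permutation for the flagged jobs.
JOBS = ["J0", "J1", "J2"]

LOOKUP = {}
for flags in range(2 ** len(JOBS)):
    # illustrative permutation: flagged jobs in index order
    LOOKUP[flags] = [JOBS[i] for i in range(len(JOBS)) if flags & (1 << i)]

def schedule(register):
    """Use the register value as a lookup address to get the schedule."""
    return LOOKUP[register]

reg = 0
reg |= 1 << 0   # register an execution demand for J0
reg |= 1 << 2   # register an execution demand for J2
```

The appeal of the scheme is that scheduling at run time is a single table read, constant-time regardless of how many jobs are flagged; all sequencing work is paid once, up front.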
20120079490 | DISTRIBUTED WORKFLOW IN LOOSELY COUPLED COMPUTING - A method that can be used in a distributed workflow system that uses loosely coupled computation of stateless nodes to bring computation tasks to the compute nodes is disclosed. The method can be employed in a computing system, such as cloud computing system, that can generate a computing task separable into work units and performed by a set of distributed and decentralized workers. In one example, the method arranges the work units into a directed acyclic graph representing execution priorities between the work units. The plurality of distributed and decentralized workers query the directed acyclic graph for work units ready for execution based upon the directed acyclic graph. In one example, the method is included in a computer readable storage medium as a software program. | 03-29-2012 |
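The DAG-of-work-units model above can be sketched briefly. In this illustrative Python version (data layout assumed, not from the patent), the graph maps each work unit to its prerequisites, and a stateless worker's "query" simply asks for units whose prerequisites have all completed; the graph is assumed acyclic, as the abstract requires.

```python
# A worker queries the directed acyclic graph for work units that are
# ready: not yet done, with every prerequisite completed.
def ready_units(dag, completed):
    return sorted(u for u, prereqs in dag.items()
                  if u not in completed and prereqs <= completed)

dag = {
    "build":   set(),
    "test":    {"build"},
    "package": {"build"},
    "release": {"test", "package"},
}

done = set()
executed = []
while len(done) < len(dag):
    # each pass models decentralized workers pulling whatever is ready
    for unit in ready_units(dag, done):
        executed.append(unit)
        done.add(unit)
```

Because readiness is derived entirely from the shared graph plus the completed set, workers need no coordination state of their own — which is the "loosely coupled, stateless nodes" property the abstract highlights.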
20120079491 | THREAD CRITICALITY PREDICTOR - Each thread of a multi-threaded application is assigned a ranking, referred to as thread criticality, based on the amount of time the thread is expected to take to complete one or more operations associated with the thread. More resources are assigned to threads having a higher thread criticality, in order to increase the rate at which the thread completes its operations. Thread criticality is determined using a perceptron model, whereby the thread criticality for a thread is a weighted sum of a set of data processing device performance characteristics associated with the thread, such as the number of instruction cache misses and data cache misses experienced by the thread. The weights of the perceptron model can be repeatedly adjusted over time based on repeated measurements that indicate the relative speed with which each thread is completing its operations. | 03-29-2012 |
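The perceptron model above — criticality as a weighted sum of per-thread performance characteristics, with weights adjusted over time — can be sketched as follows. Feature names, the learning rate, and the update rule are assumptions for illustration; the patent specifies only the weighted-sum form and repeated adjustment.

```python
# Thread criticality as a perceptron: a weighted sum of performance counters.
FEATURES = ["icache_misses", "dcache_misses", "stall_cycles"]

def criticality(weights, sample):
    """Weighted sum of one thread's performance characteristics."""
    return sum(w * sample[f] for w, f in zip(weights, FEATURES))

def adjust(weights, sample, slow, rate=0.01):
    """Nudge weights up on features of a thread observed to progress slowly,
    down otherwise (a hypothetical training rule)."""
    sign = 1.0 if slow else -1.0
    return [w + sign * rate * sample[f] for w, f in zip(weights, FEATURES)]

w = [1.0, 1.0, 1.0]
t1 = {"icache_misses": 10, "dcache_misses": 40, "stall_cycles": 5}
t2 = {"icache_misses": 2,  "dcache_misses": 3,  "stall_cycles": 1}

# t1 scores higher, so it would be granted more resources
more_critical = criticality(w, t1) > criticality(w, t2)

# later measurement shows t1 progressing slowly: reinforce its features
w = adjust(w, t1, slow=True)
```

The adjustment step is what distinguishes this from a fixed heuristic: over repeated measurements the weights come to reflect which counters actually predict slow progress on the machine at hand.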
20120084784 | SYSTEM AND METHOD FOR MANAGING MEMORY RESOURCE(S) OF A WIRELESS HANDHELD COMPUTING DEVICE - A method and system for managing one or more memory resources of a wireless handheld computing device is described. The method and system may include receiving a request to initiate a web browser module and receiving input for a web address. The method and system may also include receiving a file corresponding to the web address and reviewing one or more objects present within the file. The method and system may determine if an object already exists in the one or more memory resources. And if the object does not exist in the one or more memory resources, then the method and system may calculate a priority for the object. The priority of the object may then be assigned and stored. It may also be determined whether the current object will exceed the threshold of the one or more memory resources, and other objects with lower priority may be discarded as needed. | 04-05-2012 |
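The admit-or-evict flow above can be illustrated with a small Python sketch. Priority values, sizes, and the eviction order are assumptions: here a new object is admitted only if discarding strictly lower-priority objects (lowest first) brings usage within the capacity threshold, and nothing is discarded on a failed admission.

```python
# Priority-based object cache for a memory-constrained device.
def store_object(cache, capacity, name, size, priority):
    """cache maps name -> (size, priority). Returns True if stored."""
    if name in cache:
        return True                      # object already exists; reuse it
    used = sum(s for s, _ in cache.values())
    # pick lower-priority victims, lowest priority first, until it fits
    victims = []
    for victim in sorted(cache, key=lambda n: cache[n][1]):
        if used + size <= capacity:
            break
        if cache[victim][1] < priority:
            used -= cache[victim][0]
            victims.append(victim)
    if used + size > capacity:
        return False                     # would exceed threshold; discard nothing
    for victim in victims:
        del cache[victim]
    cache[name] = (size, priority)
    return True

cache = {"logo": (40, 5), "ad": (50, 1)}
stored = store_object(cache, capacity=100, name="page", size=60, priority=3)
```

Deferring the deletions until the fit check passes keeps the operation all-or-nothing: a large low-priority object can never flush the cache and then fail to be stored anyway.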
20120089984 | Performance Monitor Design for Instruction Profiling Using Shared Counters - Counter registers are shared among multiple threads executing on multiple processor cores. An event within the processor core is selected. A multiplexer in front of each of a number of counters is configured to route the event to a counter. A number of counters are assigned for the event to each of a plurality of threads running for a plurality of applications on a plurality of processor cores, wherein each of the counters includes a thread identifier in the interrupt thread identification field and a processor identifier in the processor identification field. The number of counters is configured to have a number of interrupt thread identification fields and a number of processor identification fields to identify a thread that will receive a number of interrupts. | 04-12-2012 |
20120089985 | Sharing Sampled Instruction Address Registers for Efficient Instruction Sampling in Massively Multithreaded Processors - Sampled instruction address registers are shared among multiple threads executing on a plurality of processor cores. Each of a plurality of sampled instruction address registers are assigned to a particular thread running for an application on the plurality of processor cores. Each of the sampled instruction address registers are configured by storing in each of the sampled instruction address registers a thread identification of the particular thread in a thread identification field and a processor identification of a particular processor on which the particular thread is running in a processor identification field. | 04-12-2012 |
20120096468 | COMPUTE CLUSTER WITH BALANCED RESOURCES - A scheduler for a compute cluster that allocates computing resources to jobs to achieve a balanced distribution. The balanced distribution maximizes the number of executing jobs to provide fast response times for all jobs by, to the extent possible, assigning a designated minimum for each job. If necessary to achieve this minimum distribution, resources in excess of a minimum previously allocated to a job may be de-allocated, if those resources can be used to meet the minimum requirements of other jobs. Resources above those used to meet the minimum requirements of executing jobs are allocated based on a computed desired allocation, which may be developed based on respective job priorities. To meet the desired allocation, resources may be de-allocated from jobs having more than their desired allocation and re-allocated to jobs having less than their desired allocation of resources. | 04-19-2012 |
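The two-stage balanced distribution above — designated minimums first, then surplus toward desired allocations in priority order — can be sketched as follows. Job fields and the single-pass greedy surplus rule are assumptions; the patent's scheduler also de-allocates and re-allocates between running jobs, which this static sketch omits.

```python
# Balanced allocation of cluster resources across jobs.
def balance(jobs, total):
    """jobs: list of dicts with name, minimum, desired, priority (higher wins)."""
    alloc = {}
    free = total
    by_priority = sorted(jobs, key=lambda j: -j["priority"])
    # pass 1: meet each job's designated minimum, as far as capacity allows
    for job in by_priority:
        give = min(job["minimum"], free)
        alloc[job["name"]] = give
        free -= give
    # pass 2: distribute the surplus toward each job's desired allocation
    for job in by_priority:
        extra = min(job["desired"] - alloc[job["name"]], free)
        alloc[job["name"]] += extra
        free -= extra
    return alloc

jobs = [
    {"name": "A", "minimum": 2, "desired": 6, "priority": 1},
    {"name": "B", "minimum": 2, "desired": 4, "priority": 2},
]
alloc = balance(jobs, total=8)
```

Satisfying minimums before any surplus is handed out is what maximizes the number of concurrently executing jobs, the stated goal of the scheduler.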
20120096469 | SYSTEMS AND METHODS FOR DYNAMICALLY SCANNING A PLURALITY OF ACTIVE PORTS FOR WORK - Systems and methods for scanning ports for work are provided. One system includes one or more processors, multiple ports, a first tracking mechanism, and a second tracking mechanism for tracking high priority work and low priority work, respectively. The processor(s) is/are configured to perform the below method. One method includes scanning the ports, finding high priority work on a port, and accepting or declining the high priority work. The method further includes changing a designation of the processor to TRUE in the first tracking mechanism if the processor accepts the high priority work such that the processor is allowed to perform the high priority work on the port. Also provided are computer storage mediums including computer code for performing the above method. | 04-19-2012 |
20120096470 | PRIORITIZING JOBS WITHIN A CLOUD COMPUTING ENVIRONMENT - Embodiments of the present invention provide an approach to prioritize jobs (e.g., within a cloud computing environment) so as to maximize positive financial impacts (or to minimize negative financial impacts) for cloud service providers, while not exceeding processing capacity or failing to meet terms of applicable Service Level Agreements (SLAs). Specifically, under the present invention a respective income (i.e., a cost to the customer), a processing need, and set of SLA terms (e.g., predetermined priorities, time constraints, etc.) will be determined for each of a plurality of jobs to be performed. The jobs will then be prioritized in a way that: maximizes cumulative/collective income; stays within the total processing capacity of the cloud computing environment; and meets the SLA terms. | 04-19-2012 |
20120096471 | APPARATUS AND METHOD FOR EXECUTING COMPONENTS BASED ON THREAD POOL - An apparatus for executing components based on a thread pool includes a component executor configured to have a set priority and period, to register components having the set priority and period, and to execute the registered components. Further, the apparatus for executing the components based on the thread pool includes a thread pool configured to allocate a thread for executing the component executor; and an Operating System (OS) configured to create an event for allocating the thread to the component executor in each set period. | 04-19-2012 |
20120096472 | VIRTUAL QUEUE PROCESSING CIRCUIT AND TASK PROCESSOR - A queue control circuit controls the placement and retrieval of a plurality of tasks in a plurality of types of virtual queues. State registers are associated with respective tasks. Each of the state registers stores a task priority order, a queue ID of a virtual queue, and the order of placement in the virtual queue. Upon receipt of a normal placement command ENQ_TL, the queue control circuit establishes, in the state register for the placed task, QID of the virtual queue as the destination of placement and an order value indicating the end of the queue. When a reverse placement command ENQ_TP is received, QID of the destination virtual queue and an order value indicating the start of the queue are established. When a retrieval command DEQ is received, QID is cleared in the destination virtual queue. | 04-19-2012 |
20120102497 | Mobile Computing Device Activity Manager - A system and a method are disclosed for an activity manager providing a centralized component for allocating resources of a mobile computing device among various activities. An activity represents work performed using computing device resources, such as processor time, memory, storage device space or network connections. An application or system service requests generation of an activity by the activity manager, causing the activity manager to associate a priority level with the activity request and identify resources used by the activity. Based on the priority level, resources used and current resource availability of the mobile computing device, the activity manager determines when the activity is allocated mobile computing device resources. Using the priority level allows the activity manager to optimize performance of certain activities, such as activities receiving data from a user. | 04-26-2012 |
20120124589 | MATRIX ALGORITHM FOR SCHEDULING OPERATIONS - The present invention provides a method and apparatus for implementing a matrix algorithm for scheduling instructions. One embodiment of the method includes selecting a first subset of instructions so that each instruction in the first subset is the earliest in program order of instructions associated with a corresponding one of a plurality of sub-matrices of a matrix that has a plurality of matrix entries. Each matrix entry indicates the program order of one pair of instructions that are eligible for execution. This embodiment also includes selecting, from the first subset of instructions, the instruction that is earliest in program order based on matrix entries associated with the first subset of instructions. | 05-17-2012 |
20120124590 | MINIMIZING AIRFLOW USING PREFERENTIAL MEMORY ALLOCATION - One embodiment provides a method of controlling memory in a computer system. Airflow is generated through an enclosure at a variable airflow rate to cool a plurality of memory banks at different locations within the enclosure. The airflow rate is controlled as a function of the temperature of one or more of the memory banks. Memory workload is selectively allocated to the memory banks according to expected differences in airflow, such as differences in airflow temperature, at each of the different locations. | 05-17-2012 |
20120124591 | SCHEDULER AND RESOURCE MANAGER FOR COPROCESSOR-BASED HETEROGENEOUS CLUSTERS - A system and method for scheduling client-server applications onto heterogeneous clusters includes storing at least one client request of at least one application in a pending request list on a computer readable storage medium. A priority metric is computed for each application, where the computed priority metric is applied to each client request belonging to that application. The priority metric is determined based on estimated performance of the client request and load on the pending request list. The at least one client request of the at least one application is scheduled based on the priority metric onto one or more heterogeneous resources. | 05-17-2012 |
20120131588 | APPARATUS AND METHOD FOR DATA PROCESSING IN HETEROGENEOUS MULTI-PROCESSOR ENVIRONMENT - An apparatus for data processing in a heterogeneous multi-processor environment is provided. The apparatus includes an analysis unit configured to analyze 1) operations to be run in connection with data processing and 2) types and a number of processors available for the data processing, a partition unit configured to dynamically partition data into a plurality of data regions having different sizes based on the analyzed operations and operation-specific processor priority information, which is stored in advance of running the operations, and a scheduling unit configured to perform scheduling by allocating operations to be run in the data regions between the available processors. | 05-24-2012 |
20120137301 | RESOURCE UTILIZATION MANAGEMENT FOR A COMMUNICATION DEVICE - A technique for resource utilization management for a communication device includes provisioning | 05-31-2012 |
20120137302 | PRIORITY INFORMATION GENERATING UNIT AND INFORMATION PROCESSING APPARATUS - In an information processing device | 05-31-2012 |
20120144395 | Inter-Thread Data Communications In A Computer Processor - Inter-thread data communications in a computer processor with multiple hardware threads of execution, each hardware thread operatively coupled for communications through an inter-thread communications controller, where inter-thread communications is carried out by the inter-thread communications controller and includes: registering, responsive to one or more RECEIVE opcodes, one or more receiving threads executing the RECEIVE opcodes; receiving, from a SEND opcode of a sending thread, specifications of a number of derived messages to be sent to receiving threads and a base value; generating the derived messages, incrementing the base value once for each registered receiving thread so that each derived message includes a single integer as a separate increment of the base value; sending, to each registered receiving thread, a derived message; and returning, to the sending thread, an actual number of derived messages received by receiving threads. | 06-07-2012 |
20120144396 | Creating A Thread Of Execution In A Computer Processor - Creating a thread of execution in a computer processor, including copying, by a hardware processor opcode called by a user-level process, with no operating system involvement, register contents from a parent hardware thread to a child hardware thread, the child hardware thread being in a wait state, and changing, by the hardware processor opcode, the child hardware thread from the wait state to an ephemeral run state. | 06-07-2012 |
20120144397 | INFORMATION PROCESSING APPARATUS, METHOD, AND RECORDING MEDIUM - An information processing apparatus includes, a storage unit that stores an image to be transmitted, an update-frequency setter that sets, for respective sections set in the image to be transmitted, update frequencies of images stored for the sections in a predetermined period of time, an association-degree setter that sets association degrees to indicate degrees of association between the sections based on the update frequencies, a priority setter that identifies the section on which an operation is performed and sets a higher priority for the identified section and the section having a highest degree of association with the identified section than priorities for other sections, and a transmitter that transmits the image, stored by the storage unit, in sequence with the images stored for the sections whose set priority is higher first. | 06-07-2012 |
20120159498 | FAST AND LINEARIZABLE CONCURRENT PRIORITY QUEUE VIA DYNAMIC AGGREGATION OF OPERATIONS - Embodiments of the invention improve parallel performance in multi-threaded applications by serializing concurrent priority queue operations to improve throughput. An embodiment uses a synchronization protocol and aggregation technique that enables a single thread to handle multiple operations in a cache-friendly fashion while threads awaiting the completion of those operations spin-wait on a local stack variable, i.e., the thread continues to poll the stack variable until it has been set or cleared appropriately, rather than rely on an interrupt notification. A technique for an enqueue/dequeue (push/pop) optimization uses re-ordering of aggregated operations to enable the execution of two operations for the price of one in some cases. Other embodiments are described and claimed. | 06-21-2012 |
20120159499 | RESOURCE OPTIMIZATION - A method may include storing information associated with a number of tasks for processing a media file, where the information includes resource information identifying resources scheduled to fulfill the tasks. The method may also include identifying a first task associated with processing the media file, identifying a first resource scheduled to fulfill the first task, and determining whether the first resource is available to fulfill the first task. The method may further include determining, when the first resource is not available, whether an alternate resource is available to fulfill the first task, and scheduling, when an alternate resource is available, the alternate resource to fulfill the first task. | 06-21-2012 |
20120159500 | VALIDATION OF PRIORITY QUEUE PROCESSING - A method for validating outsourced processing of a priority queue includes configuring a verifier for independent, single-pass processing of priority queue operations that include insertion operations and extraction operations and priorities associated with each operation. The verifier may be configured to validate N operations using a memory space having a size that is proportional to the square root of N using an algorithm to buffer the operations as a series of R epochs. Extractions associated with each individual epoch may be monitored using arrays Y and Z. Insertions for the epoch k may be monitored using arrays X and Z. The processing of the priority queue operations may be verified based on the equality or inequality of the arrays X, Y, and Z. Hashed values for the arrays may be used to test their equality to conserve storage requirements. | 06-21-2012 |
20120159501 | SYNCHRONIZATION SCHEDULING APPARATUS AND METHOD IN REAL-TIME MULTI-CORE SYSTEM - A synchronization scheduling apparatus and method in a real-time multi-core system are described. The synchronization scheduling apparatus may include a plurality of cores, each having at least one wait queue, a storage unit to store information regarding a first core receiving a wake-up signal in a previous cycle among the plurality of cores, and a scheduling processor to schedule tasks stored in the at least one wait queue, based on the information regarding the first core. | 06-21-2012 |
20120167108 | Model for Hosting and Invoking Applications on Virtual Machines in a Distributed Computing Environment - The described method/system/apparatus uses intelligence to better allocate tasks/work items among the processors and computers in the cloud. A priority score may be calculated for each task/work unit for each specific processor. The priority score may indicate how well suited a task/work item is for a processor. The result is that tasks/work items may be more efficiently executed by being assigned to processors in the cloud that are better prepared to execute the tasks/work items. | 06-28-2012 |
20120167109 | FRAMEWORK FOR RUNTIME POWER MONITORING AND MANAGEMENT - Systems and methods of managing power in a computing platform may involve monitoring a runtime power consumption of two or more of a plurality of hardware components in the platform to obtain a plurality of runtime power determinations. The method can also include exposing one or more of the plurality of runtime power determinations to an operating system associated with the platform. | 06-28-2012 |
20120167110 | INFORMATION PROCESSING APPARATUS CAPABLE OF SETTING PROCESSING PRIORITY OF ACCESS, METHOD OF CONTROLLING THE INFORMATION PROCESSING APPARATUS, PROGRAM, AND STORAGE MEDIUM - An information processing apparatus that gives priority to an access made by a usual manual operation for execution of original functions of the apparatus, even when automatically programmed access for index creation from an external apparatus to the storage and the access for execution of original functions occur concurrently. A CPU causes a priority to be set for each process requested by a request. The CPU executes the processing based on the set priority, and returns a processing result to a requesting source. If the received request is a specific request, the CPU calculates the number of times that the time period elapsed between returning the response and receiving the next request falls within a predetermined time period. The CPU determines whether or not to change the priority based on the calculated number of times. | 06-28-2012 |
20120180059 | TIME-VALUE CURVES TO PROVIDE DYNAMIC QoS FOR TIME SENSITIVE FILE TRANSFERS - A method and apparatus have been shown and described that allow Quality of Service to be controlled at a temporal granularity. Time-value curves, generated for each task, ensure that mission resources are utilized in a manner which optimizes mission performance. It should be noted, however, that although the present invention has shown and described the use of time-value curves as applied to mission workflow tasks, the present invention is not limited to this application; rather, it can be readily appreciated by one of skill in the art that time-value curves may be used to optimize the delivery of any resource to any consumer by taking into account the dynamic environment of the consumer and resource. | 07-12-2012 |
20120180060 | PREDICTION BASED PRIORITY SCHEDULING - Systems and methods are provided that schedule task requests within a computing system based upon the history of task requests. The history of task requests can be represented by a historical log that monitors the receipt of high priority task request submissions over time. This historical log in combination with other user defined scheduling rules is used to schedule the task requests. Task requests in the computer system are maintained in a list that can be divided into a hierarchy of queues differentiated by the level of priority associated with the task requests contained within that queue. The user-defined scheduling rules give scheduling priority to the higher priority task requests, and the historical log is used to predict subsequent submissions of high priority task requests so that lower priority task requests that would interfere with the higher priority task requests will be delayed or will not be scheduled for processing. | 07-12-2012 |
20120185862 | Managing Scheduling of Processes - A mechanism dynamically modifies the base-priority of a spawned set of processes according to their actual resource utilization (CPU or I/O wait time) and to a priority class assigned to them during their startup. In this way it is possible to maximize the CPU and I/O resource usage without at the same time degrading the interactive experience of the users currently logged on the system. | 07-19-2012 |
20120192194 | Lock Free Acquisition and Release of a Semaphore in a Multi-Core Processor Environment - A method for an acquisition of a semaphore for a thread includes decrementing a semaphore count, storing a current thread context of the semaphore when the semaphore count is less than a first predetermined value, determining a release count of a pending queue associated with the semaphore where the pending queue indicates unpended threads of the semaphore, and adding the thread to the pending queue when the release count is less than a second predetermined value. | 07-26-2012 |
20120192195 | SCHEDULING THREADS - Scheduling threads in a multi-threaded/multi-core processor having a given instruction window, and scheduling a predefined number N of threads among a set of M active threads in each context switch interval are provided. The actual power consumption of each running thread during a given context switch interval is determined, and a predefined priority level is associated with each thread among the active threads based on the actual power consumption determined for the threads. The power consumption expected for each active thread during the next context switch interval in the current instruction window (CIW_Power_Th) is predicted, and a set of threads to be scheduled among the active threads are selected from the priority level associated with each active thread and the power consumption predicted for each active thread in the current instruction window. | 07-26-2012 |
20120192196 | COMMAND EXECUTION DEVICE, COMMAND EXECUTION SYSTEM, COMMAND EXECUTION METHOD AND COMMAND EXECUTION PROGRAM - In order to improve processing efficiency, a command execution device includes: a behavior type decision unit which decides a behavior type indicating the content of a data input/output operation, according to the content of data processing executed by an entered command; a command storage unit which refers to setting information set in advance for each of the behavior types, and stores the command in a command queue created for each priority level, based on the priority level included in the setting information; and a command execution unit which fetches, out of the commands stored in the command queue, a command stored in the section of the command queue having the highest priority level, and executes the command. | 07-26-2012 |
20120192197 | AUTOMATED CLOUD WORKLOAD MANAGEMENT IN A MAP-REDUCE ENVIRONMENT - A computing device associated with a cloud computing environment identifies a first worker cloud computing device from a group of worker cloud computing devices with available resources sufficient to meet required resources for a highest-priority task associated with a computing job including a group of prioritized tasks. A determination is made as to whether an ownership conflict would result from an assignment of the highest-priority task to the first worker cloud computing device based upon ownership information associated with the computing job and ownership information associated with at least one other task assigned to the first worker cloud computing device. The highest-priority task is assigned to the first worker cloud computing device in response to determining that the ownership conflict would not result from the assignment of the highest-priority task to the first worker cloud computing device. | 07-26-2012 |
20120198461 | METHOD AND SYSTEM FOR SCHEDULING THREADS - A method for scheduling a new thread involves identifying a criticality level of the new thread, selecting a processor group according to the criticality level of the new thread and an existing assigned utilization level of the processor group to obtain a selected processor group, increasing an assigned utilization level of the selected processor group based on the new thread, and executing the new thread by the selected processor group. | 08-02-2012 |
20120198462 | WORKFLOW CONTROL OF RESERVATIONS AND REGULAR JOBS USING A FLEXIBLE JOB SCHEDULER - A scheduler receives at least one flexible reservation request for scheduling in a computing environment comprising consumable resources. The flexible reservation request specifies a duration and at least one required resource. The consumable resources comprise at least one machine resource and at least one floating resource. The scheduler creates a flexible job for the at least one flexible reservation request and places the flexible job in a prioritized job queue for scheduling, wherein the flexible job is prioritized relative to at least one regular job in the prioritized job queue. The scheduler adds a reservation set to a waiting state for the at least one flexible reservation request. The scheduler, responsive to detecting the flexible job positioned in the prioritized job queue for scheduling next and detecting a selection of consumable resources available to match the at least one required resource for the duration, transfers the selection of consumable resources to the reservation and sets the reservation to an active state, wherein the reservation is activated as the selection of consumable resources become available and has uninterrupted use of the selection of consumable resources for the duration by at least one job bound to the flexible reservation. | 08-02-2012 |
20120198463 | PIPELINE NETWORK DEVICE AND RELATED DATA TRANSMISSION METHOD - A pipeline structure having a plurality of pipelines with varying data rates is used for transmitting data between different layers in a network device. Important data is transmitted by a faster pipeline, while less important data is transmitted by a slower pipeline. The size of each pipeline may be dynamically adjusted according to the transmission status of each pipeline for improving the overall data efficiency. | 08-02-2012 |
20120198464 | SAFETY CONTROLLER AND SAFETY CONTROL METHOD - The present invention relates to time partitioning to prevent a failure of processing while suppressing execution delay of interrupt processing even when the interrupt processing is executed. A safety controller includes: a processor; a system program for controlling allocation of an execution time of the processor to a safety-related task, a non-safety-related task, and an interrupt processing task; and an interrupt handler. Upon generation of an interrupt, the processor executes the interrupt handler to reserve execution of the interrupt processing task as an execution reserved task, and executes the system program to schedule the tasks in accordance with scheduling information on a safety-related TP to which the safety-related task belongs, a non-safety-related TP to which the non-safety-related task belongs, and a reservation execution TP to which the execution reserved task belongs. When execution of a task in a previous TP is finished before the period of the previous TP prior to the reservation execution TP has expired, the execution time in the previous TP is allocated to the execution reserved task. | 08-02-2012 |
20120204184 | SIMULATION APPARATUS, METHOD, AND COMPUTER-READABLE RECORDING MEDIUM - A simulation apparatus is disclosed, including a group switching part. The group switching part refers to a priority management table, which manages priority information used to assign a CPU to multiple groups of tasks stored in a storage area, and changes the priorities of the multiple groups of tasks when an event occurs that activates a task to be executed during software verification by simulation. | 08-09-2012 |
20120204185 | WORKFLOW CONTROL OF RESERVATIONS AND REGULAR JOBS USING A FLEXIBLE JOB SCHEDULER - A scheduler receives flexible reservation requests for scheduling in a computing environment comprising consumable resources. The flexible reservation request specifies a duration and a required resource. The consumable resources comprise machine resources and floating resources. The scheduler creates a flexible job for the flexible reservation request and places the flexible job in a prioritized job queue for scheduling, wherein the flexible job is prioritized relative to at least one regular job in the prioritized job queue. The scheduler adds a reservation set to a waiting state for the flexible reservation request. The scheduler, responsive to detecting the flexible job positioned in the prioritized job queue for scheduling next and detecting a selection of consumable resources available to match the at least one required resource for the duration, transfers the selection of consumable resources to the reservation and sets the reservation to an active state. | 08-09-2012 |
20120210325 | Method And Apparatus Of Smart Power Management For Mobile Communication Terminals Using Power Thresholds - A method is provided for use in a mobile communication terminal configured to support a plurality of applications, wherein each application is executed by performing one or more tasks. The method includes, in response to a scheduling request from an application, obtaining an indication of power supply condition at a requested run-time of at least one of the tasks. The method further includes obtaining a prediction of a rate of energy usage by the task at the requested run-time, estimating, from the predicted rate of energy usage, a total amount of energy needed to complete the task, and making a scheduling decision for the task. The scheduling decision comprises making a selection from a group of two or more alternative dispositions for the task. The selection is made according to a criterion that relates the run-time power-supply condition to the predicted rate of energy usage by the task and to the estimate of total energy needed to complete the task. | 08-16-2012 |
20120210326 | Constrained Execution of Background Application Code on Mobile Devices - The subject disclosure is directed towards a technology by which background application code (e.g., provided by third-party developers) runs on a mobile device in a way that is constrained with respect to resource usage. A resource manager processes a resource reservation request for background code, to determine whether the requested resources meet constraint criteria for that type of background code. If the criteria are met and the resources are available, the resources are reserved, whereby the background code is ensured priority access to its reserved resources. As a result, a properly coded background application that executes within its constraints will not experience glitches or other problems (e.g., unexpected termination) and thereby provide a good user experience. | 08-16-2012 |
20120210327 | Method for Packet Flow Control Using Credit Parameters with a Plurality of Limits - The present invention relates to a processor and a method for processing a data packet, the method including steps of decreasing a value of a first credit parameter when the data packet is admitted to a processor at least partly based on the value of the first credit parameter and a first limit of the first credit parameter, and increasing the value of the first credit parameter, in dependence on a data storage level in a buffer in which the data packet is stored before being admitted to the processor, the value of the first credit parameter not being increased, so as to become larger than a second limit of the first credit parameter, when the buffer is empty. | 08-16-2012 |
20120216206 | METHODS AND SYSTEMS FOR MANAGING DATA - Systems and methods for managing data, such as metadata or index databases. In one exemplary method, a notification that an existing file has been modified or that a new file has been created is received by an indexing software component, which then, in response to the notification performs an indexing operation, where the notification is either not based solely on time or user input or the notification includes an identifier that identifies the file. Other methods in data processing systems and machine readable media are also described. | 08-23-2012 |
20120216207 | DYNAMIC TECHNIQUES FOR OPTIMIZING SOFT REAL-TIME TASK PERFORMANCE IN VIRTUAL MACHINE - Methods to dynamically improve soft real-time task performance in virtualized computing environments under the management of an enhanced hypervisor comprising a credit scheduler. The enhanced hypervisor analyzes the on-going performance of the domains of interest and of the virtualized data-processing system. Based on the performance metrics disclosed herein, some of the governing parameters of the credit scheduler are adjusted. Adjustments are typically performed cyclically, wherein the performance metrics of an execution cycle are analyzed and adjustments may be applied in a later execution cycle. In alternative embodiments, some of the analysis and tuning functions are in a separate application that resides outside the hypervisor. The performance metrics disclosed herein include: a “total-time” metric; a “timeslice” metric; a number of “latency” metrics; and a “count” metric. In contrast to prior art, the present invention enables on-going monitoring of a virtualized data-processing system accompanied by dynamic adjustments based on objective metrics. | 08-23-2012 |
20120216208 | In-Car-Use Multi-Application Execution Device - An in-car-use multi-application execution device is provided that ensures safety while maintaining convenience by securing operation of a plurality of applications and suppressing occurrence of a termination process within a limited processing capacity without degrading real-time performance. The in-car-use multi-application execution device dynamically predicts a processing time for each application, and schedules each application on the basis of the predicted processing time. If the scheduling reveals an application that fails to complete its processing within a prescribed cycle, a process is executed that terminates the application or degrades its function on the basis of a preset priority order. | 08-23-2012 |
20120222035 | Priority Inheritance in Multithreaded Systems - A method includes determining that a first task having a first priority is blocked from execution at a multithreaded processor by a second task having a second priority that is lower than the first priority. A temporary priority of the second task is set to be equal to an elevated priority, such that in response to the second task being preempted from execution by another task, the second task is rescheduled for execution based on the elevated priority identified by the temporary priority. | 08-30-2012 |
20120222036 | IMAGE FORMING APPARATUS - An MFP is provided with a main CPU for controlling operation of the MFP according to an operating condition set to the MFP, a job management table for sequentially registering input jobs by priority, and a job execution control portion for determining whether or not to permit execution of the job according to the order of registration from a job with high priority that is registered in the job management table. The job execution control portion calculates, based on a job condition of a job intended for permission determination, utilization of the CPU associated with execution of the job, then restricts an operating condition of the MFP in a case where the calculated CPU utilization exceeds a predetermined value, and permits execution of the job according to the restricted operating condition in a case where the CPU utilization when the operating condition is restricted becomes the predetermined value or lower. | 08-30-2012 |
20120233623 | USING A YIELD INDICATOR IN A HIERARCHICAL SCHEDULER - A method and system for scheduling the use of CPU time among processes using a scheduling tree having a yielding indicator. A scheduling tree represents a hierarchy of groups and processes that share central processing unit (CPU) time. A computer system assigns a yield indicator to a first node of the scheduling tree, which represents a first process that temporarily yields the CPU time. The computer system also assigns the yield indicator to each ancestor node of the first node in the scheduling tree. Each ancestor node represents a group to which the first process belongs. The computer system then selects a second process to run on the computer system based on the yield indicator in the scheduling tree. | 09-13-2012 |
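The yield-indicator propagation this abstract describes can be sketched as a small tree walk. The node layout and the pick policy (prefer non-yielding subtrees, fall back to yielding ones) are illustrative assumptions, not the patented scheduler.

```python
# Sketch of a yield indicator in a hierarchical scheduling tree: when a
# process yields, the indicator is set on its node and on every ancestor
# group node, and the picker prefers subtrees without the indicator.

class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        self.yielding = False
        if parent:
            parent.children.append(self)

def mark_yield(leaf):
    """Set the yield indicator on a process node and all of its ancestors."""
    node = leaf
    while node is not None:
        node.yielding = True
        node = node.parent

def pick(root):
    """Descend the tree, preferring non-yielding children; fall back to
    yielding children only when no other subtree is available."""
    node = root
    while node.children:
        non_yielding = [c for c in node.children if not c.yielding]
        node = (non_yielding or node.children)[0]
    return node
```

With one group yielding, the picker selects a process from a sibling group instead.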
20120233624 | APPARATUS, CONTROL METHOD, AND STORAGE MEDIUM - An apparatus includes a monitoring unit configured to monitor memory usage of a process in which multiple application programs are running, and a control unit configured to terminate one or more of the application programs when the memory usage of the process exceeds a first threshold. | 09-13-2012 |
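The threshold rule in this abstract can be shown in a few lines. The eviction order (largest consumer first) is an assumption of this sketch; the abstract only says "one or more" application programs are terminated when usage exceeds the first threshold.

```python
# Sketch: terminate application programs running in a shared process when
# their aggregate memory usage exceeds a threshold.

def enforce_memory_limit(apps, threshold):
    """apps: dict mapping app name -> memory usage. Mutates `apps` and
    returns the list of terminated app names (largest consumers first)."""
    terminated = []
    while apps and sum(apps.values()) > threshold:
        victim = max(apps, key=apps.get)  # evict the largest consumer
        terminated.append(victim)
        del apps[victim]
    return terminated
```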
20120246659 | TECHNIQUES TO OPTIMIZE UPGRADE TASKS - Techniques to prioritize and optimize the execution of upgrade operations are described. A technique may include determining the size of data blocks that are to be copied from one storage medium to another, and the dependencies of upgrade tasks on the data blocks and on other tasks. A task may be prioritized according to a weight that includes the cumulative sizes of the data blocks that it and its dependent tasks depend on. A data block copying may be prioritized according to the cumulative weights of the tasks that depend on that data block. Some embodiments may perform several data copying and/or tasks in parallel, rather than sequentially. Other embodiments are described and claimed. | 09-27-2012 |
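The weighting rule described above can be sketched as follows. The input structures (per-task block needs, a task-dependency map) and the exact direction of the dependency traversal are assumptions made for illustration; the abstract does not fix a data model.

```python
# Sketch of the prioritization rule: a task's weight is the cumulative size
# of the data blocks needed by it and its dependent tasks, and a block's
# copy priority is the cumulative weight of the tasks that need that block.

def task_weight(task, task_blocks, task_deps, block_size):
    """Sum the sizes of all blocks needed by `task` and its dependents."""
    seen, stack, blocks = set(), [task], set()
    while stack:
        t = stack.pop()
        if t in seen:
            continue
        seen.add(t)
        blocks.update(task_blocks.get(t, ()))
        stack.extend(task_deps.get(t, ()))
    return sum(block_size[b] for b in blocks)

def block_priority(block, task_blocks, task_deps, block_size):
    """Cumulative weight of every task that directly needs `block`."""
    return sum(task_weight(t, task_blocks, task_deps, block_size)
               for t, needed in task_blocks.items() if block in needed)
```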
20120254882 | Controlling priority levels of pending threads awaiting processing - A data processing apparatus comprises processing circuitry arranged to process processing threads using resources accessible to the processing circuitry. A pipeline is provided for handling at least two pending threads awaiting processing by the processing circuitry. The pipeline includes at least one resource-requesting pipeline stage for requesting access to resources for the pending threads. A priority controller controls priority levels of the pending threads. The priority levels define the priority with which pending threads are granted access to resources. When a pending thread reaches a final pipeline stage, if the requested resources are not yet available, then the priority level of that thread is selectively raised and the thread is returned to the first pipeline stage of the pipeline. If the requested resources are available, then the thread is forwarded from the pipeline. | 10-04-2012 |
20120260256 | WORKLOAD MANAGEMENT OF A CONCURRENTLY ACCESSED DATABASE SERVER - Several methods and a system of a workload management of a concurrently accessed database server are disclosed. In one embodiment, a method includes applying a weight to a service class. The method also includes generating a priority of the service class. In addition, the method includes selecting a group based on the weight of the service class. The method further includes determining a priority level based on the priority of the service class. The method also includes generating a characteristic of a shadow process through the weight and the priority of the service class. In addition, the method includes executing a query. | 10-11-2012 |
20120260257 | SCHEDULING THREADS IN MULTIPROCESSOR COMPUTER - A computer program product for scheduling threads in a multiprocessor computer comprises computer program instructions configured to select a thread in a ready queue to be dispatched to a processor and determine whether an interrupt mask flag is set in a thread control block associated with the thread. If the interrupt mask flag is set in the thread control block associated with the thread, the computer program instructions are configured to select a processor, set a current processor priority register of the selected processor to least favored, and dispatch the thread from the ready queue to the selected processor. | 10-11-2012 |
20120266175 | METHOD AND DEVICE FOR BALANCING LOAD OF MULTIPROCESSOR SYSTEM - A method and a device for balancing the load of a multiprocessor system relate to the resource allocation field of multiprocessor systems, for reducing the number of remote-node memory accesses, or the amount of data copied, when processes migrated to a target Central Processing Unit (CPU) are executed. The method for balancing the load of the multiprocessor system comprises: determining the local CPU and the target CPU in the multiprocessor system; ranking the migration priorities based on the size of the memory space occupied by the processes in the queue of the local CPU, wherein the less memory space a process occupies, the higher its migration priority; and migrating the process with the highest migration priority, other than any process currently executing in the queue of the local CPU, to the target CPU. | 10-18-2012 |
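The selection rule above (smallest memory footprint migrates first, excluding whatever is currently running) can be sketched in a few lines. The run-queue layout as `(pid, memory)` pairs is an illustrative assumption.

```python
# Sketch of the migration-candidate rule: among processes queued on the
# overloaded local CPU, excluding the currently executing process, pick the
# one occupying the least memory, since it is cheapest to move to the
# target CPU's node.

def pick_migration_candidate(run_queue, running_pid):
    """run_queue: list of (pid, memory_bytes) pairs on the local CPU.
    Returns the pid to migrate, or None if nothing is eligible."""
    candidates = [(pid, mem) for pid, mem in run_queue if pid != running_pid]
    if not candidates:
        return None
    return min(candidates, key=lambda pm: pm[1])[0]
```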
20120284727 | Scheduling in Mapreduce-Like Systems for Fast Completion Time - A method and system for scheduling tasks is provided. A plurality of lower bound completion times is determined, using one or more computer processors and memory, for each of a plurality of jobs, each of the plurality of jobs including a respective subset plurality of tasks. A task schedule is determined for each of the plurality of processors based on the lower bound completion times. | 11-08-2012 |
20120284728 | Method for the Real-Time Ordering of a Set of Noncyclical Multi-Frame Tasks - A method for real-time scheduling of an application having a plurality m of software tasks executing at least one processing operation on a plurality N of successive data frames, each of said tasks i being defined at least, for each of said frames j, by an execution time C | 11-08-2012 |
20120291037 | METHOD AND APPARATUS FOR PRIORITIZING PROCESSOR SCHEDULER QUEUE OPERATIONS - A method and processor are described for implementing programmable priority encoding to track relative age order of operations in a scheduler queue. The processor may comprise a scheduler queue configured to maintain an ancestry table including a plurality of consecutively numbered row entries and a plurality of consecutively numbered columns. Each row entry includes one bit in each of the columns. Pickers are configured to pick an operation that is ready for execution based on the age of the operation as designated by the ancestry table. The column number of each bit having a select logic value indicates an operation that is older than the operation associated with the number of the row entry that the bit resides in. | 11-15-2012 |
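The ancestry table described above can be sketched as a bit matrix: row entry *i* holds one bit per column *j*, and a set bit means operation *j* is older than operation *i*. The list-based representation and the picker loop below are illustrative assumptions, not the hardware design.

```python
# Sketch of an ancestry table tracking relative age of scheduler-queue
# operations. rows[i][j] == 1 means operation j is older than operation i.

class AncestryTable:
    def __init__(self, size):
        self.size = size
        self.rows = [[0] * size for _ in range(size)]
        self.live = [False] * size

    def insert(self, i):
        """A newly queued operation i is younger than every live operation."""
        self.rows[i] = [1 if self.live[j] else 0 for j in range(self.size)]
        self.live[i] = True

    def remove(self, i):
        """Operation i completed: clear its liveness and its column."""
        self.live[i] = False
        for row in self.rows:
            row[i] = 0

    def pick_oldest(self, ready):
        """Among ready operations, pick the one that every other ready
        operation's row marks as older."""
        for i in ready:
            if all(self.rows[j][i] for j in ready if j != i):
                return i
        return None
```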
20120291038 | METHOD FOR REDUCING INTER-PROCESS COMMUNICATION LATENCY - A method for handling a system call in an operating system executed by a processor is disclosed. The method comprises the steps of receiving the system call to a called process from a calling process; if the system call is a synchronous system call and if the priority of the calling process is higher than the priority of the called process, increasing the priority of the called process to be at least the priority of the calling process; and switching context to the called process. | 11-15-2012 |
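The handling rule can be condensed to a single function. Representing processes as dicts, and the convention that a higher number means higher priority, are assumptions of this sketch.

```python
# Sketch: on a synchronous system call, the called process inherits at
# least the caller's priority before the context switch, so the caller is
# not delayed behind lower-priority work in the callee.
# Convention (an assumption): higher number = higher priority.

def handle_system_call(caller, called, synchronous):
    """caller/called: dicts with a 'priority' key.
    Returns the process to switch context to."""
    if synchronous and caller["priority"] > called["priority"]:
        called["priority"] = caller["priority"]  # raise to the caller's level
    return called  # context switches to the called process
```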
20120297394 | LOCK CONTROL IN MULTIPLE PROCESSOR SYSTEMS - A computer system comprising a plurality of processors and one or more storage devices. The system is arranged to execute a plurality of tasks, each task comprising threads and each task being assigned a priority from 1 to a whole number greater than 1, each thread of a task assigned the same priority as the task and each thread being executed by a processor. The system also provides lock and unlock functions arranged to lock and unlock data stored by a storage device responsive to such a request from a thread. A method of operating the system comprises maintaining a queue of threads that require access to locked data, maintaining an array comprising, for each priority, duration and/or throughput information for threads of the priority, setting a wait flag for a priority in the array according to a predefined algorithm calculated from the duration and/or throughput information in the array. | 11-22-2012 |
20120304186 | Scheduling Mapreduce Jobs in the Presence of Priority Classes - Techniques for scheduling one or more MapReduce jobs in a presence of one or more priority classes are provided. The techniques include obtaining a preferred ordering for one or more MapReduce jobs, wherein the preferred ordering comprises one or more priority classes, prioritizing the one or more priority classes subject to one or more dynamic minimum slot guarantees for each priority class, and iteratively employing a MapReduce scheduler, once per priority class, in priority class order, to optimize performance of the one or more MapReduce jobs. | 11-29-2012 |
20120304187 | DYNAMIC TASK ASSOCIATION - An apparatus, system, and method are disclosed for dynamic task association. The method includes maintaining a plurality of projects. Each project may include a plurality of tasks specific to the project. The method may also include detecting a change in a particular task of a first project that affects one or more tasks of a second project. The first project and the second project may be of the plurality of projects and the second project may be independent from the first project. The method may also include updating one or more tasks of the second project affected by the change in response to detecting the change in the particular task of the first project. | 11-29-2012 |
20120311596 | INFORMATION PROCESSING APPARATUS, COMPUTER-READABLE STORAGE MEDIUM HAVING STORED THEREIN INFORMATION PROCESSING PROGRAM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING SYSTEM - A program reception task and an advertisement reception task are set. The program reception task defines an execution content which includes an execution schedule of a reception process for program data including video data and audio data of programs, and the advertisement reception task defines an execution content including an execution schedule of a reception process for advertisement data including at least one of video data, still image data, and audio data of advertisements. Then, the program reception task and the advertisement reception task are executed based on the execution schedules set in the program reception task and the advertisement reception task, respectively, to perform reception of the program data and reception of the advertisement data from a server independently from each other. | 12-06-2012 |
20120324461 | Effective Management Of Blocked-Tasks In Preemptible Read-Copy Update - A technique for managing read-copy update readers that have been preempted while executing in a read-copy update read-side critical section. A single blocked-tasks list is used to track preempted reader tasks that are blocking an asynchronous grace period, preempted reader tasks that are blocking an expedited grace period, and preempted reader tasks that require priority boosting. In example embodiments, a first pointer may be used to segregate the blocked-tasks list into preempted reader tasks that are and are not blocking a current asynchronous grace period. A second pointer may be used to segregate the blocked-tasks list into preempted reader tasks that are and are not blocking an expedited grace period. A third pointer may be used to segregate the blocked-tasks list into preempted reader tasks that do and do not require priority boosting. | 12-20-2012 |
20120324462 | VIRTUAL FLOW PIPELINING PROCESSING ARCHITECTURE - A computer system for embodying a virtual flow pipeline programmable processing architecture for a plurality of wireless protocol applications is disclosed. The computer system includes a plurality of functional units for executing a plurality of tasks, a synchronous task queue and a plurality of asynchronous task queues for linking the plurality of tasks to be executed by the functional units in a priority order, and a virtual flow pipeline controller. The virtual flow pipeline controller includes a processing engine for processing a plurality of commands; a scheduler, communicatively coupled to the processing engine, for selecting a next task for processing at run time for each of the plurality of functional units; a processing engine controller, communicatively coupled to the processing engine, for providing commands and arguments to the processing engine and monitoring command completion; and a task flow manager, communicatively coupled to the processing engine controller, for activating the next task for processing. Also disclosed is a computer-implemented method for executing a plurality of wireless protocol applications embodying a virtual flow pipeline programmable processing architecture in a computer system. | 12-20-2012 |
20120324463 | System for Managing Data Collection Processes - A system and process for managing data collection processes is disclosed. An apparatus that incorporates teachings of the present disclosure can include a data collection system having a controller element that assigns to each of the processes a query interval according to the priority level of the data collection process for requesting use of processing resources, receives one or more requests from the processes, once per respective query interval, for use of at least a portion of the available processing resources, and releases at least a portion of the available processing resources to a requesting one of the processes when the use of the available processing resources exceeds a utilization threshold. Additional embodiments are disclosed. | 12-20-2012 |
20120331473 | ELECTRONIC DEVICE AND TASK MANAGING METHOD - A task managing method is configured to manage tasks processed by an electronic device. The electronic device includes a central processing unit (CPU) capable of processing a plurality of the tasks at one time. The task managing method includes the steps of: detecting whether a predetermined status occurs; analyzing a current utilization rate of the CPU; determining whether the current utilization rate is greater than or equal to a predetermined utilization rate; and, if so, reducing the number of tasks being processed by the CPU to keep the CPU working normally. | 12-27-2012 |
20120331474 | REAL TIME SYSTEM TASK CONFIGURATION OPTIMIZATION SYSTEM FOR MULTI-CORE PROCESSORS, AND METHOD AND PROGRAM - Disclosed is an automatic optimization system capable of searching for an allocation with good performance from among a plurality of task allocations that can be scheduled in a system under development configured with a plurality of periodic tasks. A task allocation optimization system for a multi-core processor including a plurality of cores calculates a response time for each of a plurality of tasks that are core-allocation decision targets, and outputs an accumulated value of the calculated response times as an evaluation function value, an index representing the quality of a task allocation. A task allocation that yields a good evaluation function value is searched for based on the evaluation function value. Candidates having good evaluation function values among the plurality of searched task allocation candidates are held. | 12-27-2012 |
20130007753 | ELASTIC SCALING FOR CLOUD-HOSTED BATCH APPLICATIONS - An elastic scaling cloud-hosted batch application system and method that performs automated elastic scaling of the number of compute instances used to process batch applications in a cloud computing environment. The system and method use automated elastic scaling to minimize job completion time and monetary cost of resources. Embodiments of the system and method use a workload-driven approach to estimate a work volume to be performed. This is based on task arrivals and job execution times. Given the work volume estimate, an adaptive controller dynamically adapts the number of compute instances to minimize the cost and completion time. Embodiments of the system and method also mitigate startup delays by computing a work volume in the near future and gradually starting up additional compute instances before they are needed. Embodiments of the system and method also ensure fairness among batch applications and concurrently executing jobs. | 01-03-2013 |
20130007754 | Joint Scheduling of Multiple Processes on a Shared Processor - A multi-process scheduler applies a joint optimization criterion to jointly schedule multiple processes executed on a shared processor. The scheduler determines, for each one of a plurality of processes having a predetermined processing time, at least one of an expected arrival time for input data and required delivery time for output data. The scheduler jointly determines process activation times for the processes based on said arrival/delivery, and the processing times, to meet a predetermined joint optimization criterion for the processes. The processes are scheduled on the shared processor according to the jointly determined activation times to minimize queuing delay. | 01-03-2013 |
20130007755 | METHODS, COMPUTER SYSTEMS, AND PHYSICAL COMPUTER STORAGE MEDIA FOR MANAGING RESOURCES OF A STORAGE SERVER - For managing a storage server while improving overall system performance, a first input/output (I/O) request is received. A first priority level is dynamically assigned to the first I/O request, the first I/O request being associated with a performance level for an application residing on a host in communication with the storage server. A second I/O request of a second priority level is throttled to allow at least a portion of a predetermined amount of resources previously designated for performing the second I/O request to be re-allocated to performing the first I/O request. The second priority level is different from the first priority level. | 01-03-2013 |
20130007756 | Method for Generating an Optimised Hardware/Software Partitioning of Embedded Systems Using a Plurality of Control Appliances - The present invention relates to a computer-implemented method for the automatic synthesis of distributed embedded systems, wherein the tasks to be processed by the system are mapped to a hardware structure having a plurality of processing units such that predefined time limits of the tasks are met, comprising the steps of (a) assigning the tasks to the plurality of processing units, with the following substeps: (aa) assigning a task to a processing unit; (bb) determining the outgoing event densities; (cc) comparing the event density towards the next task with a predefined threshold and assigning the next task to the same processing unit if the event density is below the threshold, or to any other processing unit if the event density is at or above the threshold; (dd) repeating steps (aa) to (cc) until all tasks are assigned to the processing units; (b) checking whether the costs of the given task assignment to the processing units satisfy a predefined solution criterion; (c) repeating steps (a) to (b) with a new task assignment to the processing units until the task assignment fulfils the predefined solution criterion; (d) assigning the tasks to the processes of the operating systems of the processing units assigned to the tasks; (e) checking whether the given task assignment to the processes of the operating systems of the processing units satisfies the predefined time criteria of the tasks; (f) calculating the costs associated with the given task assignment to the processes of the operating systems of the processing units if the predefined time criteria of the tasks are satisfied; (g) repeating steps (a) to (c) with a new task assignment to the processing units, or repeating steps (d) to (f) with a new task assignment to the processes of the operating systems of the assigned processing units, until the costs of the current solution satisfy a predefined solution criterion. | 01-03-2013 |
20130007757 | METHODS, COMPUTER SYSTEMS, AND PHYSICAL COMPUTER STORAGE MEDIA FOR MANAGING RESOURCES OF A STORAGE SERVER - For managing a storage server while improving overall system performance, a first input/output (I/O) request is received. A first priority level is dynamically assigned to the first I/O request, the first I/O request being associated with a performance level for an application residing on a host in communication with the storage server. A second I/O request of a second priority level is throttled to allow at least a portion of a predetermined amount of resources previously designated for performing the second I/O request to be re-allocated to performing the first I/O request. The second priority level is different from the first priority level. | 01-03-2013 |
20130007758 | MULTI-CORE PROCESSOR SYSTEM, THREAD SWITCHING CONTROL METHOD, AND COMPUTER PRODUCT - A multi-core processor system includes a given core configured to switch at a prescribed switching period, threads assigned to the given core; identify whether the given core has switched threads at a period exceeding the prescribed switching period; correct the prescribed switching period into a shorter switching period, based on a difference of an actual switching period at which the threads have been switched by the given core and the prescribed switching period; and set the corrected switching period as the prescribed switching period. | 01-03-2013 |
20130014117 | ENERGY-AWARE COMPUTING ENVIRONMENT SCHEDULER - A method includes receiving a process request, identifying a current state of a device in which the process request is to be executed, calculating a power consumption associated with an execution of the process request, and assigning an urgency for the process request, where the urgency corresponds to a time-variant parameter to indicate a measure of necessity for the execution of the process request. The method further includes determining whether the execution of the process request can be delayed to a future time or not based on the current state, the power consumption, and the urgency, and causing the execution of the process request, or causing a delay of the execution of the process request to the future time, based on a result of the determining. | 01-10-2013 |
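The run-now / defer decision described above can be sketched as a small rule combining device state, power cost, and a time-variant urgency. The thresholds and the linear decision rule are illustrative assumptions; the abstract does not specify how the three inputs are combined.

```python
# Sketch of an energy-aware delay decision: defer expensive work while the
# request's urgency (a time-variant measure of necessity) is still low and
# the device is on battery power.

def should_delay(on_battery, power_cost, urgency, power_budget=100.0):
    """urgency in [0, 1]; it grows as the request's deadline approaches.
    Returns True if execution should be deferred to a future time."""
    if urgency >= 1.0:
        return False  # must run now regardless of cost
    if not on_battery:
        return False  # mains power: no energy reason to defer
    # Tolerate higher power cost as urgency grows.
    return power_cost > urgency * power_budget
```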
20130019247 | Method for using a temporary object handle - A method is provided for using a temporary object handle. The method performed at a resource manager includes: receiving an open temporary handle request from an application for a resource object, wherein a temporary handle can be asynchronously invalidated by the resource manager at any time; and creating a handle control block at the resource manager for the object, including an indication that the handle is a temporary handle. The method then includes: responsive to receiving a request from an application to use a handle which has been invalidated by the resource manager, sending a response to the application that the handle is invalidated. | 01-17-2013 |
20130031556 | DYNAMIC REDUCTION OF STREAM BACKPRESSURE - Techniques are described for eliminating backpressure in a distributed system by changing the rate at which data flows through a processing element. Backpressure occurs when data throughput in a processing element begins to decrease, for example, if new processing elements are added to the operator graph or if the distributed system is required to process more data. Indicators of backpressure (current or future) may be monitored. Once current backpressure or potential backpressure is identified, the operator graph or data rates may be altered to alleviate the backpressure. For example, a processing element may reduce the data rates it sends to processing elements that are downstream in the operator graph, or processing elements and/or data paths may be eliminated. In one embodiment, processing elements and associated data paths may be prioritized so that more important execution paths are maintained. In another embodiment, if a request to add one or more processing elements may cause future backpressure, the request may be refused. | 01-31-2013 |
20130031557 | System To Profile And Optimize User Software In A Managed Run-Time Environment - Method, apparatus, and system for monitoring performance within a processing resource, which may be used to modify user-level software. Some embodiments of the invention pertain to an architecture that allows a user to improve software running on a processing resource on a per-thread basis in real-time and without incurring significant processing overhead. | 01-31-2013 |
20130031558 | Scheduling Mapreduce Jobs in the Presence of Priority Classes - Techniques for scheduling one or more MapReduce jobs in a presence of one or more priority classes are provided. The techniques include obtaining a preferred ordering for one or more MapReduce jobs, wherein the preferred ordering comprises one or more priority classes, prioritizing the one or more priority classes subject to one or more dynamic minimum slot guarantees for each priority class, and iteratively employing a MapReduce scheduler, once per priority class, in priority class order, to optimize performance of the one or more MapReduce jobs. | 01-31-2013 |
20130036423 | SYSTEMS AND METHODS FOR BOUNDING PROCESSING TIMES ON MULTIPLE PROCESSING UNITS - Embodiments of the present invention provide improved systems and methods for processing multiple tasks. In one embodiment a method comprises: selecting a processing unit as a master processing unit from a processing cluster comprising multiple processing units, the master processing unit selected to execute master instruction entities; reading a master instruction entity from memory; scheduling the master instruction entity to execute on the master processing unit; identifying an execution group containing the master instruction entity, the execution group defining a set of related entities; when the execution group contains at least one slave instruction entity, scheduling the at least one slave instruction entity to execute on a processing unit other than the master processing unit during the execution of the master instruction entity; and terminating execution of instruction entities related by the execution group when a master instruction entity is executed that is not a member of the execution group. | 02-07-2013 |
20130042249 | Processing resource allocation within an integrated circuit supporting transaction requests of different priority levels - An integrated circuit | 02-14-2013 |
20130042250 | METHOD AND APPARATUS FOR IMPROVING APPLICATION PROCESSING SPEED IN DIGITAL DEVICE - A method and apparatus for improving application processing speed in a digital device running in an embedded environment where processor performance may not be sufficiently powerful. The method improves application processing speed by detecting an execution request for an application, identifying the group to which the requested application belongs among preset groups with different priorities, scheduling the requested application according to the priority assigned to the identified group, and executing the requested application based on the scheduling result. | 02-14-2013 |
20130042251 | Technique of Scheduling Tasks in a System - A technique for scheduling tasks in a system is provided. A method implementation of this technique comprises the steps of providing at least one association between a task and a range of priorities for the task and using the at least one association for the task scheduling. The task scheduling may be provided by a task scheduling unit having access to a memory unit. | 02-14-2013 |
20130055275 | METHOD AND SYSTEM FOR WIRELESS COMMUNICATION BASEBAND PROCESSING - A method and system for prioritized baseband processing in a wireless communication network is disclosed. Parameters affecting processing times for performing tasks associated with different user equipments are evaluated and the tasks are prioritized based on the evaluation. Each task is performed by the baseband processor at a time that is based on the priority assigned to the task. | 02-28-2013 |
20130055276 | TASK SCHEDULING METHOD AND APPARATUS - A task scheduling method and apparatus are provided to execute periodic tasks together with an aperiodic real-time task in a single system, to perform scheduling while satisfying a precedence relation between periodic tasks, and to perform scheduling so that an aperiodic real-time task may be efficiently executed for a residual time left after scheduling of the periodic tasks. Additionally, a component scheduling method and apparatus in robot software are provided. | 02-28-2013 |
20130061232 | Method And Device For Maintaining Data In A Data Storage System Comprising A Plurality Of Data Storage Nodes - A method and device for maintaining data in a data storage system, comprising a plurality of data storage nodes, the method being employed in a storage node in the data storage system and comprising: monitoring and detecting, conditions in the data storage system that imply the need for replication of data between the nodes in the data storage system; initiating replication processes in case such a condition is detected, wherein the replication processes include sending multicast and unicast requests to other storage nodes, said requests including priority flags, receiving multicast and unicast requests from other storage nodes, wherein the received requests include priority flags, ordering the received requests in different queues depending on their priority flags, and dealing with requests in higher priority queues with higher frequency than requests in lower priority queues. | 03-07-2013 |
20130061233 | EFFICIENT METHOD FOR THE SCHEDULING OF WORK LOADS IN A MULTI-CORE COMPUTING ENVIRONMENT - A computer in which a single queue is used to implement all of the scheduling functionalities of shared computer resources in a multi-core computing environment. The length of the queue is determined uniquely by the relationship between the number of available work units and the number of available processing cores. Each work unit in the queue is assigned an execution token. The value of the execution token represents an amount of computing resources allocated for the work unit. Work units having non-zero execution tokens are processed using the computing resources allocated to each of them. When a running work unit is finished, suspended, or blocked, the value of the execution token of at least one other work unit in the queue is adjusted based on the amount of computing resources released by the running work unit. | 03-07-2013 |
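The single-queue execution-token scheme can be sketched as follows. Granting cores in equal integer shares, and the list-based queue layout, are simplifying assumptions of this sketch.

```python
# Sketch of a single scheduling queue with execution tokens: each work unit
# carries a token whose value is its share of the processing cores; when a
# running unit finishes, its released cores are handed to waiting units
# whose token is zero.

class TokenQueue:
    def __init__(self, cores):
        self.free_cores = cores
        self.queue = []  # list of [name, token]

    def submit(self, name):
        grant = 1 if self.free_cores > 0 else 0
        self.free_cores -= grant
        self.queue.append([name, grant])

    def finish(self, name):
        # Release the finished unit's cores back to the pool.
        for entry in list(self.queue):
            if entry[0] == name:
                self.free_cores += entry[1]
                self.queue.remove(entry)
                break
        # Adjust tokens of waiting work units with the released cores.
        for entry in self.queue:
            if entry[1] == 0 and self.free_cores > 0:
                entry[1] = 1
                self.free_cores -= 1
```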
20130061234 | Media Player Instance Managed Resource Reduction - Techniques and systems are disclosed for managing computing resources available to multiple running instances of a media player program. The methods include monitoring the consumption of computing resources by multiple running instances of a media player program as they render respective media content in a graphical user interface of a computing device. The graphical user interface is associated with an additional program, such as a browser, configured to render additional content, different from the media content, to the graphical user interface. The methods further include instructing the multiple instances to reduce their respective portions of computing-resource consumption upon determining that a requested increase in consumption by the media player program would cause its computing-resource consumption to exceed a first predetermined level. | 03-07-2013 |
20130067484 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, RECORDING MEDIUM AND INFORMATION PROCESSING SYSTEM - There is provided an information processing apparatus including a receiver configured to receive a request to perform processing related to a task, from a first information processing apparatus which functions as a client on a network; a scheduler configured to, when a rank of a priority of the scheduler of the information processing apparatus among information processing apparatuses on the network is a first predetermined rank or higher, assign the task to one or a plurality of second information processing apparatuses which function as nodes on the network; and a transmitter configured to transmit a request to execute processing related to the task assigned to the one or the plurality of second information processing apparatuses. | 03-14-2013 |
20130074087 | METHODS, SYSTEMS, AND PHYSICAL COMPUTER STORAGE MEDIA FOR PROCESSING A PLURALITY OF INPUT/OUTPUT REQUEST JOBS - Methods, systems, and physical computer-readable storage medium for processing a plurality of IO request jobs are provided. The method includes determining whether one or more request jobs are not meeting a QoS target, each job of the one or more request jobs having a corresponding priority, selecting a highest priority job from the one or more request jobs, if one or more request jobs are not meeting the QoS target, determining whether the highest priority job has a corresponding effective rate limit imposed thereon, if so, relaxing the corresponding effective rate limit, and if not, selecting one or more lower priority jobs from the one or more request jobs and tightening a corresponding effective limit on the one or more lower priority jobs from the one or more request jobs in accordance with a delay factor limit. | 03-21-2013 |
20130074088 | SCHEDULING AND MANAGEMENT OF COMPUTE TASKS WITH DIFFERENT EXECUTION PRIORITY LEVELS - One embodiment of the present invention sets forth a technique for dynamically scheduling and managing compute tasks with different execution priority levels. The scheduling circuitry organizes the compute tasks into groups based on priority levels. The compute tasks may then be selected for execution using different scheduling schemes, such as round-robin, priority, and partitioned priority. Each group is maintained as a linked list of pointers to compute tasks that are encoded as queue metadata (QMD) stored in memory. A QMD encapsulates the state needed to execute a compute task. When a task is selected for execution by the scheduling circuitry, the QMD is removed from its group and transferred to a table of active compute tasks. Compute tasks are then selected from the active task table for execution by a streaming multiprocessor. | 03-21-2013 |
20130074089 | METHOD AND APPARATUS FOR SCHEDULING RESOURCES IN SYSTEM ARCHITECTURE - The present invention relates to a method and apparatus for scheduling resources in system architecture. In one embodiment, this can be accomplished by temporarily storing jobs from a plurality of queues, where a weight is set for each queue; forming a set of elements, wherein the set size is based on the weights assigned to each queue; selecting one element from the formed set in an order, wherein the order can be predefined or random; and serving at least one job from the plurality of queues, wherein the job is selected from the queue that corresponds to the selected element of the formed set. | 03-21-2013 |
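The weighted element set described in this abstract can be sketched as below: each queue identifier appears in the set once per unit of weight, so selecting elements (here in random order) serves queues in proportion to their weights. The queue names, weights, and the random selection order are assumptions for the example.

```python
import random
from collections import deque

def build_element_set(weights):
    """Return a list with each queue id repeated `weight` times."""
    return [qid for qid, w in weights.items() for _ in range(w)]

def serve_one(queues, elements, rng):
    """Pick elements in a random order and serve one job from the first
    selected queue that is non-empty."""
    for qid in rng.sample(elements, len(elements)):
        if queues[qid]:
            return queues[qid].popleft()
    return None

weights = {"a": 2, "b": 1}            # queue "a" is served twice as often
elements = build_element_set(weights)  # -> one "b" entry, two "a" entries
queues = {"a": deque(["a1", "a2"]), "b": deque(["b1"])}
rng = random.Random(0)
served = [serve_one(queues, elements, rng) for _ in range(3)]
```

Over many selections, the share of service each queue receives converges to its weight divided by the total weight.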
20130081039 | Resource allocation using entitlements - A data handling apparatus is adapted to facilitate resource allocation, allocating resources upon which objects execute. A data handling apparatus can comprise resource allocation logic and a scheduler. The resource allocation logic can be operable to dynamically set entitlement values for a plurality of resources comprising physical/logical resources and operational resources. The entitlement values are specified as predetermined rights wherein a process of a plurality of processes is entitled to a predetermined percentage of operational resources. The scheduler can be operable to monitor the entitlement values and schedule the processes based on priority of the entitlement values. | 03-28-2013 |
20130081040 | MANUFACTURING PROCESS PRIORITIZATION - A manufacturing process prioritization system. In one embodiment, the system includes at least one computing device adapted to prioritize a very large scale integration (VLSI) process, by performing actions including: querying a database for task-based data associated with a set of manufacturing tasks; applying at least one rule to the task-based data to prioritize a first one of the set of manufacturing tasks over a second one of the set of manufacturing tasks; and providing a set of processing instructions for processing a manufactured product according to the prioritization. | 03-28-2013 |
20130081041 | Circuit arrangement for execution planning in a data processing system - A circuit arrangement and method for a data processing system for executing a plurality of tasks with a central processing unit having a processing capacity allocated to the processing unit; the circuit arrangement being configured to allocate the processing unit to the specific tasks in a time-staggered manner for processing, so that the tasks are processed in an order to be selected and tasks not having a current processing request are skipped over in the order during the processing; the circuit arrangement including a prioritization order control unit to determine the order in which the tasks are executed; and in response to each selection of a task for processing, the order of the tasks being redetermined and the selection being controlled so that for a number N of tasks, a maximum of N time units elapse until an active task is once more allocated processing capacity by the processing unit. | 03-28-2013 |
20130081042 | DYNAMIC REDUCTION OF STREAM BACKPRESSURE - Techniques are described for eliminating backpressure in a distributed system by changing the rate data flows through a processing element. Backpressure occurs when data throughput in a processing element begins to decrease, for example, if new processing elements are added to the operator graph or if the distributed system is required to process more data. Indicators of backpressure (current or future) may be monitored. Once current backpressure or potential backpressure is identified, the operator graph or data rates may be altered to alleviate the backpressure. For example, a processing element may reduce the data rates it sends to processing elements that are downstream in the operator graph, or processing elements and/or data paths may be eliminated. In one embodiment, processing elements and associated data paths may be prioritized so that more important execution paths are maintained. | 03-28-2013 |
20130091505 | Priority Level Arbitration Method and Device - The present invention discloses a method and device for arbitrating priority levels. The method comprises: setting a plurality of first stage polling arbiters and a second stage priority level arbiter respectively, wherein the number of the first stage polling arbiters is equal to the number of priority levels contained in a plurality of source ends; receiving task request signals for requesting tasks from the plurality of source ends and assigning request tasks with the same priority level to the same first stage polling arbiter; each of the first stage polling arbiters polling the received request tasks with the same priority level respectively to obtain one request task and transmitting the request task to the second stage priority level arbiter; and the second stage priority level arbiter receiving the plurality of request tasks and outputting an output result of request tasks with the highest priority level to a destination end. | 04-11-2013 |
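The two-stage arbitration scheme in the preceding abstract (one round-robin arbiter per priority level feeding a second-stage priority arbiter that grants the highest level) can be sketched as follows. The priority levels and request names are assumptions for the example.

```python
from collections import deque

class RoundRobinArbiter:
    """First stage: polls requests of a single priority level in order."""
    def __init__(self):
        self.requests = deque()

    def add(self, request):
        self.requests.append(request)

    def poll(self):
        """Return the next pending request, or None if idle."""
        return self.requests.popleft() if self.requests else None

def arbitrate(arbiters):
    """Second stage: grant the candidate from the highest priority level."""
    for level in sorted(arbiters, reverse=True):  # highest level first
        candidate = arbiters[level].poll()
        if candidate is not None:
            return candidate
    return None

arbiters = {0: RoundRobinArbiter(), 1: RoundRobinArbiter()}
arbiters[0].add("low-req")
arbiters[1].add("high-req-1")
arbiters[1].add("high-req-2")
grants = [arbitrate(arbiters) for _ in range(3)]
```

Requests at the same level are served fairly by the first stage, while the second stage guarantees that a higher level always wins when it has pending work.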
20130091506 | MONITORING PERFORMANCE ON WORKLOAD SCHEDULING SYSTEMS - The present invention relates to the field of enterprise network computing. In particular, it relates to monitoring workload of a workload scheduler. Information defining a plurality of test jobs of low priority is received. The test jobs have respective launch times, and are launched for execution in a data processing system in accordance with said launch times and said low execution priority. The number of test jobs executed within a pre-defined analysis time range is determined. A performance decrease warning is issued if the number of executed test jobs is lower than a predetermined threshold number. A workload scheduler discards launching of jobs having a low priority when estimating that a volume of jobs submitted with higher priority is sufficient to keep said scheduling system busy. | 04-11-2013 |
20130104138 | SYSTEM AND METHOD FOR TOPOLOGY-AWARE JOB SCHEDULING AND BACKFILLING IN AN HPC ENVIRONMENT - A method for job management in an HPC environment includes determining an unallocated subset from a plurality of HPC nodes, with each of the unallocated HPC nodes comprising an integrated fabric. An HPC job is selected from a job queue and executed using at least a portion of the unallocated subset of nodes. | 04-25-2013 |
20130104139 | System for Managing Data Collection Processes - A system and process for managing data collection processes is disclosed. An apparatus that incorporates teachings of the present disclosure can include a data collection system having a controller element that assigns to each of the processes a query interval according to a priority level of the data collection process for requesting use of processing resources, receives one or more requests from the processes, once per respective query interval, for use of at least a portion of available processing resources, and releases at least a portion of the available processing resources to a requesting one of the processes when the use of the available processing resources exceeds a utilization threshold. Additional embodiments are disclosed. | 04-25-2013 |
20130111488 | TASK ASSIGNMENT USING RANKING SUPPORT VECTOR MACHINES | 05-02-2013 |
20130111489 | Entitlement vector for managing resource allocation | 05-02-2013 |
20130117755 | APPARATUSES, SYSTEMS, AND METHODS FOR DISTRIBUTED WORKLOAD SERIALIZATION - Apparatuses, systems, methods, and computer program products are provided for processing workload requests in a distributed computing system. In general, a cooperative workload serialization system is provided that includes a Message Queue that is configured to receive and hold workload requests from a number of requestors and a Request Manager that is in communication with the Message Queue and is configured to direct the processing of the workload requests. The system may include a Culler in communication with the Request Manager, where the Culler is configured to monitor the validity of the workload requests. The Request Manager, in turn, may be configured to remove an indicated workload request from the Message Queue based on information from the Culler that the indicated workload request is not valid. | 05-09-2013 |
20130117756 | TASK SCHEDULING METHOD FOR REAL TIME OPERATING SYSTEM - The present invention relates to a task scheduling method for a real time operating system (RTOS) mounted to an embedded system, and more particularly, to a task scheduling method which allows a programmer to make a CPU reservation for a task. The task scheduling method for a real time operating system includes: at a scheduling time point, determining whether or not the highest priority of tasks present in a ready queue is a predetermined value K; if the highest priority is determined to be K, applying a reservation based scheduler to perform a scheduling; and if the highest priority is determined not to be K, applying a priority based scheduler to perform a scheduling; the tasks present in the ready queue whose priority is K contain idle CPU reservation allocation information received as a factor when the tasks with priority K are created. | 05-09-2013 |
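The dispatch rule in this abstract (use a reservation-based scheduler when the highest ready priority equals a reserved value K, otherwise fall back to ordinary priority scheduling) can be sketched as below. The value of K, the task fields, the convention that larger numbers mean higher priority, and both scheduler bodies are simplified assumptions for the example.

```python
K = 5  # assumed reserved priority value

def priority_scheduler(ready):
    """Ordinary case: pick the ready task with the highest priority value."""
    return max(ready, key=lambda t: t["priority"])

def reservation_scheduler(ready):
    """Reservation case: among priority-K tasks, pick the one with the
    largest CPU reservation (assumed tie-break rule)."""
    reserved = [t for t in ready if t["priority"] == K]
    return max(reserved, key=lambda t: t["cpu_reservation"])

def schedule(ready):
    highest = max(t["priority"] for t in ready)
    if highest == K:
        return reservation_scheduler(ready)
    return priority_scheduler(ready)

ready = [
    {"name": "t1", "priority": 5, "cpu_reservation": 30},
    {"name": "t2", "priority": 5, "cpu_reservation": 60},
    {"name": "t3", "priority": 2, "cpu_reservation": 0},
]
picked = schedule(ready)
```

Only the single check on the highest ready priority decides which scheduler runs, so the reservation path adds no cost when no priority-K task is ready.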
20130117757 | METHOD AND APPARATUS FOR SCHEDULING APPLICATION PROGRAMS - A method for scheduling an application includes receiving an execution command of at least one application; and receiving task characteristic information of I/O-BOUND and CPU-BOUND for the at least one application. Further, the method for scheduling the application includes performing scheduling for the at least one application by applying the task characteristic information. | 05-09-2013 |
20130132963 | Superseding of Recovery Actions Based on Aggregation of Requests for Automated Sequencing and Cancellation - Command sequencing may be provided. Upon receiving a plurality of action requests, an ordered queue comprising at least some of the plurality of actions may be created. The actions may then be performed in the queue's order. | 05-23-2013 |
20130132964 | ELECTRONIC DEVICE AND METHOD OF OPERATING THE SAME - An electronic device and a method of operating the same are provided. More particularly, in an electronic device and a method of operating the same, by recognizing a keyword of contents, reflecting the contents, and executing an application corresponding to the recognized keyword, an application execution environment corresponding to a user intention is provided. | 05-23-2013 |
20130132965 | STATUS TOOL TO EXPOSE METADATA READ AND WRITE QUEUES - A method to expose status information is provided. The status information is associated with metadata extracted from multimedia files and stored in a metadata database. The metadata information that is extracted from the multimedia files is stored in a read queue to allow a background thread to process the metadata and populate the metadata database. Additionally, the metadata database may be updated to include user-defined metadata, which is written back to the multimedia files. The user-defined metadata is included in a write queue and is written to the multimedia files associated with the user-defined metadata. The status of the read and write queues is exposed to a user through a graphical user interface. The status may include the list of multimedia files included in the read and write queues, the priorities of each multimedia file, and the number of remaining multimedia files. | 05-23-2013 |
20130152097 | Resource Health Based Scheduling of Workload Tasks - A computer-implemented method for allocating threads includes: receiving a registration of a workload, the registration including a workload classification and a workload priority; | 06-13-2013 |
20130152098 | TASK PRIORITY BOOST MANAGEMENT - According to one aspect of the present disclosure, a method and technique for task priority boost management is disclosed. The method includes: responsive to a thread executing in user mode an instruction to boost a priority of the thread, accessing a boost register, the boost register accessible in kernel mode; determining a value of the boost register; and responsive to determining that the boost register holds a non-zero value, boosting the priority of the thread. | 06-13-2013 |
20130152099 | DYNAMICALLY CONFIGURABLE HARDWARE QUEUES FOR DISPATCHING JOBS TO A PLURALITY OF HARDWARE ACCELERATION ENGINES - A computer system having a plurality of processing resources, including a sub-system for scheduling and dispatching processing jobs to a plurality of hardware accelerators, the subsystem further comprising a job requestor, for requesting jobs having bounded and varying latencies to be executed on the hardware accelerators; a queue controller to manage processing job requests directed to a plurality of hardware accelerators; and multiple hardware queues for dispatching jobs to the plurality of hardware acceleration engines, each queue having a dedicated head of queue entry, dynamically sharing a pool of queue entries, having configurable queue depth limits, and means for removing one or more jobs across all queues. | 06-13-2013 |
20130152100 | METHOD TO GUARANTEE REAL TIME PROCESSING OF SOFT REAL-TIME OPERATING SYSTEM - A method to guarantee real time processing of a soft real time operating system in a multicore platform by executing a thread while varying a core in which the thread is executed and apparatus are provided. The method includes assigning priority to a task thread, executing the task thread, determining a core in which the task thread is to be executed, and if the core is determined, transferring the task thread to the determined core. | 06-13-2013 |
20130160017 | Software Mechanisms for Managing Task Scheduling on an Accelerated Processing Device (APD) - Embodiments described herein provide a method for managing task scheduling on an accelerated processing device (APD). The method includes executing a first task within the accelerated processing device (APD), monitoring for an interruption of the execution of the first task, and switching to a second task when an interruption is detected. | 06-20-2013 |
20130160018 | METHOD AND SYSTEM FOR THE DYNAMIC ALLOCATION OF RESOURCES BASED ON A MULTI-PHASE NEGOTIATION MECHANISM - A system and method for the dynamic allocation of resources based on a multi-phase negotiation mechanism. A resource allocation decision can be made based on an index value computed by a selection index function. A negotiation process can be performed based on a schedule, a number of resources, and a price of resources. A user requesting a resource for a low priority task can negotiate based on the schedule, the user demanding the resource for a medium priority task can negotiate based on the schedule and/or the number of resources, and finally the user requesting the resource for a high priority job can successfully negotiate based on per unit resource price. The multi-phase negotiation mechanism motivates the users to be cooperative among themselves and improves a cooperative behavior coefficient and an overall user satisfaction rate. | 06-20-2013 |
20130174172 | DATACENTER DATA TRANSFER MANAGEMENT SYSTEM AND METHOD - An exemplary data transfer manager includes a datacenter configured to communicate over at least one link and a scheduler that is configured to schedule a plurality of jobs for communicating data from the datacenter. The scheduler determines a minimum bandwidth requirement of each job and determines a maximum bandwidth limit of each job. The scheduler determines a flex parameter of each job. The flex parameter indicates how much a data transfer rate can vary between adjacent data transfer periods for the job. | 07-04-2013 |
20130174173 | DATA PROCESSOR AND DATA PROCESSING METHOD - A data processing method has a device control thread for each peripheral device capable of an independent operation, a CPU processing thread for each data processing that is performed by a CPU, and a control thread equipped with a processing part for constructing an application. The control thread checks the output from the thread related to each processing part, gives higher priority to the processing part for which output data of the preceding processing part of the application exists and that is near termination, and instructs execution of each device control thread and CPU processing thread, and data input/output. Each device control thread and CPU processing thread processes the data according to the instructions, and sends a processing result and a notification to the control thread. | 07-04-2013 |
20130185728 | SCHEDULING AND EXECUTION OF COMPUTE TASKS - One embodiment of the present invention sets forth a technique for assigning a compute task to a first processor included in a plurality of processors. The technique involves analyzing each compute task in a plurality of compute tasks to identify one or more compute tasks that are eligible for assignment to the first processor, where each compute task is listed in a first table and is associated with a priority value and an allocation order that indicates the relative time at which the compute task was added to the first table. The technique further involves selecting a first compute task from the identified one or more compute tasks based on at least one of the priority value and the allocation order, and assigning the first compute task to the first processor for execution. | 07-18-2013 |
20130191836 | SYSTEM AND METHOD FOR DYNAMICALLY COORDINATING TASKS, SCHEDULE PLANNING, AND WORKLOAD MANAGEMENT - Systems and methods for dynamically coordinating a plurality of tasks are provided. Such tasks include a priority rank and at least one of a target date, a classification, an associated application, an associated action, and an associated priority rank adjustment parameter. A particular task can be processed relative to other tasks to generate a first scheduling scheme that defines a prioritized arrangement of the tasks. Based on the priority rank adjustment parameter(s), further scheduling schemes can be generated in lieu of the first scheduling scheme, thereby accounting for the respective priority rank adjustment parameters by influencing the arrangement of the tasks relative to one another. Additionally, based on a status notification, the tasks can be processed to generate a scheduling scheme that accounts for the status notification by influencing the arrangement of the first task and the stored tasks relative to one another. | 07-25-2013 |
20130205299 | APPARATUS AND METHOD FOR SCHEDULING KERNEL EXECUTION ORDER - A method and apparatus for guaranteeing real-time operation of an application program that performs data processing and particular functions in a computer environment using a micro architecture are provided. The apparatus estimates execution times of kernels based on an effective progress index (EPI) of each of the kernels, and determines an execution order of the kernels based on the estimated execution times of the kernels and priority of the kernels. | 08-08-2013 |
20130212591 | TASK SCHEDULING METHOD AND APPARATUS - An apparatus schedules execution of a plurality of tasks by a processor. Each task has an associated periodicity and an associated priority based upon the associated periodicity. The processor executes each of the plurality of tasks periodically according to the associated periodicity of the task. A scheduler, at each of a series of scheduling time points updates the priorities of the plurality of tasks and schedules the tasks that need to be executed in accordance with their priorities. The scheduler identifies an unexecuted task which, at a preceding scheduling time point, was scheduled for execution but which, since that preceding scheduling time point, has not been executed. The scheduler sets the priority of the unexecuted task as greater than the priority of other tasks that have the same periodicity as the unexecuted task and that are not themselves unexecuted tasks. | 08-15-2013 |
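The priority update this abstract describes (a task that was scheduled at the previous scheduling point but never ran is raised above every other task with the same periodicity) can be sketched as follows. The task field names and the "+1 above the highest peer" boost are assumptions for the example.

```python
def boost_unexecuted(tasks):
    """Raise each unexecuted-but-previously-scheduled task strictly above
    its same-periodicity peers."""
    for task in tasks:
        if task["scheduled_before"] and not task["executed"]:
            peers = [t["priority"] for t in tasks
                     if t["periodicity"] == task["periodicity"] and t is not task]
            if peers:
                task["priority"] = max(peers) + 1  # strictly greater than peers
    return tasks

tasks = [
    {"name": "a", "periodicity": 10, "priority": 3,
     "scheduled_before": True, "executed": False},   # missed last round
    {"name": "b", "periodicity": 10, "priority": 7,
     "scheduled_before": True, "executed": True},
    {"name": "c", "periodicity": 20, "priority": 9,
     "scheduled_before": False, "executed": False},  # never scheduled yet
]
boost_unexecuted(tasks)
```

Task "a" jumps ahead of its peer "b", while "c" is untouched because it was never scheduled and therefore never missed a slot.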
20130212592 | SYSTEM AND METHOD FOR TIME-AWARE RUN-TIME TO GUARANTEE TIMELINESS IN COMPONENT-ORIENTED DISTRIBUTED SYSTEMS - A method and system for achieving time-awareness in the highly available, fault-tolerant execution of components in a distributed computing system, without requiring the writer of these components to explicitly write code (such as entity beans or database transactions) to make component state persistent. It is achieved by converting the intrinsically non-deterministic behavior of the distributed system to a deterministic behavior, thus enabling state recovery to be achieved by advantageously efficient checkpoint-replay techniques. The system is deterministic by repeating the execution of the receiving component by processing the messages in the same order as their associated timestamps and time-aware by allowing adjustment of message execution based on time. | 08-15-2013 |
20130219401 | PRIORITIZING JOBS WITHIN A CLOUD COMPUTING ENVIRONMENT - Embodiments of the present invention provide an approach to prioritize jobs (e.g., within a cloud computing environment) so as to maximize positive financial impacts (or to minimize negative financial impacts) for cloud service providers, while not exceeding processing capacity or failing to meet terms of applicable Service Level Agreements (SLAs). Specifically, under the present invention a respective income (i.e., a cost to the customer), a processing need, and set of SLA terms (e.g., predetermined priorities, time constraints, etc.) will be determined for each of a plurality of jobs to be performed. The jobs will then be prioritized in a way that: maximizes cumulative/collective income; stays within the total processing capacity of the cloud computing environment; and meets the SLA terms. | 08-22-2013 |
20130227581 | Technique for Providing Task Priority Related Information Intended for Task Scheduling in a System - A technique for providing task priority related information intended for task scheduling in a system | 08-29-2013 |
20130227582 | Prediction Based Priority Scheduling - Systems and methods are provided that schedule task requests within a computing system based upon the history of task requests. The history of task requests can be represented by a historical log that monitors the receipt of high priority task request submissions over time. This historical log in combination with other user defined scheduling rules is used to schedule the task requests. Task requests in the computer system are maintained in a list that can be divided into a hierarchy of queues differentiated by the level of priority associated with the task requests contained within that queue. The user-defined scheduling rules give scheduling priority to the higher priority task requests, and the historical log is used to predict subsequent submissions of high priority task requests so that lower priority task requests that would interfere with the higher priority task requests will be delayed or will not be scheduled for processing. | 08-29-2013 |
20130232496 | GLOBAL AVOIDANCE OF HANG STATES IN MULTI-NODE COMPUTING SYSTEM - Systems, methods, and other embodiments associated with avoiding resource blockages and hang states are described. One example computer-implemented method for a computing system includes determining that a first process is waiting for a resource and is in a blocked state. The resource that the first process is waiting for is identified. A blocking process that is holding the resource is then identified. A priority of the blocking process is compared with a priority of the first process. If the priority of the blocking process is lower than the priority of the first process, the priority of the blocking process is increased. In this manner the blocking process can be scheduled for execution sooner and thus release the resource. | 09-05-2013 |
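The rule in this abstract is essentially priority inheritance: if the holder of a resource has lower priority than a process blocked on that resource, the holder's priority is raised so it runs sooner and releases the resource. A minimal sketch, with the process fields and the holder lookup table assumed for the example:

```python
def resolve_block(blocked, holders):
    """If the process holding the resource that `blocked` waits on has a
    lower priority, raise it to the waiter's priority."""
    resource = blocked["waiting_on"]
    holder = holders[resource]
    if holder["priority"] < blocked["priority"]:
        holder["priority"] = blocked["priority"]  # inherit waiter's priority
    return holder

holders = {"lock-A": {"name": "p2", "priority": 1}}
blocked = {"name": "p1", "priority": 5, "waiting_on": "lock-A"}
holder = resolve_block(blocked, holders)
```

After the boost, the scheduler picks "p2" no later than it would have picked "p1", so the resource is released promptly and the original priorities can be restored.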
20130239113 | INFORMATION PROCESSING APPARATUS, COMPUTER PRODUCT, AND INFORMATION PROCESSING METHOD - An information processing apparatus includes a memory unit having numbers each specifying an output order and a data memory area corresponding to each number; a setting unit that sets in each data memory area, correlating an execution order of a thread with a number specifying the output order, a storage location for a value of a common variable of the thread among threads receiving write requests for the value of the common variable; a first storing unit that stores to the data memory area set for each thread, the value of the common variable for the thread of the execution order corresponding to the number specifying the output order of the data memory area; and a second storing unit that upon completion of all the threads and in the output order, reads out each value of the common variable stored to the data memory areas and overwrites a specific storage location. | 09-12-2013 |
20130254775 | EFFICIENT LOCK HAND-OFF IN A SYMMETRIC MULTIPROCESSING SYSTEM - Provided are techniques for providing a first lock, corresponding to a resource, in a memory that is global to a plurality of processors; spinning, by a first thread running on a first processor of the processors, at a low hardware-thread priority on the first lock such that the first processor does not yield processor cycles to a hypervisor; spinning, by a second thread running on a second processor, on a second lock in a memory local to the second processor such that the second processor is configured to yield processor cycles to the hypervisor; acquiring the lock and the corresponding resource by the first thread; and, in response to the acquiring of the lock by the first thread, spinning, by the second thread, at the low hardware-thread priority on the first lock rather than the second lock such that the second processor does not yield processor cycles to the hypervisor. | 09-26-2013 |
20130263147 | Systems and Methods for Speculative Read Based Data Processing Priority - The present inventions are related to systems and methods for data processing, and more particularly to systems and methods for priority based data processing. | 10-03-2013 |
20130275986 | Data Processing System with Out of Order Transfer - Various embodiments of the present inventions provide systems and methods for data processing with out of order transfer. For example, a data processing system is disclosed that includes a data processor operable to process input blocks of data and to yield corresponding processed output blocks of data, wherein the processed output blocks of data are output from the data processor in an order in which their processing is completed, and a scheduler operable to receive processing priority requests for the input blocks of data and to assign processing resources in the data processor according to the priority requests. | 10-17-2013 |
20130275987 | METHODS AND SYSTEMS FOR QUEUING EVENTS - This disclosure relates to methods and systems for queuing events. In one aspect, a method is disclosed that receives or creates an event and inserts the event into a queue. The method determines at least one property of the event and associates a priority with the event based on the property. The method then processes the event in accordance with its priority. | 10-17-2013 |
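The pattern in this abstract (derive a priority from a property of the event, then process events in priority order) can be sketched with a heap-based queue. The type-to-priority table and the event fields are assumptions for the example.

```python
import heapq

# Illustrative mapping from an event property (its type) to a priority;
# lower numbers are processed sooner.
PRIORITY_BY_TYPE = {"error": 0, "warning": 1, "info": 2}

def enqueue(queue, counter, event):
    """Insert the event with a priority derived from its type."""
    priority = PRIORITY_BY_TYPE.get(event["type"], 3)
    heapq.heappush(queue, (priority, counter, event))  # counter keeps FIFO ties

def process_all(queue):
    """Pop events in priority order and return their names."""
    order = []
    while queue:
        _, _, event = heapq.heappop(queue)
        order.append(event["name"])
    return order

queue = []
for i, ev in enumerate([{"name": "e1", "type": "info"},
                        {"name": "e2", "type": "error"},
                        {"name": "e3", "type": "warning"}]):
    enqueue(queue, i, ev)
processed = process_all(queue)
```

The insertion counter in each heap tuple makes ties between equal-priority events resolve in arrival order rather than by comparing the event dictionaries.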
20130275988 | HARDWARE MULTI-THREADING CO-SCHEDULING FOR PARALLEL PROCESSING SYSTEMS - A method, information processing system, and computer program product are provided for managing operating system interference on applications in a parallel processing system. A mapping of hardware multi-threading threads to at least one processing core is determined, and first and second sets of logical processors of the at least one processing core are determined. The first set includes at least one of the logical processors of the at least one processing core, and the second set includes at least one of a remainder of the logical processors of the at least one processing core. A processor schedules application tasks only on the logical processors of the first set of logical processors of the at least one processing core. Operating system interference events are scheduled only on the logical processors of the second set of logical processors of the at least one processing core. | 10-17-2013 |
20130283284 | OPERATION MANAGEMENT APPARATUS, OPERATION MANAGEMENT METHOD AND OPERATION MANAGEMENT PROGRAM - An operation management apparatus according to an exemplary aspect of the invention includes: an operation memory unit that stores a related operation and a priority corresponding to each of a plurality of control operations, the related operation being a different control operation from the corresponding control operations; an arithmetic processing unit that selects a target operation which is either one of the plurality of the control operations from the operation memory unit based on a predetermined condition, increases the priority of the target operation and increases the priority of the related operation which corresponds to the target operation; and a control operation derivation unit that selects a designated number of the control operations in the priority order from the operation memory unit and displays the selected control operations. | 10-24-2013 |
20130283285 | Adjusting Thread Priority to Improve Throughput between Peer-to-peer (P2P) Devices - In some implementations, a processor is configured to receive a current pending packet number representing a number of packets of data that currently remain to be transferred between two devices, determine whether to adjust a priority of a thread based on the current pending packet number, a previous pending packet number, and a priority pending packet number, and adjust or maintain the priority of the thread based on determining whether to adjust the priority of the thread. The thread is to be executed by the processor to perform a transfer of the packets of data between the two devices, the previous pending packet number represents a number of packets of data that previously remained to be transferred between the two devices, and the priority pending packet number corresponds to the current priority of the thread. | 10-24-2013 |
20130290972 | WORKLOAD MANAGER FOR MAPREDUCE ENVIRONMENTS - A method of managing workloads in MapReduce environments using a system. The system receives job profiles of respective jobs, wherein each job profile describes characteristics of map and reduce tasks. The map tasks produce intermediate results based on the input data, and the reduce tasks produce an output based on the intermediate results. The jobs are ordered according to performance goals into a hierarchy. A minimum quantity of resources is allocated to each job to achieve its performance goal. A plurality of spare resources are allocated to at least one of the jobs. A new job profile having a new performance goal is then received. Next, it is determined whether the new performance goal can be met without deallocating spare resources. Spare resources are re-allocated from the other jobs to the new job to achieve its performance goal without compromising the performance goals of the other jobs. | 10-31-2013
20130290973 | PROGRAMMING MODEL FOR TRANSPARENT PARALLELIZATION OF COMBINATORIAL OPTIMIZATION - Each of a plurality of subtasks, configured to explore and assess alternative solutions for a combinatorial optimization problem, is represented by a reentrant finite state machine. Each of a plurality of threads is configured to perform operations comprising a subtask until either completion or a blocked state is reached and, in the event a blocked state is reached, to move on to performing another subtask that is not currently in a blocked state. | 10-31-2013
20130290974 | WORKFLOW CONTROL OF RESERVATIONS AND REGULAR JOBS USING A FLEXIBLE JOB SCHEDULER - A scheduler receives flexible reservation requests for scheduling in a computing environment comprising consumable resources. The flexible reservation request specifies a duration and at least one required resource. The consumable resources comprise machine resources and floating resources. The scheduler creates a flexible job for the flexible reservation request and places the flexible job in a prioritized job queue for scheduling, wherein the flexible job is prioritized relative to at least one regular job in the prioritized job queue. The scheduler adds a reservation set to a waiting state for the flexible reservation request. The scheduler, responsive to detecting the flexible job positioned in the prioritized job queue for scheduling next and detecting a selection of consumable resources available to match the at least one required resource for the duration, transfers the selection of consumable resources to the reservation and sets the reservation to an active state. | 10-31-2013
20130290975 | METHOD AND DEVICE FOR DETERMINING PARALLELISM OF TASKS OF A PROGRAM - A method and device for determining parallelism of tasks of a program comprises generating a task data structure to track the tasks and assigning a node of the task data structure to each executing task. Each node includes a task identification number and a wait number. The task identification number uniquely identifies the corresponding task from other currently executing tasks and the wait number corresponds to the task identification number of a node corresponding to the last descendant task of the corresponding task that was executed prior to a wait command. The parallelism of the tasks is determined by comparing the relationship between the tasks. | 10-31-2013 |
20130305252 | METHOD AND SYSTEM FOR HETEROGENEOUS FILTERING FRAMEWORK FOR SHARED MEMORY DATA ACCESS HAZARD REPORTS - A system and method for detecting, filtering, prioritizing and reporting shared memory hazards are disclosed. The method includes, for a unit of hardware operating on a block of threads, mapping a plurality of shared memory locations assigned to the unit to a tracking table. The tracking table comprises initialization information for each shared memory location. The method also includes, for an instruction of a program within a barrier region, identifying a potential conflict by identifying a second access to a location in shared memory within a block of threads executed by the hardware unit. First information associated with a first access and second information associated with the second access to the location is determined. Filter criteria are applied to the first and second information to determine whether the instruction causes a reportable hazard. The instruction is reported when it causes the reportable hazard. | 11-14-2013
20130305253 | LOCK CONTROL IN MULTIPLE PROCESSOR SYSTEMS - A computer-implemented method executes a plurality of tasks, each task comprising threads and each task being assigned a priority from 1 to a whole number greater than 1, with each thread of a task assigned the same priority as the task and each thread being executed by a processor. The method also provides locking and unlocking mechanisms arranged to lock and unlock data stored by a storage device in response to a request from a thread. A method of operating the system comprises maintaining a queue of threads that require access to locked data, maintaining an array comprising, for each priority, duration and/or throughput information for threads of that priority, and setting a wait flag for a priority in the array according to a predefined algorithm calculated from the duration and/or throughput information in the array. | 11-14-2013
20130305254 | CONTROLLING 32/64-BIT PARALLEL THREAD EXECUTION WITHIN A MICROSOFT OPERATING SYSTEM UTILITY PROGRAM - A method of programming operating system (O/S) utility C and C++ programs within the Microsoft professional development 32/64-bit parallel threads environment includes providing a computer unit, which can be a 32/64-bit Microsoft PC O/S or a 32/64-bit Microsoft Server O/S, and a Microsoft development tool, which is the Microsoft Visual Studio Development Environment for C and C++ for either the 32-bit O/S or the 64-bit O/S. | 11-14-2013
20130305255 | CONTROLLING PRIORITY LEVELS OF PENDING THREADS AWAITING PROCESSING - A data processing apparatus comprises processing circuitry arranged to process processing threads using resources accessible to the processing circuitry. A pipeline is provided for handling at least two pending threads awaiting processing by the processing circuitry. The pipeline includes at least one resource-requesting pipeline stage for requesting access to resources for the pending threads. A priority controller controls priority levels of the pending threads. The priority levels define a priority with which pending threads are granted access to resources. When a pending thread reaches a final pipeline stage, if the requested resources are not yet available then the priority level of that thread is selectively raised and the thread is returned to a first pipeline stage of the pipeline. If the requested resources are available then the thread is forwarded from the pipeline. | 11-14-2013
20130311998 | SYSTEM AND METHOD FOR TOPOLOGY-AWARE JOB SCHEDULING AND BACKFILLING IN AN HPC ENVIRONMENT - A method for job management in an HPC environment includes determining an unallocated subset from a plurality of HPC nodes, with each of the unallocated HPC nodes comprising an integrated fabric. An HPC job is selected from a job queue and executed using at least a portion of the unallocated subset of nodes. | 11-21-2013 |
20130318533 | METHODS AND SYSTEMS FOR PRESENTING AND ASSIGNING TASKS - Techniques for providing and controlling access to Tasks for prioritized resolution by a plurality of agents, which include receiving the Tasks, each Task having one or more associated characteristics; obtaining an identification of a first agent included in the plurality of agents; displaying to the first agent a first user interface for issuing a request for automated selection of a next Task for action by the first agent; selecting, in real time and in response to the request, the next Task for action by the first agent. The selection is based on a prioritization of the Tasks, wherein the prioritization is based on the identification of the first agent, and the selection does not include a Task displayed to another agent at the time of selection; and displaying to the first agent a second user interface allowing the first agent to take action on the selected next Task. | 11-28-2013 |
20130326528 | RESOURCE STARVATION MANAGEMENT IN A COMPUTER SYSTEM - Provided is a method of managing resource starvation in a computer system. A highest priority task is created in a computer system. The highest priority task identifies a resource starvation causing task in the computer system and reduces current priority of the starvation causing task. | 12-05-2013 |
20130326529 | Optimizing the utilization of computer system's resources - The present invention optimizes the utilization of computer system resources by considering predefined performance targets of multithreaded applications using the resources. The performance and utilization information for a set of multithreaded applications is provided. Using the performance and utilization information, the invention determines overutilized resources. Using the performance information, the invention also identifies threads and corresponding applications using an overutilized resource. The priority of the identified threads using said overutilized resource is adjusted to maximize the number of applications meeting their performance targets. The adjustments of priorities are executed via a channel that provides the performance and utilization information. | 12-05-2013
20130326530 | METHOD FOR PACKET FLOW CONTROL USING CREDIT PARAMETERS WITH A PLURALITY OF LIMITS - The present invention relates to a processor and a method for processing a data packet, the method including steps of decreasing a value of a first credit parameter when the data packet is admitted to a processor at least partly based on the value of the first credit parameter and a first limit of the first credit parameter, and increasing the value of the first credit parameter, in dependence on a data storage level in a buffer in which the data packet is stored before being admitted to the processor, the value of the first credit parameter not being increased, so as to become larger than a second limit of the first credit parameter, when the buffer is empty. | 12-05-2013 |
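The credit-parameter admission scheme in the entry above can be sketched as follows. The concrete replenish step, the per-packet cost, and the assumption that the first limit exceeds the second are illustrative choices, not details from the patent:

```python
class CreditGate:
    def __init__(self, first_limit, second_limit, refill):
        self.first_limit = first_limit    # normal credit cap
        self.second_limit = second_limit  # hard cap while buffer is empty
        self.refill = refill              # credits added per replenish step
        self.credit = first_limit

    def try_admit(self, cost=1):
        # Admission is based at least partly on the credit value: decrease
        # the credit when the packet is admitted to the processor.
        if self.credit >= cost:
            self.credit -= cost
            return True
        return False

    def replenish(self, buffer_level):
        # Credit growth depends on the data storage level in the buffer;
        # while the buffer is empty, credit never rises past second_limit.
        cap = self.second_limit if buffer_level == 0 else self.first_limit
        self.credit = min(self.credit + self.refill, cap)
```

An idle (empty-buffer) stream thus cannot accumulate an unbounded burst allowance, while a backlogged stream replenishes up to the full first limit.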
20130339968 | System and Method for Improved Job Processing - A method of processing a job is presented. A packet selector determines a candidate job list including an ordered listing of candidate jobs. Each candidate job in the ordered listing belongs to a communication stream. Jobs in the candidate job list that are eligible for execution are identified by determining whether a preceding job belonging to the same communication stream as the candidate job is present in the candidate job list, and, for each candidate job in the candidate job list, determining whether a preceding job belonging to the same communication stream as the candidate job is being prepared for execution. The packet selector determines a priority for each eligible candidate job in the candidate job list by at least comparing the communication stream of each candidate job to a communication stream of a first job executing within the data processor. | 12-19-2013 |
20130339969 | Scheduling and Decision System - A task-centered scheduling system is disclosed, whereupon a manager could schedule a task with a number of attributes for resources (such as persons, types of rooms, tools, machinery, ingredients, etc.). The system then matches resources with the task so that the task could be completed. For example, if a manager schedules a task for a person with cleaning skills to clean a dirty room with a mop and a bucket with soap, the scheduler will find an available person with cleaning skills, allocate the mop and bucket to that available person, and send that available person to the dirty room to clean it. The system keeps track of the resources in order to know what resources are available for allocating to a task, and different tasks may have different priorities which could cause lower priority tasks to be rescheduled in favor of higher priority tasks. | 12-19-2013 |
20130339970 | WORK PLAN PRIORITIZATION FOR APPLICATION DEVELOPMENT AND MAINTENANCE USING POOLED RESOURCES IN A FACTORY - A computer implemented method, system and/or computer program product schedules execution of work requests through work plan prioritization. One or more work packets are mapped to and assigned to each work request from a group of work requests. A complexity level is derived for and assigned to each work packet, and priority levels of various work requests are determined for each entity from a group of entities. A global priority for the group of work requests is then determined. The global priority and the complexity levels combine to create a priority function, which is used to schedule execution of the work requests. | 12-19-2013 |
20130346993 | JOB DISTRIBUTION WITHIN A GRID ENVIRONMENT - According to one aspect of the present disclosure, a method and technique for job distribution within a grid environment is disclosed. The method includes: receiving jobs at a submission cluster for distribution of the jobs to at least one of a plurality of execution clusters, each execution cluster comprising one or more execution hosts; determining resource attributes corresponding to each execution host of the execution clusters; grouping, for each execution cluster, execution hosts based on the resource attributes of the respective execution hosts; defining, for each grouping of execution hosts, a mega-host for the respective execution cluster, the mega-host for a respective execution cluster defining resource attributes based on the resource attributes of the respective grouped execution hosts; determining resource requirements for the jobs; and identifying candidate mega-hosts for the jobs based on the resource attributes of the respective mega-hosts and the resource requirements of the jobs. | 12-26-2013 |
20140007121 | LIGHT WEIGHT WORKLOAD MANAGEMENT SERVER INTEGRATION | 01-02-2014 |
20140007122 | SYSTEM AND METHOD FOR MANAGING PERFORMANCE OF A MOBILE DEVICE | 01-02-2014 |
20140007123 | METHOD AND DEVICE OF TASK PROCESSING OF ONE SCREEN AND MULTI-FOREGROUND | 01-02-2014 |
20140013330 | MULTIPLE CORE REAL-TIME TASK EXECUTION - A real-time task may initially be performed by a first thread that is executing on a first core of a multi-core processor. A second thread may be initiated to take over the performance of the real-time task on a second core of the multi-core processor while the first thread is performing the real-time task. The performance of the real-time task is then transferred from the first thread to the second thread with the execution of the second thread on the second core to perform the real-time task. | 01-09-2014
20140013331 | TERMINAL DEVICE, PROCESS MANAGEMENT METHOD, AND RECORDING MEDIUM - A terminal device includes first storage, second storage, and a processor. The first storage is configured to store used resource information which indicates a set of one or more resources to be used by an application installed in the terminal device. The second storage is configured to store association information which associates each particular resource for which access is provided by a particular process, with the particular process. The processor is configured to recognize a process group which is related to the application, and includes a set of one or more particular processes each of which is associated by the association information with a resource included in the set of one or more resources indicated by the used resource information. | 01-09-2014 |
20140019987 | SCHEDULING MAP AND REDUCE TASKS FOR JOBS EXECUTION ACCORDING TO PERFORMANCE GOALS - Allocations of resources are determined for jobs that have map tasks and reduce tasks. The jobs are ordered according to performance goals of the jobs. The tasks of the jobs are scheduled for execution according to the ordering and the allocations of resources for the respective jobs. | 01-16-2014 |
20140040903 | Queue and operator instance threads to losslessly process online input stream events - A queue enqueues an online input stream of events arriving at the queue in real time. An operator instance has one or more threads to losslessly dequeue and process the events from the queue, and to output processing results of the events in a common output stream. The one or more threads are dynamically instantiated and de-instantiated to maintain an optimal number of threads while ensuring that none of the events of the online input stream are dropped. | 02-06-2014
20140040904 | METHOD AND APPARATUS FOR IMPROVING PROCESSING PERFORMANCE OF A MULTI-CORE PROCESSOR - A method for managing task execution in a multi-core processor includes employing a spinlock to effect a dynamically enforceable mutual exclusion constraint and employing a multi-processor priority ceiling protocol to effect the dynamically enforceable mutual exclusion constraint to synchronize a plurality of tasks executing in the first and second processing cores of the multi-core processor. | 02-06-2014 |
20140040905 | TASK EXECUTION CONTROLLER, TASK EXECUTION CONTROL SYSTEM, AND TASK EXECUTION CONTROL METHOD - A task execution controller includes a context generating unit that generates context information concerning a user and a surrounding situation of the user; a task managing unit that stores multiple tasks the user attempts to execute, selects a task according to the context information and a predetermined task selection rule, and controls execution of the task; and a service managing unit that confirms services executed by a device used for execution of the task, gives notification of a service corresponding to the execution of the task selected by the task managing unit, to the device and causes the device to perform the service. The task managing unit selects a task by using, as the task selection rule, information of priority levels of tasks and an execution-related dependency relation between tasks preset among the tasks. | 02-06-2014 |
20140040906 | OPTIMIZING PREEMPTIVE OPERATING SYSTEM WITH MOTION SENSING - A method and apparatus to provide a scheduler, comprising determining a current use characteristic for the mobile device based on motion information and active applications, and scheduling a task based on the current use characteristic. | 02-06-2014
20140047448 | SYSTEMS AND METHODS FOR LIMITING USER CUSTOMIZATION OF TASK WORKFLOW IN A CONDITION BASED HEALTH MAINTENANCE SYSTEM - Systems and methods are provided for customizing workflow in a condition based health maintenance (“CBM”) system computing node. The computerized method comprises identifying a first standardized executable application module (“SEAM”), wherein the first SEAM is configured to generate a first event associated with particular data being processed by the first SEAM, and identifying a second SEAM, wherein the second SEAM is configured to generate a subsequent event associated with the particular data processed by the first SEAM. The computerized method further comprises creating a quasi-state machine associating one or more unique responses with the first event and with the subsequent event, and installing the quasi-state machine into the SDS of the computing node, from which the workflow service state machine retrieves the one or more unique responses to the first event from the quasi-state machine for processing by the second SEAM to produce the subsequent event. | 02-13-2014
20140047449 | SYSTEM AND METHOD FOR TOPOLOGY-AWARE JOB SCHEDULING AND BACKFILLING IN AN HPC ENVIRONMENT - A method for job management in an HPC environment includes determining an unallocated subset from a plurality of HPC nodes, with each of the unallocated HPC nodes comprising an integrated fabric. An HPC job is selected from a job queue and executed using at least a portion of the unallocated subset of nodes. | 02-13-2014 |
20140053162 | THREAD PROCESSING METHOD AND THREAD PROCESSING SYSTEM - A thread processing method is executed by a specific apparatus included among a plurality of apparatuses, and includes assigning one thread among a plurality of threads to the apparatuses, respectively; acquiring first time information that indicates a time at which the specific apparatus receives an execution result of a corresponding thread from each of the apparatuses; and setting a priority level of an access right to access shared memory that is shared by the apparatuses and the specific apparatus, the setting being based on the first time information and second time information that indicates a time at which reception of execution results of the threads from the apparatuses ends. | 02-20-2014 |
20140059557 | QUEUE WITH SEGMENTS FOR TASK MANAGEMENT - A method that includes configuring a queue into a plurality of segments, wherein each segment is associated with a depth factor which defines the number of task-element entries that can be added to the segment, and wherein each segment is associated with a requirement factor; generating a plurality of task elements, each task element having an importance factor; and, if the value of the importance factor of a task element is at least equal to the value of the requirement factor of a segment with an available entry, then adding the task element to that entry of the segment. | 02-27-2014
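The segment-admission rule in the entry above can be sketched directly: a task element enters the first segment whose requirement factor its importance factor meets and which still has a free entry. The first-fit scan order is an assumption; the abstract does not say how a segment is chosen among several eligible ones:

```python
class Segment:
    def __init__(self, requirement, depth):
        self.requirement = requirement  # minimum importance to enter
        self.depth = depth              # depth factor: max number of entries
        self.entries = []

def add_task(segments, task_name, importance):
    # Place the task element in the first segment whose requirement factor
    # it satisfies and which has an available entry; return that segment,
    # or None if no segment can accept it.
    for seg in segments:
        if importance >= seg.requirement and len(seg.entries) < seg.depth:
            seg.entries.append(task_name)
            return seg
    return None
```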
20140059558 | TASK SCHEDULING IN BIG AND LITTLE CORES - One aspect provides a method including: identifying a task to be scheduled for execution on an information handling device having two or more cores of different size; determining an appropriate scheduling of the task for execution on the two or more of cores of different size, wherein the appropriate scheduling of the task is determined via a core signature for the task; directing the task to an appropriate core for execution based on the appropriate scheduling determined; and executing the task on the appropriate core. Other aspects are described and claimed. | 02-27-2014 |
20140068623 | INFORMATION PROCESSING APPARATUS, COMPUTER PROGRAM, AND METHOD FOR CONTROLLING EXECUTION OF JOBS - An information processing apparatus submits jobs for execution on a server. Jobs are classified into a plurality of groups, and these groups are ranked in ascending order of workload that the groups of jobs impose on the server. A processor in the information processing apparatus counts ongoing jobs that are currently executed on the server and belong to a specified number of top-ranked groups. The processor designates pending jobs that belong to other groups than the specified number of top-ranked groups and suspends submission of processing requests of the designated pending jobs to the server, when the number of ongoing jobs is greater than or equal to a threshold and when there are one or more pending jobs that belong to the specified number of top-ranked groups. | 03-06-2014 |
20140089931 | THREAD LIVELOCK UNIT - Method, apparatus, and system embodiments to assign priority to a thread when the thread is otherwise unable to proceed with instruction retirement. For at least one embodiment, the thread is one of a plurality of active threads in a multiprocessor system that includes memory livelock breaker logic and/or starvation avoidance logic. Other embodiments are also described and claimed. | 03-27-2014 |
20140096139 | WORKLOAD MANAGEMENT CONSIDERING HARDWARE RELIABILITY - A method identifies uptime for each of a plurality of components within a cluster of nodes, and determines a reliability level for each of the plurality of components, where the reliability level of each component is determined by comparing the identified uptime for the component with mean-time-between-failure data for components of the same component type. The method also determines a priority level and a job type for a job to be scheduled. Then, at least one target component type is selected in consideration of the job type, and a target reliability level for the at least one target component type is selected in consideration of the priority level. The job is then scheduled on one of the nodes that includes a component of the at least one target component type having the target reliability level. | 04-03-2014 |
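The uptime-versus-MTBF comparison in the entry above can be illustrated as below. The band edges and the "fraction of MTBF consumed" interpretation are assumptions; the patent abstract does not specify how the comparison yields a discrete reliability level:

```python
def reliability_level(uptime_hours, mtbf_hours):
    # Compare a component's identified uptime with the mean-time-between-
    # failure figure for its type; less life consumed -> more reliable.
    consumed = uptime_hours / mtbf_hours
    if consumed < 0.25:
        return "high"
    if consumed < 0.75:
        return "medium"
    return "low"

def pick_node(nodes, target_type, target_level):
    # Schedule the job on the first node that includes a component of the
    # target component type at the target reliability level.
    for node in nodes:
        for ctype, uptime, mtbf in node["components"]:
            if ctype == target_type and \
                    reliability_level(uptime, mtbf) == target_level:
                return node["name"]
    return None
```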
20140096140 | MANAGING A SERVICE PROVIDER'S CUSTOMER QUEUE - A method for scheduling a service. The method includes receiving a request for service at a service provider location from a requester; analyzing the request for service and generating a passcode; providing the passcode to the requester, the passcode including an estimated time when attendance is requested at the service provider; prioritizing the passcode according to one or more business rules; periodically updating the estimated time corresponding to the passcode when attendance is requested at the service provider location according to the one or more business rules; and notifying the requester of the most recent estimated time when attendance is requested at the service provider location. The method may be performed on one or more computing devices. Also included is a system for scheduling a service and a computer program product. | 04-03-2014 |
20140096141 | EFFICIENT ROLLBACK AND RETRY OF CONFLICTED SPECULATIVE THREADS USING DISTRIBUTED TOKENS - A method for rolling back speculative threads in symmetric-multiprocessing (SMP) environments is disclosed. In one embodiment, such a method includes detecting an aborted thread at runtime and determining whether the aborted thread is an oldest aborted thread. In the event the aborted thread is the oldest aborted thread, the method sets a high-priority request for allocation to an absolute thread number associated with the oldest aborted thread. The method further detects that the high-priority request is set and, in response, modifies a local allocation token of the oldest aborted thread. The modification prompts the oldest aborted thread to retry a work unit associated with its absolute thread number. The oldest aborted thread subsequently initiates the retry of a successor thread by updating the successor thread's local allocation token. A corresponding apparatus and computer program product are also disclosed. | 04-03-2014 |
20140101663 | METHOD AND APPARATUS IMPLEMENTED IN PROCESSORS FOR REAL-TIME SCHEDULING AND TASK ORGANIZATION BASED ON RESPONSE TIME ORDER OF MAGNITUDE - A task scheduling method is disclosed, where each processor core is programmed with a short list of priorities, each associated with a minimum response time. The minimum response times for adjacent priorities differ by at least one order of magnitude. Each process is assigned a priority based on how its expected response time compares with the minimum response times of the priorities. Lower priorities may be assigned a timeslice period that is a fraction of the minimum response time. Also disclosed is a task division method of dividing a complex task into multiple tasks; one of the tasks is an input-gathering authority task having a higher priority, and it provides inputs to the other tasks, which have a lower priority. A method that permits orderly shutdown or scaling back of task activities in case of resource emergencies is also described. | 04-10-2014
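The order-of-magnitude priority ladder in the entry above can be sketched as follows. The concrete times and the timeslice fraction are assumptions; only the structure (adjacent levels a decade apart, timeslice as a fraction of the level's minimum response time) comes from the abstract:

```python
# Priority 0 is the most urgent; adjacent minimum response times differ
# by one order of magnitude (values in seconds, chosen for illustration).
MIN_RESPONSE = [0.001, 0.01, 0.1, 1.0]

def assign_priority(expected_response):
    # A process gets the most urgent priority whose minimum response time
    # still covers its expected response time.
    for level, floor in enumerate(MIN_RESPONSE):
        if expected_response <= floor:
            return level
    # Slower than every level: fall to the lowest priority.
    return len(MIN_RESPONSE) - 1

def timeslice(level, fraction=0.25):
    # Lower priorities run with a timeslice that is a fraction of that
    # level's minimum response time.
    return MIN_RESPONSE[level] * fraction
```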
20140109102 | TECHNIQUE FOR IMPROVING PERFORMANCE IN MULTI-THREADED PROCESSING UNITS - A multi-threaded processing unit includes a hardware pre-processor coupled to one or more processing engines (e.g., copy engines, GPCs, etc.) that implement pre-emption techniques by dividing tasks into smaller subtasks and scheduling subtasks on the processing engines based on the priority of the tasks. By limiting the size of the subtasks, higher priority tasks may be executed quickly without switching the context state of the processing engine. Tasks may be subdivided based on a threshold size or by taking into account other considerations such as physical boundaries of the memory system. | 04-17-2014
20140115595 | SYSTEM AND METHOD FOR CONTROLLED SHARING OF CONSUMABLE RESOURCES IN A COMPUTER CLUSTER - In one embodiment, a method includes empirically analyzing a set of active reservations and a current set of consumable resources belonging to a class of consumable resources. Each active reservation is of a managed task type and includes a group of one or more tasks requiring access to a consumable resource of the class. The method further includes, based on the empirically analyzing, clocking the set of active reservations each clocking cycle. In addition, the method includes, responsive to the clocking, sorting a priority queue of the set of active reservations. | 04-24-2014 |
20140123150 | HARDWARE SCHEDULING OF ORDERED CRITICAL CODE SECTIONS - One embodiment sets forth a technique for scheduling the execution of ordered critical code sections by multiple threads. A multithreaded processor includes an instruction scheduling unit that is configured to schedule threads to process ordered critical code sections. An ordered critical code section is preceded by a barrier instruction, and when all of the threads have reached the barrier instruction, the instruction scheduling unit controls the thread execution order by selecting each thread for execution based on logical identifiers associated with the threads. The logical identifiers are mapped to physical identifiers that are referenced by the multithreaded processor during execution of the threads. The logical identifiers are used by the instruction scheduling unit to control the order in which the threads execute the ordered critical code section. | 05-01-2014
20140123151 | APPLICATION PRIORITIZATION - Among other things, one or more techniques and/or systems are provided for application prioritization. For example, an operating system of a computing device may contemporaneously host one or more applications, which may compete for computing resources, such as CPU cycles, I/O operations, memory access, and/or network bandwidth. Accordingly, an application (e.g., a background task or service) may be placed within a de-prioritized operating mode during launch and/or during execution, which may result in the application receiving a relatively lower priority when competing with applications placed within a standard operating mode for access to computing resources. In this way, an application placed within a standard operating mode (e.g., a foreground application currently interacted with by a user) may have priority to computing resources over the de-prioritized application, such that the application within the standard operating mode may provide enhanced performance based upon having priority to computing resources. | 05-01-2014 |
20140123152 | EFFICIENT ROLLBACK AND RETRY OF CONFLICTED SPECULATIVE THREADS WITH HARDWARE SUPPORT - A method for rolling back speculative threads in symmetric-multiprocessing (SMP) environments is disclosed. In one embodiment, such a method includes detecting an aborted thread at runtime and determining whether the aborted thread is an oldest aborted thread. In the event the aborted thread is the oldest aborted thread, the method sets a high-priority request for allocation to an absolute thread number associated with the oldest aborted thread. The method further detects that the high-priority request is set and, in response, clears the high-priority request and sets an allocation token to the absolute thread number associated with the oldest aborted thread, thereby allowing the oldest aborted thread to retry a work unit associated with the absolute thread number. A corresponding apparatus and computer program product are also disclosed. | 05-01-2014 |
20140123153 | EFFICIENT ROLLBACK AND RETRY OF CONFLICTED SPECULATIVE THREADS USING DISTRIBUTED TOKENS - A method for rolling back speculative threads in symmetric-multiprocessing (SMP) environments is disclosed. In one embodiment, such a method includes detecting an aborted thread at runtime and determining whether the aborted thread is an oldest aborted thread. In the event the aborted thread is the oldest aborted thread, the method sets a high-priority request for allocation to an absolute thread number associated with the oldest aborted thread. The method further detects that the high-priority request is set and, in response, modifies a local allocation token of the oldest aborted thread. The modification prompts the oldest aborted thread to retry a work unit associated with its absolute thread number. The oldest aborted thread subsequently initiates the retry of a successor thread by updating the successor thread's local allocation token. A corresponding apparatus and computer program product are also disclosed. | 05-01-2014 |
20140137128 | Method of Scheduling Tasks for Memories and Memory System Thereof - A method of scheduling a plurality of tasks for a plurality of memories in a memory system is disclosed. The method includes classifying each task among the plurality of tasks to a task type among a plurality of task types, disposing a plurality of task queues according to the plurality of task types wherein each task queue stores tasks to be executed within the plurality of tasks, assigning a priority for each task type among the plurality of task types, disposing at least one execution queue; and converting a first task stored in a first task queue among the plurality of task queues into at least one command to be stored in a first execution queue among the at least one execution queue, wherein the at least one command is executed according to the priority of a first task type corresponding to the first task queue. | 05-15-2014 |
20140137129 | METHOD AND APPARATUS FOR EFFICIENT EXECUTION OF CONCURRENT PROCESSES ON A MULTITHREADED MESSAGE PASSING SYSTEM - A graph analytics appliance can be employed to extract data from a graph database in an efficient manner. The graph analytics appliance includes a router, a worklist scheduler, a processing unit, and an input/output unit. The router receives an abstraction program including a plurality of parallel algorithms for a query request from an abstraction program compiler residing on a computational node or the graph analytics appliance. The worklist scheduler generates a prioritized plurality of parallel threads for executing the query request from the plurality of parallel algorithms. The processing unit executes multiple threads selected from the prioritized plurality of parallel threads. The input/output unit communicates with a graph database. | 05-15-2014 |
20140137130 | METHOD AND APPARATUS FOR EFFICIENT EXECUTION OF CONCURRENT PROCESSES ON A MULTITHREADED MESSAGE PASSING SYSTEM - A graph analytics appliance can be employed to extract data from a graph database in an efficient manner. The graph analytics appliance includes a router, a worklist scheduler, a processing unit, and an input/output unit. The router receives an abstraction program including a plurality of parallel algorithms for a query request from an abstraction program compiler residing on a computational node or the graph analytics appliance. The worklist scheduler generates a prioritized plurality of parallel threads for executing the query request from the plurality of parallel algorithms. The processing unit executes multiple threads selected from the prioritized plurality of parallel threads. The input/output unit communicates with a graph database. | 05-15-2014 |
20140149990 | SCHEDULING THREADS - Scheduling threads in a multi-threaded/multi-core processor having a given instruction window, and scheduling a predefined number N of threads among a set of M active threads in each context switch interval are provided. The actual power consumption of each running thread during a given context switch interval is determined, and a predefined priority level is associated with each thread among the active threads based on the actual power consumption determined for the threads. The power consumption expected for each active thread during the next context switch interval in the current instruction window (CIW_Power_Th) is predicted, and a set of threads to be scheduled among the active threads are selected from the priority level associated with each active thread and the power consumption predicted for each active thread in the current instruction window. | 05-29-2014 |
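The power-aware selection in 20140149990 — picking a set of threads to run based on a priority derived from measured power draw plus a prediction for the next interval — can be illustrated with a minimal sketch. All names and the budget-capped greedy rule are illustrative assumptions, not taken from the filing:

```python
def select_threads(active, n, power_budget):
    """Pick up to n threads whose combined predicted power fits the budget.

    `active` is a list of (thread_id, priority, predicted_power) tuples.
    Higher priority wins; among equal priorities, lower predicted power
    is preferred, so the selection favors cheap, important threads.
    """
    # Rank by priority (descending), then predicted power (ascending).
    ranked = sorted(active, key=lambda t: (-t[1], t[2]))
    chosen, used = [], 0.0
    for tid, prio, power in ranked:
        if len(chosen) < n and used + power <= power_budget:
            chosen.append(tid)
            used += power
    return chosen

# Example: two slots, a 4.0 W budget over four active threads.
threads = [("a", 3, 2.0), ("b", 1, 0.5), ("c", 3, 5.0), ("d", 2, 1.0)]
picked = select_threads(threads, 2, 4.0)
```

The real mechanism would feed the priority from per-interval power measurements and the prediction from the current instruction window; this sketch only shows the selection step.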
20140157280 | SCHEDULING METHOD AND SCHEDULING SYSTEM - A scheduling method includes determining whether priority of an application to be activated is of a given priority, the determining being performed by a first data processing apparatus that is included in a first group having at least one data processing apparatus; transferring to a second data processing apparatus that is included in any one among a second group and the first group, a predetermined function of the first data processing apparatus so as to execute the application by the first data processing apparatus, the transferring being performed when the priority of the application is of the given priority, and the first and the second groups being among a plurality of groups that each includes at least one data processing apparatus; and placing the application in an execution queue of the first data processing apparatus, when the priority of the application is not the given priority. | 06-05-2014 |
20140165070 | RANKING AND SCHEDULING OF MONITORING TASKS - Systems, methods, and machine-readable and executable instructions are provided for dynamically ranking and scheduling monitoring tasks. Dynamically ranking and scheduling monitoring tasks can include determining an updated ranking for each of a number of monitoring tasks, where determining the updated ranking can include analyzing historical measurements of each of the number of monitoring tasks. An order of execution can be scheduled for each of the number of monitoring tasks based on the updated ranking for each of the number of monitoring tasks. | 06-12-2014 |
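The abstract of 20140165070 leaves the historical analysis open. One plausible ranking signal — purely an assumption for illustration — is the spread of each task's past measurements, so that the most volatile monitors are scheduled first:

```python
def rank_tasks(history):
    """history: dict mapping task name -> list of past measurements.

    Returns task names ordered so the tasks whose readings vary most
    come first (one possible 'analysis of historical measurements';
    the filing does not specify the metric).
    """
    def spread(values):
        # Range of observed values; 0 for an empty history.
        return max(values) - min(values) if values else 0.0

    return sorted(history, key=lambda task: spread(history[task]), reverse=True)

# Example: a CPU probe with volatile readings outranks a steady disk probe.
order = rank_tasks({"cpu": [12, 90, 30], "disk": [55, 56, 54]})
```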
20140173610 | METHODS AND APPARATUS FOR MANAGING AND CONTROLLING POWER CONSUMPTION AND HEAT GENERATION IN COMPUTER SYSTEMS - Methods and systems of operating a computer system including a processor are disclosed. In one aspect, a method includes providing a discretized operating system for controlling applications executed by the computer system, and replacing an idle task of the discretized operating system with a substitute idle task that causes the processor to enter a dormant mode, a priority level of the substitute idle task being the same as a priority level of the idle task. | 06-19-2014 |
20140181827 | System and Method for Implementing Scalable Contention-Adaptive Statistics Counters - The systems and methods described herein may implement scalable statistics counters that are adaptive to the amount of contention for the counters. The counters may be accessible within transactions. Methods for determining whether or when to increment the counters in response to initiation of an increment operation and/or methods for updating the counters may be selected dependent on current, recent, or historical amounts of contention. Various contention management policies or retry conditions may be applied to select between multiple methods. One counter may include a precise counter portion that is incremented under low contention and a probabilistic counter portion that is updated under high contention. Amounts by which probabilistic counters are incremented may be contention-dependent. Another counter may include a node identifier portion that encourages consecutive increments by threads on a single node only when under contention. Another counter may be inflated in response to contention for the counter. | 06-26-2014 |
20140181828 | PROCESSOR PROVISIONING BY A MIDDLEWARE PROCESSING SYSTEM - A middleware processor provisioning process provisions a plurality of processors in a multi-processor environment. The processors themselves may be subdivided into one or more partitions or processing instances for which a single processing queue is created and a single kernel thread is started. User processing requests are portioned and dispatched across the plurality of processing queues and are serviced by the corresponding kernel process, thereby efficiently using available processing resources while servicing the user processing requests in a desired manner. | 06-26-2014 |
20140189698 | APPROACH FOR A CONFIGURABLE PHASE-BASED PRIORITY SCHEDULER - A streaming multiprocessor (SM) in a parallel processing subsystem schedules priority among a plurality of threads. The SM retrieves a priority descriptor associated with a thread group, and determines whether the thread group and a second thread group are both operating in the same phase. If so, then the method determines whether the priority descriptor of the thread group indicates a higher priority than the priority descriptor of the second thread group. If so, the SM skews the thread group relative to the second thread group such that the thread groups operate in different phases, otherwise the SM increases the priority of the thread group. If the thread groups are not operating in the same phase, then the SM increases the priority of the thread group. One advantage of the disclosed techniques is that thread groups execute with increased efficiency, resulting in improved processor performance. | 07-03-2014 |
20140189699 | SCALABLE THREAD LOCKING WITH CUSTOMIZABLE SPINNING - Embodiments described herein are directed to dynamically controlling the number of spins for a selected processing thread among a plurality of processing threads. A computer system tracks both the number of waiting processing threads and each thread's turn, wherein a selected thread's turn comprises the total number of waiting processing threads after the selected thread's arrival at the processor. Next, the computer system determines, based on the selected thread's turn, the number of spins that are to occur before the selected thread checks for an available thread lock. The computer system also, based on the selected thread's turn, changes the number of spins, such that the number of spins for the selected thread is a function of the number of waiting processing threads and processors in the computer system. | 07-03-2014 |
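The idea in 20140189699 — spin count as a function of a thread's turn and the processor count — reduces to a small formula. The proportional rule and the `base_spins` constant below are assumptions for illustration; the filing only requires that the count be such a function:

```python
def spins_for_turn(turn, num_processors, base_spins=100):
    """Busy-wait iterations before a waiting thread re-checks the lock.

    `turn` is the number of waiters that arrived after this thread.
    A thread far back in line spins longer between checks, so fewer
    threads hammer the lock word at once; more processors shorten the
    wait since lock hand-off tends to be faster.
    """
    return base_spins * max(turn, 1) // max(num_processors, 1)

# A thread with 4 later arrivals on a 2-CPU box spins longer than a
# freshly arrived thread on a 4-CPU box.
long_wait = spins_for_turn(4, 2)
short_wait = spins_for_turn(0, 4)
```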
20140196046 | SCHEDULING AND/OR ORGANIZING TASK EXECUTION FOR A TARGET COMPUTING PLATFORM - Techniques are generally described relating to methods, apparatuses and articles of manufactures for scheduling and/or organizing execution of tasks on a computing platform. In various embodiments, the method may include identifying successively one or more critical time intervals, and scheduling and/or organizing task execution for each of the one or more identified critical time intervals. In various embodiments, one or more tasks to be executed may be scheduled to execute based in part on their execution completion deadlines. In various embodiments, organizing one or more tasks to execute may include selecting a virtual operating mode of the platform using multiple operating speeds lying on a convexity energy-speed envelope of the platform. Intra-task delay caused by switching operating mode may be considered. Other embodiments may also be described and/or claimed. | 07-10-2014 |
20140196047 | COMPUTING JOB MANAGEMENT BASED ON PRIORITY AND QUOTA - In one embodiment, the invention provides a method of managing a computing job based on a job priority and a submitter quota. | 07-10-2014 |
20140201750 | SERVICE PROVIDER CLASS APPLICATION SCALABILITY AND HIGH AVAILABILITY AND PROCESSING PRIORITIZATION USING A WEIGHTED LOAD DISTRIBUTOR AND THROTTLE MIDDLEWARE - Processing of tickets received by a ticket processing system is performed by allowing processes running on one or more hosts to access a ticket processing table to retrieve and process the tickets. A weighted load distributor (WLD) grants weighted round robin turn access to the processes running on the hosts. The WLDs running on different hosts coordinate so that a primary WLD is selected that is responsible for distributing turn access to the ticket processing table to various requesting processes. The hosts use a throttle to determine the real-time availability of resources for the hosts. The throttle determines whether a process should be allowed to proceed with processing tasks associated with a particular ticket based on resource costs associated with the required processing, as well as resources available to the respective host and ticket priority. | 07-17-2014 |
20140201751 | PROCESSOR PROVISIONING BY A MIDDLEWARE PROCESSING SYSTEM - A middleware processor provisioning process provisions a plurality of processors in a multi-processor environment. The processors themselves may be subdivided into one or more partitions or processing instances for which a single processing queue is created and a single kernel thread is started. User processing requests are portioned and dispatched across the plurality of processing queues and are serviced by the corresponding kernel process, thereby efficiently using available processing resources while servicing the user processing requests in a desired manner. | 07-17-2014 |
20140208327 | METHOD FOR SIMULTANEOUS SCHEDULING OF PROCESSES AND OFFLOADING COMPUTATION ON MANY-CORE COPROCESSORS - A method is disclosed to manage a multi-processor system with one or more manycore devices, by managing real-time bag-of-tasks applications for a cluster, wherein each task runs on a single server node, and uses the offload programming model, and wherein each task has a deadline and three specific resource requirements: total processing time, a certain number of manycore devices and peak memory on each device; when a new task arrives, querying each node scheduler to determine which node can best accept the task and each node scheduler responds with an estimated completion time and a confidence level, wherein the node schedulers use an urgency-based heuristic to schedule each task and its offloads; responding to an accept/reject query phase, wherein the cluster scheduler sends the task requirements to each node and queries if the node can accept the task with an estimated completion time and confidence level; and scheduling tasks and offloads using an aging and urgency-based heuristic, wherein the aging guarantees fairness, and the urgency prioritizes tasks and offloads so that maximal deadlines are met. | 07-24-2014 |
20140208328 | METHOD FOR TERMINAL ACCELERATION, TERMINAL AND STORAGE MEDIUM - A method for terminal acceleration, a terminal and a storage medium are provided. The method includes steps of: detecting a memory resource occupied by all running application processes; determining whether the memory resource occupied by all running application processes reaches or is greater than a preset memory threshold; and terminating the running of at least one of all the running application processes according to preset terminating conditions, when the memory resource occupied by all the running application processes reaches or is greater than the preset memory threshold, so that the terminal can be automatically accelerated according to the current utilization condition of its memory and the running application processes, the operating speed of the terminal may be improved, and the functions of the terminal may be further diversified. | 07-24-2014 |
20140215479 | SYSTEMS, METHODS, AND COMPUTER PROGRAM PRODUCTS FOR SCHEDULING PROCESSING JOBS TO RUN IN A COMPUTER SYSTEM - A method includes, in a program that includes a defined number of job slots for data updating processing jobs, scheduling a first job in one of the slots, and executing the first job, wherein the first job includes scanning a list of additional jobs and scheduling those additional jobs for execution, further wherein a total number of the additional jobs in the program exceeds the defined number of job slots. | 07-31-2014 |
20140215480 | TASK SCHEDULING BASED ON THERMAL CONDITIONS OF LOCATIONS OF PROCESSORS - Provided is a computer system including a first processor disposed in a first zone, a second processor disposed in a second zone, a prioritizing unit, and a scheduling unit. The prioritizing unit prioritizes the first processor and the second processor based on the thermal conditions of the first zone and the second zone, respectively. The scheduling unit schedules a task to one of the first processor and the second processor according to the priority provided by the prioritizing unit. | 07-31-2014 |
20140223443 | DETERMINING A RELATIVE PRIORITY FOR A JOB USING CONTEXT AND ENVIRONMENTAL CONSIDERATIONS - A method, system and computer program product for determining a relative priority for a job. A “policy” is selected based on the job itself and the reason that the job is being executed, where the policy includes a priority range for the job and for an application. A priority for the job that is within the priority range of the job as established by the selected policy is determined based on environmental and context considerations. This job priority is then adjusted based on the priority of the application (within the priority range as established by the policy) becoming the job's final priority. By formulating a priority that more accurately reflects the true priority or importance of the job by taking into consideration the environmental and context considerations, job managers will now be able to process these jobs in a more efficient manner. | 08-07-2014 |
20140237476 | CENTRALIZED TASK SCHEDULING - A method and apparatus that schedules and manages a background task for a device is described. In an exemplary embodiment, the device registers the background task, where the registering includes storing execution criteria for the background task. The execution criteria indicates a criterion for launching the background task, and the execution criteria is based on a component status of the device. The device further monitors the running state of the device for an occurrence of the execution criteria. If the execution criteria occurs, the device determines an available headroom with the device in order to perform the background task and launches the background task if the background task importance is greater than the available device headroom, where the background task importance is a measure of how important it is for the device to run the background task. | 08-21-2014 |
20140237477 | SIMULTANEOUS SCHEDULING OF PROCESSES AND OFFLOADING COMPUTATION ON MANY-CORE COPROCESSORS - Methods and systems for scheduling jobs to manycore nodes in a cluster include selecting a job to run according to the job's wait time and the job's expected execution time; sending job requirements to all nodes in a cluster, where each node includes a manycore processor; determining at each node whether said node has sufficient resources to ever satisfy the job requirements and, if no node has sufficient resources, deleting the job; creating a list of nodes that have sufficient free resources at a present time to satisfy the job requirements; and assigning the job to a node, based on a difference between an expected execution time and associated confidence value for each node and a hypothetical fastest execution time and associated hypothetical maximum confidence value. | 08-21-2014 |
20140237478 | System and Method for Input Data Load Adaptive Parallel Processing - Systems and methods provide an extensible, multi-stage, realtime application program processing load adaptive, manycore data processing architecture shared dynamically among instances of parallelized and pipelined application software programs, according to processing load variations of said programs and their tasks and instances, as well as contractual policies. The invented techniques provide, at the same time, both application software development productivity, through presenting for software a simple, virtual static view of the actually dynamically allocated and assigned processing hardware resources, together with high program runtime performance, through scalable pipelined and parallelized program execution with minimized overhead, as well as high resource efficiency, through adaptively optimized processing resource allocation. | 08-21-2014 |
20140245311 | ADAPTIVE PARTITIONING FOR OPERATING SYSTEM - An adaptive partition scheduler is a priority-based scheduler that also provides execution time guarantees (fair-share). Execution time guarantees apply to threads or groups of threads when the system is overloaded. When the system is not overloaded, threads are scheduled based strictly on priority, maintaining strict real-time behavior. When the system is overloaded, threads are scheduled based on the priority of threads that are in a ready state and based on the available guaranteed processor time budget of the adaptive partition associated with each thread. | 08-28-2014 |
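The two-mode rule in 20140245311 — strict priority under normal load, priority gated by partition budget under overload — can be sketched directly. The tuple layout and the fallback when every budget is exhausted are illustrative assumptions:

```python
def pick_thread(ready, budgets, overloaded):
    """Select the next thread to run.

    `ready`   : list of (thread_id, priority, partition) tuples.
    `budgets` : dict mapping partition -> remaining guaranteed CPU budget.

    Not overloaded: pure priority, preserving real-time behavior.
    Overloaded: only threads whose partition still has budget compete,
    which is what delivers the fair-share guarantee.
    """
    pool = ready
    if overloaded:
        pool = [t for t in ready if budgets[t[2]] > 0]
        # Assumed fallback: if every partition is out of budget,
        # revert to plain priority rather than idle.
        pool = pool or ready
    return max(pool, key=lambda t: t[1])[0]

ready = [("t1", 10, "A"), ("t2", 3, "B")]
budgets = {"A": 0, "B": 5}
winner_normal = pick_thread(ready, budgets, overloaded=False)    # highest priority
winner_overload = pick_thread(ready, budgets, overloaded=True)   # budget-backed thread
```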
20140245312 | SYSTEM AND METHOD FOR SUPPORTING COOPERATIVE CONCURRENCY IN A MIDDLEWARE MACHINE ENVIRONMENT - A system and method can support cooperative concurrency in a priority queue. The priority queue, which includes a calendar ring and a fast lane, can detect one or more threads that contend to claim one or more requests in the priority queue. Then, a victim thread can place a request in the fast lane in the priority queue, and release a contending thread, which proceeds to consume the request in the fast lane. | 08-28-2014 |
20140245313 | SYSTEM AND METHOD FOR USING A SEQUENCER IN A CONCURRENT PRIORITY QUEUE - A system and method can support a concurrent priority queue. The concurrent priority queue allows a plurality of threads to interact with the priority queue. The priority queue can use a sequencer to detect and order a plurality of threads that contend for one or more requests in the priority queue. Furthermore, the priority queue operates to reduce the contention among the plurality of threads. | 08-28-2014 |
20140245314 | METHODS AND APPARATUS FOR ACHIEVING THERMAL MANAGEMENT USING PROCESSING TASK SCHEDULING - The present invention provides apparatus and methods to perform thermal management in a computing environment. In one embodiment, thermal attributes are associated with operations and/or processing components, and the operations are scheduled for processing by the components so that a thermal threshold is not exceeded. In another embodiment, hot and cool queues are provided for selected operations, and the processing components can select operations from the appropriate queue so that the thermal threshold is not exceeded. | 08-28-2014 |
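The hot/cool queue scheme of 20140245314 is simple enough to sketch: operations are binned by a thermal attribute, and a processing component near its thermal limit pulls only from the cool queue. The dict layout and FIFO discipline are assumptions for illustration:

```python
def enqueue(op, hot_q, cool_q, heat_threshold):
    """Place an operation on the hot or cool queue by its thermal attribute."""
    (hot_q if op["heat"] > heat_threshold else cool_q).append(op)

def next_op(core_temp, thermal_limit, hot_q, cool_q):
    """A core at or above its thermal limit may only take cool work;
    otherwise hot work is drained first so it does not pile up."""
    if core_temp >= thermal_limit:
        return cool_q.pop(0) if cool_q else None
    if hot_q:
        return hot_q.pop(0)
    return cool_q.pop(0) if cool_q else None

# Example: one hot and one cool operation, heat threshold 5.
hot_q, cool_q = [], []
enqueue({"id": 1, "heat": 9}, hot_q, cool_q, 5)
enqueue({"id": 2, "heat": 2}, hot_q, cool_q, 5)
cool_pick = next_op(core_temp=92, thermal_limit=85, hot_q=hot_q, cool_q=cool_q)
hot_pick = next_op(core_temp=60, thermal_limit=85, hot_q=hot_q, cool_q=cool_q)
```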
20140250438 | SCHEDULING METHOD IN MULTIPROCESSOR APPARATUS AND METHOD OF ASSIGNING PRIORITIES TO TASKS USING PSEUDO-DEADLINES IN MULTIPROCESSOR APPARATUS - Provided are a scheduling method in a multiprocessor apparatus and a method of assigning priorities to tasks using pseudo-deadlines in a multiprocessor apparatus. The scheduling method includes releasing tasks ( | 09-04-2014 |
20140282572 | TASK SCHEDULING WITH PRECEDENCE RELATIONSHIPS IN MULTICORE SYSTEMS - A method for assigning tasks comprises receiving a set of tasks, modifying a deadline for each task based on execution ordering relationship of the tasks, ordering the tasks in increasing order based on the modified deadlines for the tasks, partitioning the ordered tasks using one of non-preemptive scheduling and preemptive scheduling based on a type of multicore processing environment, and assigning the partitioned tasks to one or more cores of a multicore electronic device based on results of the partitioning. | 09-18-2014 |
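20140282572 modifies each task's deadline from the execution-ordering relationships before sorting. A common precedence-aware rule — assumed here, since the abstract does not give the exact formula — tightens a task's deadline so it never exceeds any successor's (modified) deadline minus that successor's execution time:

```python
def modify_deadlines(exec_time, deadline, succs):
    """Tighten deadlines along precedence edges.

    `exec_time`, `deadline`: dicts keyed by task id.
    `succs`: task id -> list of successor ids (graph assumed acyclic).
    Returns a new dict of modified deadlines; sorting tasks by these
    values then yields the required execution order.
    """
    order, seen = [], set()

    def visit(task):
        if task in seen:
            return
        seen.add(task)
        for s in succs.get(task, []):
            visit(s)
        order.append(task)  # post-order: successors precede their predecessors

    for task in exec_time:
        visit(task)

    modified = dict(deadline)
    for task in order:  # successors are already final when task is processed
        for s in succs.get(task, []):
            modified[task] = min(modified[task], modified[s] - exec_time[s])
    return modified

# Example: a must precede b; a's deadline tightens from 10 to 8 - 3 = 5.
d = modify_deadlines({"a": 2, "b": 3}, {"a": 10, "b": 8}, {"a": ["b"]})
```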
20140282573 | RESOLVING DEPLOYMENT CONFLICTS IN HETEROGENEOUS ENVIRONMENTS - Techniques are disclosed for managing deployment conflicts between applications executing in one or more processing environments. A first application is executed in a first processing environment and responsive to a request to execute the first application. During execution of the first application, a determination is made to redeploy the first application for execution partially in time on a second processing environment providing a higher capability than the first processing environment in terms of at least a first resource type. A deployment conflict is resolved between the first application and at least a second application. | 09-18-2014 |
20140282574 | System and Method for Implementing Constrained Data-Driven Parallelism - Systems and methods for implementing constrained data-driven parallelism may provide programmers with mechanisms for controlling the execution order and/or interleaving of tasks spawned during execution. For example, a programmer may define a task group that includes a single task, and the single task may define a direct or indirect trigger that causes another task to be spawned (e.g., in response to a modification of data specified in the trigger). Tasks spawned by a given task may be added to the same task group as the given task. A deferred keyword may control whether a spawned task is to be executed in the current execution phase or its execution is to be deferred to a subsequent execution phase for the task group. Execution of all tasks executing in the current execution phase may need to be complete before the execution of tasks in the next phase can begin. | 09-18-2014 |
20140282575 | METHOD AND APPARATUS TO AVOID DEADLOCK DURING INSTRUCTION SCHEDULING USING DYNAMIC PORT REMAPPING - A method for performing dynamic port remapping during instruction scheduling in an out of order microprocessor is disclosed. The method comprises selecting and dispatching a plurality of instructions from a plurality of select ports in a scheduler module in first clock cycle. Next, it comprises determining if a first physical register file unit has capacity to support instructions dispatched in the first clock cycle. Further, it comprises supplying a response back to logic circuitry between the plurality of select ports and a plurality of execution ports, wherein the logic circuitry is operable to re-map select ports in the scheduler module to execution ports based on the response. Finally, responsive to a determination that the first physical register file unit is full, the method comprises re-mapping at least one select port connecting with an execution unit in the first physical register file unit to a second physical register file unit. | 09-18-2014 |
20140282576 | EVENT-DRIVEN COMPUTATION - An apparatus for high-performance parallel computation, includes plural computation nodes, each having dispatch units, memories in communication with the dispatch units, and processors, each of which is in communication with the memories and the dispatch units. Each dispatch unit is configured to recognize, as ready for execution, one or more computational tasks that have become ready for execution as a result of counted remote writes into the memories. Each of the dispatch units is configured to receive a dispatch request from a processor and to determine whether there exist one or more computational tasks that are both ready and available for execution by the processor. | 09-18-2014 |
20140289732 | WORKLOAD ROUTING FOR MANAGING ENERGY IN A DATA CENTER - Approaches that manage energy in a data center are provided. In one embodiment, there is an energy management tool, including an analysis component configured to determine a current energy profile of each of a plurality of systems within the data center, the current energy profile comprising an overall rating expressed as an integer value, the overall rating calculated based on a current workload usage and environmental conditions surrounding each of the plurality of systems; and a priority component configured to prioritize a routing of a workload to a set of systems from the plurality of systems within the data center having the least amount of energy present based on a comparison of the overall ratings for each of the plurality of systems within the data center. | 09-25-2014 |
20140298344 | TASK SCHEDULING BASED ON THERMAL CONDITIONS OF LOCATIONS OF PROCESSORS - A method of prioritizing processing units in a system for task scheduling includes, for each processing unit of a plurality of processing units in the system, determining a value that represents a thermal condition of a location of the processing unit. It is determined which of the plurality of processing units is not fully loaded and is in a location with a most favorable thermal condition based on the value of the processing unit that represents thermal conditions of the location of the processing unit. A task is scheduled to the processing unit determined to be not fully loaded and in a location with a most favorable thermal condition based on the value of the processing unit that represents thermal conditions of the location of the processing unit. | 10-02-2014 |
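The selection rule of 20140298344 (and its sibling 20140215480) reduces to: among processing units that are not fully loaded, pick the one whose location has the most favorable thermal value. A minimal sketch, with the tuple layout assumed and "favorable" taken as lowest temperature:

```python
def pick_processor(units):
    """units: list of (unit_id, load_fraction, temperature_c) tuples.

    Return the id of the coolest unit that is not fully loaded,
    or None if every unit is saturated.
    """
    candidates = [(temp, uid) for uid, load, temp in units if load < 1.0]
    return min(candidates)[1] if candidates else None

# p0 is saturated, so the task goes to the cooler of p1 and p2.
target = pick_processor([("p0", 1.0, 40), ("p1", 0.5, 55), ("p2", 0.2, 48)])
```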
20140310718 | METHOD, COMPUTER PROGRAM AND DEVICE FOR ALLOCATING COMPUTER RESOURCES OF A CLUSTER FOR EXECUTING A TASK SUBMITTED TO SAID CLUSTER - A method and device for allocating computer resources of a cluster for carrying out at least one job controlled by the cluster is disclosed. In one aspect, the method includes determining the placement of the job from physical features of the job and from physical features and availability of the computer resources of at least one processing area of the cluster. The method further includes receiving energy state features of the computer resources of at least the processing area; determining a recommended placement of the at least one job by correlating the physical features of the job, the physical features, availability and energy state of the computer resources on the basis of predetermined rules; and deducing, from the predetermined recommended placement, a recommended allocation list of the computer resources for carrying out the job in the cluster. | 10-16-2014 |
20140317631 | RESERVATION SCHEDULER FOR REAL-TIME OPERATING SYSTEMS IN WIRELESS SENSOR NETWORKS - A method of scheduling tasks for a Real-Time Operating System (RTOS) in a low-power, wireless, mesh network may include receiving, at a scheduler for the RTOS, a plurality of tasks to schedule for execution by one or more processors. The plurality of tasks may include a first task; the first task may be associated with an expected execution interval; and the expected execution interval may indicate an expected length of time for the one or more processors to execute the first task. The method may also include scheduling the plurality of tasks for execution by the one or more processors. The first task may be scheduled using the expected execution time such that the first task is executed without being interrupted by others of the plurality of tasks. | 10-23-2014 |
20140317632 | CONTROLLING TASKS PERFORMED BY A COMPUTING SYSTEM - A graph-based program specification specifies at least a partial ordering among a plurality of tasks represented by its nodes. Executing a specified program includes: executing a first subroutine corresponding to a first task, including a first task section for performing the first task; storing state information indicating a state of the first task selected from a set of possible states that includes: a pending state in which the first task section is waiting to perform the first task, and a suppressed state in which the first task section has been prevented from performing the first task; and executing a second subroutine corresponding to a second task, including a second task section for performing the second task, and a control section that controls execution of the second task section based at least in part on the state of the first task indicated by the stored state information. | 10-23-2014 |
20140317633 | Virtualizing A Processor Time Counter - In one embodiment, a processor includes at least one execution unit to execute instructions, and a logic to obtain a value of a virtual time counter based on a scale factor that corresponds to a ratio of a first frequency of a first platform to a second frequency of a second platform that includes the processor. The processor is to execute guest software that is migrated from the first platform to the second platform using the value of the virtual time counter obtained by the logic. Other embodiments are described and claimed. | 10-23-2014 |
20140317634 | INFORMATION PROCESSING DEVICE - In a case where a procedure in which a process generation step of allocating a resource necessary for application execution is carried out and then an application execution screen is displayed in a display panel is an application execution procedure, an application execution processing section ( | 10-23-2014 |
20140325519 | VARIABLE WAIT TIME IN AN ASYNCHRONOUS CALL-BACK SYSTEM - A method includes a workload management (WLM) server that receives a first CHECK WORKLOAD command for a workload in a queue of the WLM server. It may be determined whether the workload is ready to run on a WLM client. If the workload is not ready to run, a wait time for the workload is dynamically estimated with the WLM server. The wait time is sent to the WLM client. If the workload is ready to run, then a response is sent to the WLM client that the workload is ready to run. | 10-30-2014 |
20140331233 | TASK DISTRIBUTION METHOD AND SYSTEM - Systems and methods for task distribution are provided. A total number of a computing system's available processing units is defined, where the total number of available processing units includes a set of regular processing units available for executing tasks and a set of processing units that constitutes a reserve pool. Tasks are assigned to processing units. The number of processing units assigned to the next task in the queue is no more than the total number of processing units available at the time, multiplied by an availability ratio. Iterative assignment of processing units to tasks according to the method described is performed as long as there are idle processing units available for task execution; when no more processing units are available, the processing units from the reserve pool are assigned. As a result, the method allows processing units to be available for allocation to a new incoming task at any time. | 11-06-2014 |
20140331234 | Task-Based Performance Resource Management of Computer Systems - Execution of a plurality of tasks by a processor system is monitored. Based on this monitoring, tasks requiring adjustment of performance resources are identified by calculating at least one of a progress error or a progress limit error for each task. Thereafter, performance resources of the processor system allocated to each identified task are adjusted. Such adjustment can comprise: adjusting a clock rate of at least one processor in the processor system executing the task, adjusting an amount of cache and/or buffers to be utilized by the task, and/or adjusting an amount of input/output (I/O) bandwidth to be utilized by the task. Related systems, apparatus, methods and articles are also described. | 11-06-2014 |
20140337851 | Migrating Processes Operating On One Platform To Another Platform In A Multi-Platform System - Embodiments of the claimed subject matter are directed to methods and a system that allows the optimization of processes operating on a multi-platform system (such as a mainframe) by migrating certain processes operating on one platform to another platform in the system. In one embodiment, optimization is performed by evaluating the processes executing in a partition operating under a proprietary operating system, determining a collection of processes from the processes to be migrated, calculating a cost of migration for migrating the collection of processes, prioritizing the collection of processes in an order of migration and incrementally migrating the processes according to the order of migration to another partition in the mainframe executing a lower cost (e.g., open-source) operating system. | 11-13-2014 |
20140337852 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, PROGRAM, AND INFORMATION STORAGE MEDIUM - In response to a selection of a program, a board image display control section sets program related information associated with the selected program in a displayable state. An execution start managing section starts the program in response to reception of a request to start the program, the program related information associated with the program being set in the displayable state. A stop and end managing section ends an already started program when a given condition is satisfied at a time of starting the program by the execution start managing section. A setting of program related information associated with the ended program is maintained in a displayable state even after the program is ended by the stop and end managing section. | 11-13-2014 |
20140344819 | SYSTEM AND METHOD FOR SELECTIVE TIMER COALESCING - A method and apparatus of a device that coalesces the execution of several timers by scheduling the timers using a scheduling window is described. The device determines a scheduling window for each of several timers. The device selects a coalesced execution time that is within the scheduling window of the timers. The device coalesces the execution of the timers by scheduling the timers to execute at the coalesced execution time. The device can further coalesce multiple timers by opportunistic execution of the timers. In response to a detection of an opportunistic execution trigger event, the device receives multiple timers. The device selects a subset of the timers to execute based on an initial execution time and a latency time for each of the timers. The device schedules each of the subset of timers to execute during or before the opportunistic execution trigger event. | 11-20-2014 |
20140344820 | SYSTEM AND METHOD FOR SELECTIVE TIMER RATE LIMITING - A method and apparatus of a device that rate-limits the execution of a timer is described. The device receives a timer that includes an initial execution time and a timer priority. If the timer priority is low, the device rate-limits the execution of the timer based on a suppression period associated with the timer priority. In order to rate-limit the execution of the timer, the device determines the suppression period based on the timer priority and schedules the timer to execute at the end of the suppression period. The device further schedules the timer to execute at the initial execution time when the timer priority is high. | 11-20-2014 |
20140344821 | TECHNIQUES FOR SHARING PRIORITIES BETWEEN STREAMS OF WORK AND DYNAMIC PARALLELISM - One embodiment sets forth a method for assigning priorities to kernels launched by a software application and executed within a stream of work on a parallel processing subsystem that supports dynamic parallelism. First, the software application assigns a maximum nesting depth for dynamic parallelism. The software application then assigns a stream priority to a stream. These assignments cause a driver to map the stream priority to a device priority and, subsequently, associate the device priority with the stream. As part of the mapping, the driver ensures that each device priority is at least the maximum nesting depth higher than the device priorities associated with any lower priority streams. Subsequently, the driver launches any kernel included in the stream with the device priority associated with the stream. Advantageously, by strategically assigning the maximum nesting depth and prioritizing streams, an application developer may increase the overall processing efficiency of the software application. | 11-20-2014 |
20140344822 | TECHNIQUES FOR ASSIGNING PRIORITIES TO STREAMS OF WORK - One embodiment sets forth a method for assigning priorities to kernels launched by a software application and executed within a stream of work on a parallel processing subsystem. First, the software application assigns a desired priority to a stream using a call included in the API. The API receives this call and passes it to a driver. The driver maps the desired priority to an appropriate device priority associated with the parallel processing subsystem. Subsequently, if the software application launches a particular kernel within the stream, then the driver assigns the device priority associated with the stream to the kernel before adding the kernel to the stream for execution on the parallel processing subsystem. Advantageously, by assigning priorities to streams and, subsequently, strategically launching kernels within the prioritized streams, an application developer may fine-tune the software application to increase the overall processing efficiency of the software application. | 11-20-2014 |
20140344823 | INTERRUPTION OF CHIP COMPONENT MANAGING TASKS - Embodiments include an apparatus comprising a processor and a computer readable storage medium having computer usable program code. The computer usable program code can be configured to determine whether a priority of a requested task is higher than a priority of a currently executing task. The computer usable program code can be further configured to determine whether a value indicates that the currently executing task can be interrupted. The computer usable program code can be configured to trigger execution of the requested task on the processor, if the value indicates that the currently executing task can be interrupted. The computer usable program code can be further configured to wait for lapse of a time period and interrupt the currently executing task upon detection of lapse of the time period or detection of a change to the value, if the value indicates that the currently executing task cannot be interrupted. | 11-20-2014 |
20140344824 | INTERRUPTION OF CHIP COMPONENT MANAGING TASKS - Embodiments include receiving, at a microcontroller of a chip, a request to execute a first task having a first priority. Embodiments further include determining that a second task having a second priority is currently executing. Embodiments further include determining that the first priority is higher than the second priority. Embodiments further include determining whether a value in a register indicates that the second task can be interrupted. If it is determined that the second task can be interrupted, embodiments further include triggering execution of the first task. If it is determined that the second task cannot be interrupted, embodiments further include waiting for lapse of a time period since receipt of the request to execute the first task, and interrupting the second task upon detecting lapse of the time period, or detecting, prior to the lapse of the time period, that the second task can be interrupted. | 11-20-2014 |
20140344825 | TASK ALLOCATION OPTIMIZING SYSTEM, TASK ALLOCATION OPTIMIZING METHOD AND TASK ALLOCATION OPTIMIZING PROGRAM - Provided is a task allocation optimizing system that, for a development target system which has a plurality of states and which is provided with multi-cores, makes an allocation of tasks to the cores such that a performance of the target system does not significantly degrade in a specific one of the states. | 11-20-2014 |
20140351819 | MULTIPROCESSOR SCHEDULING POLICY - A method of determining a multi-agent schedule includes defining a well-formed, non-preemptive task set that includes a plurality of tasks, with each task having at least one subtask. Each subtask is associated with at least one resource required for performing that subtask. In accordance with the method, an allocation, which assigns each task in the task set to an agent, is received and a determination is made, based on the task set and the allocation, as to whether a subtask in the task set is schedulable at a specific time. A system for implementing the method is also provided. | 11-27-2014 |
20140351820 | APPARATUS AND METHOD FOR MANAGING STREAM PROCESSING TASKS - An apparatus and method for managing stream processing tasks are disclosed. The apparatus includes a task management unit and a task execution unit. The task management unit controls and manages the execution of assigned tasks. The task execution unit executes the tasks in response to a request from the task management unit, collects a memory load state and task execution frequency characteristics based on the execution of the tasks, detects low-frequency tasks based on the execution frequency characteristics if it is determined that a shortage of memory has occurred based on the memory load state, assigns rearrangement priorities to the low-frequency tasks, and rearranges the tasks based on the assigned rearrangement priorities. | 11-27-2014 |
20140359632 | EFFICIENT PRIORITY-AWARE THREAD SCHEDULING - A priority-based scheduling and execution of threads may enable the completion of higher-priority tasks above lower-priority tasks. Occasionally, a high-priority thread may request a resource that has already been reserved by a lower-priority thread, and the higher-priority thread may be blocked until the lower-priority thread relinquishes the reservation. Such prioritization may be acceptable if the lower-priority thread is able to execute comparatively unimpeded, but in some scenarios, the lower-priority thread may execute at a lower priority than a third thread that also has a lower priority than the high-priority thread. In this scenario, the third thread is effectively but incorrectly prioritized above the high-priority thread. Instead, upon detecting this scenario, the device may temporarily elevate the priority of the lower-priority thread over the priority of the third thread until the lower-priority thread relinquishes the resource, thereby reducing the waiting period of the high-priority thread for the requested resource. | 12-04-2014 |
20140366032 | Prioritising Event Processing Based on System Workload - Event processing is prioritized based on system workload. A time constraint attribute is defined in an event rule. The event rule uses one or more events. An event processing system is monitored to determine when the system is under a predefined level of stress. If the system is determined to be under the predefined level of stress, the time constraint attribute in the event rule is used to establish when the processing of a received event used in an event rule must be carried out. | 12-11-2014 |
20140373021 | Assigning and Scheduling Threads for Multiple Prioritized Queues - An operating system provides a pool of worker threads servicing multiple queues of requests at different priority levels. A concurrency controller limits the number of currently executing threads. The system tracks the number of currently executing threads above each priority level, and preempts operations of lower priority worker threads in favor of higher priority worker threads. A system can have multiple pools of worker threads, with each pool having its own priority queues and concurrency controller. A thread also can change its priority mid-operation. If a thread becomes lower priority and is currently active, then steps are taken to ensure priority inversion does not occur. In particular, the current thread for the now lower priority item can be preempted by a thread for a higher priority item and the preempted item is placed in the lower priority queue. | 12-18-2014 |
20140373022 | METHOD AND APPARATUS FOR EFFICIENT SCHEDULING FOR ASYMMETRICAL EXECUTION UNITS - A method for performing instruction scheduling in an out-of-order microprocessor pipeline is disclosed. The method comprises selecting a first set of instructions to dispatch from a scheduler to an execution module, wherein the execution module comprises two types of execution units. The first type of execution unit executes both a first and a second type of instruction and the second type of execution unit executes only the second type. Next, the method comprises selecting a second set of instructions to dispatch, which is a subset of the first set and comprises only instructions of the second type. Next, the method comprises determining a third set of instructions, which comprises instructions not selected as part of the second set. Finally, the method comprises dispatching the second set for execution using the second type of execution unit and dispatching the third set for execution using the first type of execution unit. | 12-18-2014 |
20140373023 | EXCLUSIVE CONTROL REQUEST ALLOCATION METHOD AND SYSTEM - A management server specifies processes that make exclusive control requests of files in a predetermined time slot, based on an execution schedule of a plurality of processes. Then, the management server specifies files that are the subjects of exclusive control in the predetermined time slot, based on utilization file information indicating files that are used by the respective processes. Then, the management server determines a plurality of file management servers as destinations of exclusive control requests of the respective specified files such that the number of exclusive control requests to be transmitted in the predetermined time slot to each of the file management servers, which is configured to perform exclusive control of a file, is not greater than a predetermined number of exclusive control requests. | 12-18-2014 |
20140380327 | DEVICE AND METHOD FOR SYNCHRONIZING TASKS EXECUTED IN PARALLEL ON A PLATFORM COMPRISING SEVERAL CALCULATION UNITS - A device and method for synchronizing tasks executed in parallel on a platform comprising several computation units. The tasks are apt to be preempted by the operating system of the platform, and the device comprises at least one register and one recording module installed in the form of circuits on said platform, said recording module being suitable for storing a relationship between a condition to be satisfied regarding the value recorded by one of said registers and one or more computation tasks. The device comprises a dynamic allocation module installed in the form of circuits on the platform and configured to choose a computation unit from among the computation units of the platform when said condition is fulfilled, and to launch, on the chosen computation unit, the execution of a software function for finding the tasks on standby awaiting the fulfillment of the condition and notifying said tasks. | 12-25-2014 |
20140380328 | SOFTWARE MANAGEMENT SYSTEM AND COMPUTER SYSTEM - A computer system includes: a physical computer including plural physical processors, a peripheral device connected to the plural physical processors, and a memory connected to the plural physical processors; and a management computer connected to the physical computer. The physical computer includes plural physical processor environments on each of which a virtual computer can be built, and the management computer includes an environment table indicating correspondence between plural physical processor environments each of which has the physical processor and on each of which a virtual computer can be built and an executable software program in each of the physical processor environments. When a specific software program is executed in the physical computer, a physical processor environment corresponding to a software program to be executed is selected from the plural physical processor environments by the environment table, and a virtual computer is built on the selected physical processor environment. | 12-25-2014 |
20150026692 | SYSTEMS AND METHODS FOR QUERY QUEUE OPTIMIZATION - A computer-implemented method for optimizing a queue of queries for database efficiency is implemented by a controller computing device coupled to a memory device. The method includes receiving a plurality of database queries at the computing device from at least one host, evaluating the plurality of database queries to determine a resource impact associated with each database query of the plurality of database queries, prioritizing the plurality of database queries based upon a set of prioritization factors and the resource impact associated with each database query, and submitting the prioritized plurality of database queries to a database system for execution. The database system executes the plurality of database queries in order of priority. | 01-22-2015 |
20150026693 | INFORMATION PROCESSING APPARATUS AND JOB SCHEDULING METHOD - An information processing apparatus includes a storage unit and a processing unit. The storage unit is configured to store therein a first execution time and a second execution time longer than the first execution time. The first execution time is a time expected to be taken to execute a first job included in a first job group. The processing unit is configured to determine in which time period an execution start time of the first job is included. The execution start time is a time at which execution of the first job is to be started. The processing unit is configured to select, as a predicted execution time of the first job, one of the first execution time and the second execution time based on a result of the determination. The processing unit is configured to perform scheduling of the first job group based on the predicted execution time. | 01-22-2015 |
20150026694 | METHOD OF PROCESSING INFORMATION, STORAGE MEDIUM, AND INFORMATION PROCESSING APPARATUS - A method of processing information includes receiving a notification indicating completion of a garbage collection processing; dividing a time period of the garbage collection processing into a plurality of intervals; calculating, for each of the plurality of intervals, an interval fill-rate by calculating a sum total of a processing time allocated to each of one or more threads, calculating a quotient by dividing the sum total by a smaller one of a number of threads and a number of cores, and dividing the quotient by an execution time; calculating an entire fill-rate by dividing, by an execution time of the entire garbage collection processing, a sum total of the product of the interval fill-rate and the execution time of the interval; and lowering a priority of a second process below a priority of a first process, when the entire fill-rate is equal to or less than a predetermined value. | 01-22-2015 |
20150026695 | SYSTEM AND METHOD TO CONTROL HEAT DISSIPATION THROUGH SERVICE LEVEL ANALYSIS - The system and method generally relate to reducing heat dissipated within a data center, and more particularly, to a system and method for reducing heat dissipated within a data center through service level agreement analysis, and resultant reprioritization of jobs to maximize energy efficiency. A computer implemented method includes performing a service level agreement (SLA) analysis for one or more currently processing or scheduled processing jobs of a data center using a processor of a computer device. Additionally, the method includes identifying one or more candidate processing jobs for a schedule modification from amongst the one or more currently processing or scheduled processing jobs using the processor of the computer device. Further, the method includes performing the schedule modification for at least one of the one or more candidate processing jobs using the processor of the computer device. | 01-22-2015 |
20150040133 | MULTIPLE STAGE WORKLOAD MANAGEMENT SYSTEM - Provided are techniques for multiple stage workload management. A staging queue and a run queue are provided. A workload is received. In response to determining that application resources are not available and that the workload has not been previously semi-started, the workload is added to the staging queue. In response to determining that the application resources are not available and that the workload has been semi-started, and, in response to determining that run resources are available, the workload is started. In response to determining that the application resources are not available and that the workload has been semi-started, and, in response to determining that the run resources are not available, adding the workload to the run queue. | 02-05-2015 |
20150040134 | DISTRIBUTED STORAGE NETWORK WITH COORDINATED PARTIAL TASK EXECUTION AND METHODS FOR USE THEREWITH - A method includes receiving a task for execution by a plurality of distributed storage and task execution units. A priority level is determined for the task. A plurality of coordinated partial task requests are generated and sent to the plurality of distributed storage and task execution units, wherein the plurality of coordinated partial task requests indicate a plurality of coordinated partial tasks and the priority level. A plurality of partial task results are received in response to performance of the plurality of coordinated partial tasks by the plurality of distributed storage and task execution units. A task result for the task is generated based on the plurality of partial task results. | 02-05-2015 |
20150058857 | Concurrent Program Execution Optimization - An architecture for load-balanced groups of multi-stage manycore processors shared dynamically among a set of software applications, with capabilities for destination task defined intra-application prioritization of inter-task communications (ITC), for architecture-based ITC performance isolation between the applications, as well as for prioritizing application task instances for execution on cores of manycore processors based at least in part on which of the task instances have available for them the input data, such as ITC data, that they need for executing. | 02-26-2015 |
20150058858 | DYNAMIC TASK PRIORITIZATION FOR IN-MEMORY DATABASES - The present invention provides methods and systems, including computer program products, implementing and using techniques for providing tasks of different classes with access to CPU time provided by worker threads of a database system. In particular, the invention relates to such a database-system-implemented method comprising the following steps: inserting the tasks to a queue of the database system; and executing the tasks inserted to the queue by worker threads of the database system according to their order in the queue; characterized in that the queue is a priority queue; and in that the method further comprises the following steps: assigning each class to a respective priority; and in that the step of inserting the tasks to the queue includes: associating each task with the respective priority assigned to its class. | 02-26-2015 |
20150067691 | SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR PRIORITIZED ACCESS FOR MULTITHREADED PROCESSING - A system, method, and computer program product are provided for providing prioritized access for multithreaded processing. The method includes the steps of allocating threads to process a workload and assigning a set of priority tokens to at least a portion of the threads. Access to a resource, by each one of the threads, is based on the priority token assigned to the thread and the threads are executed by a multithreaded processor to process the workload. | 03-05-2015 |
20150067692 | Thermal Prioritized Computing Application Scheduling - Implementations disclosed herein relate to thermal based prioritized computing application scheduling. For example, a processor may determine a prioritized computing application. The processor may schedule the prioritized computing application to transfer execution from a first processing unit to a second processing unit based on a thermal reserve energy associated with the second processing unit. | 03-05-2015 |
20150067693 | INFORMATION PROCESSING SYSTEM, JOB MANAGEMENT APPARATUS, RECORDING MEDIUM, AND METHOD - A job management apparatus includes a storage device configured to store a maximum power value when one or more calculation nodes have executed a first job, and a controller configured to detect at least one first job whose identification information matches the identification information of a second job to be scheduled, whose number of calculation nodes to use matches the number of calculation nodes of the second job, and whose difference in the number of calculation nodes from the second job is within a prescribed range, predict, as a second maximum power value of the second job, a first maximum power value of the detected first job, and schedule the second job such that the second maximum power value of the second job does not exceed a power consumption limit value set according to a time. | 03-05-2015 |
20150074670 | METHOD AND SYSTEM FOR DISTRIBUTED PROCESSING OF HTTP REQUESTS - The current document is directed to an interface and authorization service that allows users of a cloud-director management subsystem of distributed, multi-tenant, virtual data centers to extend the services and functionalities provided by the cloud-director management subsystem. A cloud application programming interface (“API”) entrypoint represents a request/response RESTful interface to services and functionalities provided by the cloud-director management subsystem as well as to service extensions provided by users. The API entrypoint includes a service-extension interface and an authorization-service management interface. The cloud-director management subsystem provides the authorization service to service extensions that allow the service extensions to obtain, from the authorization service, an indication of whether or not a request directed to the service extension through the API entrypoint is authorized. Requests for service-extension URIs within the API entrypoint are processed by a cloud API entrypoint server that dispatches requests, in a predetermined order, to service-extension servers. | 03-12-2015 |
20150074671 | ANTICIPATORY WARM-UP OF CLUSTER RESOURCES FOR JOBS PROCESSED ON MULTIPLE CLUSTER NODES - Systems and methods are disclosed for reducing latency in processing data sets in a distributed fashion. A job-queue operable for queuing data-processing jobs run on multiple nodes in a cluster may be communicatively coupled to a job analyzer. The job analyzer may be operable to read the data-processing jobs and extract information characterizing those jobs in ways that facilitate identification of resources in the cluster serviceable to run the data-processing jobs and/or data to be processed during the running of those jobs. The job analyzer may also be coupled to a resource warmer operable to warm-up a portion of the cluster to be used to run a particular data-processing job prior to the running of the job. In some embodiments, mappers and/or reducers may be extracted from the jobs and converted into compute node identifiers and/or data units identifying blocks for processing, informing the warm-up operations of the resource warmer. | 03-12-2015 |
20150074672 | ASYNCHRONOUS SCHEDULING INFORMED BY JOB CHARACTERISTICS AND ANTICIPATORY PROVISIONING OF DATA FOR REAL-TIME, PARALLEL PROCESSING - Systems and methods are disclosed for scheduling jobs processed in a distributed fashion to realize unharnessed efficiencies latent in the characteristics of the jobs and distributed processing technologies. A job store may be communicatively coupled to a job analyzer. The job analyzer may be operable to read information characterizing a job to identify multiple data blocks to be processed during the job at multiple locations in a cluster of nodes. A scheduling module may use information about the multiple data blocks, their storage locations, their status with respect to being provisioned to processing logic, data blocks to be processed by other jobs, data blocks in cache that have been pre-fetched for a prior job, quality-of-service parameters, and/or job characteristics, such as job size, to schedule the job in relation to other jobs. | 03-12-2015 |
20150074673 | CONTROL APPARATUS, CONTROL SYSTEM AND CONTROL METHOD - According to certain embodiments, there is provided a control apparatus including a processor. The processor controls a first processing unit. In response to receiving an interrupt request for a first process related to the first processing unit, either from a program being executed by the processor or from hardware connected to the processor via a bus, the processor acquires determination information for estimating the time delay until execution of the first process starts, and determines whether or not to execute the first process based on the determination information. | 03-12-2015 |
20150074674 | APPARATUS AND METHOD FOR ADJUSTING PRIORITIES OF TASKS - An apparatus for adjusting priorities of tasks determines a task violating a real-time constraint using a profiling result of the real-time software and task details including a real-time constraint for each task, adjusts a priority of the task violating the real-time constraint or a higher candidate task close to the task violating the real-time constraint, and simulates execution of the real-time software depending on the adjusted priority. | 03-12-2015 |
20150074675 | METHOD AND SYSTEM FOR INSTRUCTION SCHEDULING - Aspects of the disclosure provide a method for instruction scheduling. The method includes receiving a sequence of instructions, identifying redundant flag-register based dependency of the instructions, and re-ordering the instructions without being restricted by the redundant flag-register based dependency. | 03-12-2015 |
20150074676 | TASK PROCESSING DEVICE - A plurality of tasks are processed simultaneously in a plurality of CPUs. A task control circuit is connected to the plurality of CPUs, and when executing a system call signal instruction, each CPU transmits a system call signal to the task control circuit. Upon receipt of a system call signal from a CPU 0, the task control circuit | 03-12-2015 |
20150082316 | System and Method for Efficient Utilization of Simulation Resources - The present invention relates to automation of flow assurance simulation workflow using a web-based batch simulation scheduler. Further, the present invention relates to a system for efficient utilization of simulation resources comprising a database server; a simulator; a batch simulation scheduler; a license monitor; a launcher; a wrapper; and a debugger. The batch simulation scheduler schedules simulations on computing resources with specific input files based on user-submitted information in the database according to user-specified priorities. The license monitor updates the database with the number of available licenses for each feature or module. The launcher running on computing resources monitors the database for instructions from the scheduler to launch the simulations. The debugger parses the input file(s) and verifies their syntax without using a simulation license. The system utilizes all available licenses to execute the jobs, thereby resulting in parallel execution of cases in the jobs. | 03-19-2015 |
20150089509 | DATA PROCESSING RESOURCE MANAGEMENT - In accordance with one aspect of the present description execution of a particular command by a data processor such as a storage controller, may include obtaining priority over a resource which is also associated with execution of another command, setting a timer for the duration of a dynamically set timeout period, and detecting a potential deadlock condition as a function of expiration of the dynamically set timeout period before execution of the particular command is completed. In one embodiment, the particular command releases priority over the resource upon detection of the potential deadlock condition, and then reobtains priority over the resource in a retry of the command. It is believed that such an arrangement can relieve a potential deadlock condition, allowing execution of one or more commands including the particular command to proceed. Other features and aspects may be realized, depending upon the particular application. | 03-26-2015 |
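The timeout-and-retry pattern in the abstract above can be sketched in a few lines of Python. This is a minimal illustration, not the patent's implementation: the function name, the use of `threading.Lock` as the contended resource, and the retry/back-off policy are all assumptions for the sketch.

```python
import threading
import time

def execute_with_deadlock_relief(resource, work, timeout, max_retries=3):
    """Sketch: obtain priority over a resource, run a command under a
    dynamically set timeout, and treat expiry of the timeout before
    completion as a potential deadlock -- release and retry."""
    for attempt in range(max_retries):
        # obtain priority over the resource, bounded by the timeout period
        if resource.acquire(timeout=timeout):
            try:
                return work()          # command completed before the timer expired
            finally:
                resource.release()     # release priority over the resource
        # timer expired first: potential deadlock detected; back off, then
        # reobtain priority over the resource in a retry of the command
        time.sleep(0.01 * (attempt + 1))
    raise TimeoutError("potential deadlock not relieved after retries")
```

Using a plain (non-reentrant) lock, a resource already held elsewhere makes the acquire time out, which models the deadlock-detection branch.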
20150089510 | DEVICE, SYSTEM, APPARATUS, METHOD AND PROGRAM PRODUCT FOR SCHEDULING - A scheduling device according to an embodiment may comprise a controller, a load calculator, and a resource calculator. The controller may be configured to obtain an execution history of one or more tasks operating on a virtual OS. The load calculator may be configured to calculate a first resource amount required by each task based on the execution history. The resource calculator may be configured to calculate a second resource amount to be assigned to the virtual OS based on the first resource amount calculated for the one or more tasks. | 03-26-2015 |
20150100965 | Method and Apparatus for Dynamic Resource Partition in Simultaneous Multi-Thread Microprocessor - A method includes, in one implementation, receiving a first set of instructions of a first thread, receiving a second set of instructions of a second thread, and allocating queues to the instructions from the first and second sets. During a time when the first and second threads are simultaneously being processed, a changeable number of queues can be allocated to the first thread based on factors such as the requirements or priorities of the first and/or second thread, while maintaining a minimum specified number of queues allocated to the first and/or second thread. When needed, one thread may be stalled so that at least the minimum number of queues remains reserved for another thread while attempting to satisfy thread-priority requests or queue-requirement requests. | 04-09-2015 |
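The partitioning rule described above — a guaranteed per-thread minimum plus a demand-driven split of the remainder — can be sketched for the two-thread case. A minimal sketch with hypothetical names; the abstract does not specify the actual allocation formula.

```python
def allocate_queues(total, min_per_thread, demand_a, demand_b):
    """Sketch: partition `total` queues between two simultaneously
    processed threads, guaranteeing each at least `min_per_thread`
    and splitting the spare queues by relative demand (or priority)."""
    assert total >= 2 * min_per_thread, "cannot satisfy both minimums"
    spare = total - 2 * min_per_thread          # queues beyond the reserved minimums
    demand = demand_a + demand_b
    # thread A's share of the spare queues, proportional to its demand
    extra_a = round(spare * demand_a / demand) if demand else spare // 2
    a = min_per_thread + extra_a
    b = total - a                               # thread B gets the rest
    return a, b
```

When one thread's demand spikes, its allocation grows while the other thread never drops below its reserved minimum — the stall described in the abstract would kick in only if even the minimum could not be held.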
20150100966 | ADJUSTING EXECUTION OF TASKS IN A DISPERSED STORAGE NETWORK - A method includes a set of execution units of a dispersed storage network (DSN) receiving sets of sub-task requests from a computing device and storing the sets of sub-task requests, where each execution unit stores a request of each of the sets of sub-task requests to produce a corresponding plurality of sub-task requests. The method continues with each execution unit generating sub-task estimation data and adjusting timing, sequencing, or processing of the corresponding plurality of sub-task requests based on the estimation data to produce a plurality of partial results, where, due to one or more difference factors from a list of difference factors, the execution units process pluralities of sub-task requests at different paces, where the list of difference factors includes differences in amounts of data to be processed per sub-task request, processing capabilities, memory storage capabilities, and networking capabilities. | 04-09-2015 |
20150100967 | RESOLVING DEPLOYMENT CONFLICTS IN HETEROGENEOUS ENVIRONMENTS - Techniques are disclosed for managing deployment conflicts between applications executing in one or more processing environments. A first application is executed in a first processing environment and responsive to a request to execute the first application. During execution of the first application, a determination is made to redeploy the first application for execution partially in time on a second processing environment providing a higher capability than the first processing environment in terms of at least a first resource type. A deployment conflict is resolved between the first application and at least a second application. | 04-09-2015 |
20150106819 | TASK SCHEDULING METHOD FOR PRIORITY-BASED REAL-TIME OPERATING SYSTEM IN MULTICORE ENVIRONMENT - Disclosed herein is a task scheduling method for a priority-based real-time operating system in a multicore environment, which solves problems occurring in real-time multicore task scheduling that employs a conventional decentralized scheme. In the task scheduling method, one or more scheduling algorithm candidates for sequential tasks are combined with one or more scheduling algorithm candidates for parallel tasks. The respective task scheduling algorithm candidates generated by the combining are simulated, and the performance of each candidate is evaluated against performance evaluation criteria. The task scheduling algorithm exhibiting the best performance is then selected from the evaluation results. | 04-16-2015 |
20150113537 | MANAGING CONTINUOUS PRIORITY WORKLOAD AVAILABILITY AND GENERAL WORKLOAD AVAILABILITY BETWEEN SITES AT UNLIMITED DISTANCES FOR PRODUCTS AND SERVICES - A system for providing reliable availability of a general workload and continuous availability of a priority workload over long distances may include a first computing site configured to execute a first instance associated with the priority workload, wherein the first instance is designated as an active instance, a second computing site configured to execute a second instance of the priority workload, wherein the second instance is designated as a standby instance, a third computing site configured to restart a third instance associated with the general workload, and a workload availability module configured to synchronize a portion of data associated with the third instance with a corresponding portion of data associated with the second instance. | 04-23-2015 |
20150121386 | INDICATING STATUS, DUE DATE, AND URGENCY FOR A LISTED TASK BY A TASK TRACKING CONTROL - Disclosed herein are representative embodiments of tools and techniques for displaying one or more task tracking controls to indicate respective progress statuses, urgency stages, and due dates for respective listed tasks of a procedure. According to one exemplary technique, a task list including a listed task is received by software. A determination is made that the listed task is in an incomplete status, an urgency stage is determined for the listed task, and the listed task is associated with a due date. Based on the determination that the listed task is incomplete, a task tracking control is output for display; the control indicates the incomplete status and includes a visual indication of the due date and a visual indication of the urgency stage for the listed task. | 04-30-2015 |
20150121387 | TASK SCHEDULING METHOD FOR DISPATCHING TASKS BASED ON COMPUTING POWER OF DIFFERENT PROCESSOR CORES IN HETEROGENEOUS MULTI-CORE SYSTEM AND RELATED NON-TRANSITORY COMPUTER READABLE MEDIUM - A task scheduling method is applied to a heterogeneous multi-core system. The heterogeneous multi-core system has at least one first processor core and at least one second processor core. The task scheduling method includes: referring to task priorities of tasks of the heterogeneous processor cores to identify at least one first task of the tasks that belongs to a first priority task group, wherein each first task belonging to the first priority task group has a task priority not lower than task priorities of other tasks not belonging to the first priority task group; and dispatching at least one of the at least one first task to at least one run queue of at least one of the at least one first processor core. | 04-30-2015 |
20150121388 | TASK SCHEDULING METHOD FOR DISPATCHING TASKS BASED ON COMPUTING POWER OF DIFFERENT PROCESSOR CORES IN HETEROGENEOUS MULTI-CORE PROCESSOR SYSTEM AND RELATED NON-TRANSITORY COMPUTER READABLE MEDIUM - A task scheduling method is applied to a heterogeneous multi-core processor system. The heterogeneous multi-core processor system has at least one first processor core and at least one second processor core. The task scheduling method includes: referring to task priorities of tasks of the heterogeneous processor cores to identify at least one first task of the tasks that belongs to a first priority task group, wherein each first task belonging to the first priority task group has a task priority not lower than task priorities of other tasks not belonging to the first priority task group; and dispatching at least one of the at least one first task to at least one run queue of at least one of the at least one first processor core. | 04-30-2015 |
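The core selection step in the two abstracts above — identify the tasks whose priority is not lower than any task outside the group, then dispatch them to the run queues of the first (e.g. higher-powered) processor cores — can be sketched briefly. The names and the round-robin dispatch are illustrative assumptions, not the patents' method.

```python
def dispatch_first_priority_group(tasks, first_core_queues):
    """Sketch: `tasks` is a list of (name, priority) pairs. The first
    priority task group contains every task whose priority is not lower
    than that of any task outside the group, i.e. the top-priority tasks.
    These are dispatched (round-robin here) to the run queues of the
    first processor cores."""
    top = max(priority for _, priority in tasks)
    first_group = [name for name, priority in tasks if priority == top]
    for i, task in enumerate(first_group):
        first_core_queues[i % len(first_core_queues)].append(task)
    return first_group
```

Tasks outside the first priority group would be left for the second (e.g. lower-powered) cores under the same scheme.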
20150121389 | PROCESSING TECHNIQUES FOR SERVERS HANDLING CLIENT/SERVER TRAFFIC AND COMMUNICATIONS - The present invention relates to a system for handling client/server traffic and communications pertaining to the delivery of hypertext information to a client. The system includes a central server which processes a request for a web page from a client. The central server is in communication with a number of processing/storage entities, such as an annotation means, a cache, and a number of servers which provide identification information. The system operates by receiving a request for a web page from a client. The cache is then examined to determine whether information for the requested web page is available. If such information is available, it is forwarded promptly to the client for display. Otherwise, the central server retrieves the relevant information for the requested web page from the pertinent server. The relevant information is then processed by the annotation means to generate additional relevant computer information that can be incorporated to create an annotated version of the requested web page which includes additional displayable hypertext information. The central server then relays the additional relevant computer information to the client so as to allow the annotated version of the requested web page to be displayed. In addition, the central server can update the cache with information from the annotated version. The central server can also interact with different servers to collect and maintain statistical usage information. In handling its communications with various processing/storage entities, the operating system running behind the central server utilizes a pool of persistent threads and an independent task queue to improve the efficiency of the central server. A task needs to have a thread assigned to it before the task can be executed. The pool of threads is continually maintained and monitored by the operating system. 
Whenever a thread is available, the operating system identifies the next executable task in the task queue and assigns the available thread to such task so as to allow it to be executed. Upon conclusion of the task execution, the assigned thread is released back into the thread pool. An additional I/O queue for specifically handling input/output tasks can also be used to further improve the efficiency of the central server. | 04-30-2015 |
20150135183 | METHOD AND SYSTEM OF A HIERARCHICAL TASK SCHEDULER FOR A MULTI-THREAD SYSTEM - A method for scheduling tasks from a program executed by a multi-processor core system is disclosed. The method includes a scheduler that groups a plurality of tasks, each having an assigned priority, by priority in a task group. The task group is assembled with other task groups having identical priorities in a task group queue. A hierarchy of task group queues is established based on priority levels of the assigned tasks. Task groups are assigned to one of a plurality of worker threads based on the hierarchy of task group queues. Each of the worker threads is associated with a processor in the multi-processor system. The tasks of the task groups are executed via the worker threads according to the order in the hierarchy. | 05-14-2015 |
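The grouping step of the hierarchical scheduler described above — tasks sharing a priority form a task group, and the groups are ordered into a hierarchy that worker threads drain from highest to lowest — can be sketched as follows. A minimal sketch with hypothetical names; the worker-thread dispatch itself is omitted.

```python
from collections import defaultdict

def build_hierarchy(tasks):
    """Sketch: group (name, priority) tasks by priority into task groups,
    then order the groups into a hierarchy of task group queues,
    highest priority first. Worker threads would execute the groups
    in this order."""
    groups = defaultdict(list)
    for name, priority in tasks:
        groups[priority].append(name)       # tasks with equal priority share a group
    # hierarchy of task group queues, highest priority level first
    return [groups[p] for p in sorted(groups, reverse=True)]
```

Each entry in the returned list corresponds to one task group queue; assigning each group to a worker thread bound to a processor completes the scheme in the abstract.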
20150135184 | TIME AND SPACE-DETERMINISTIC TASK SCHEDULING APPARATUS AND METHOD USING MULTI-DIMENSIONAL SCHEME - A time and space-deterministic task scheduling apparatus and method using a multi-dimensional scheme are disclosed. The time and space-deterministic task scheduling apparatus includes a preparation list generation unit and a task insertion unit. The preparation list generation unit generates a preparation list, including a preparation table having an array structure configured to have each bit formed of a binary number indicative of a priority of a task, and also including a preparation group cluster configured to include a plurality of preparation groups, each including bits corresponding to the respective binary numbers of the preparation table, and to have an upper and lower dimension relationship between the plurality of preparation groups. The task insertion unit performs bit masking on the preparation group cluster and the preparation table corresponding to a task P having a specific priority and thus inserts the task into the preparation group cluster and the preparation table. | 05-14-2015 |
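The bitmap-based preparation table described above lends itself to a compact sketch: inserting a task with priority p sets bit p via bit masking, and the next task is found from the set bits. This is an illustrative sketch, not the patent's multi-dimensional scheme; the assumption that a lower bit number means higher priority is mine, not the abstract's.

```python
class ReadyBitmap:
    """Sketch of a bitmap preparation table: one bit per priority level,
    plus a per-priority FIFO of ready tasks."""

    def __init__(self):
        self.bits = 0          # bit p set <=> a task of priority p is ready
        self.queues = {}       # priority -> list of ready tasks

    def insert(self, task, priority):
        # bit masking marks the priority level as occupied
        self.bits |= (1 << priority)
        self.queues.setdefault(priority, []).append(task)

    def pop_highest(self):
        if not self.bits:
            return None
        # isolate the lowest set bit (assumed highest priority) in O(1)
        p = (self.bits & -self.bits).bit_length() - 1
        queue = self.queues[p]
        task = queue.pop(0)
        if not queue:                       # last task at this level:
            self.bits &= ~(1 << p)          # clear its bit
        return task
```

A real RTOS would typically use a hardware find-first-set instruction for the `bits & -bits` step; the deterministic O(1) lookup is the point of the bitmap layout.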
20150135185 | DYNAMIC SCALING OF A CLUSTER OF COMPUTING NODES - Techniques are described for managing distributed execution of programs, including by dynamically scaling a cluster of multiple computing nodes performing ongoing distributed execution of a program, such as to increase and/or decrease computing node quantity. An architecture may be used that has core nodes that each participate in a distributed storage system for the distributed program execution, and that has one or more other auxiliary nodes that do not participate in the distributed storage system. Furthermore, as part of performing the dynamic scaling of a cluster, computing nodes that are only temporarily available may be selected and used, such as computing nodes that might be removed from the cluster during the ongoing program execution to be put to other uses and that may also be available for a different fee (e.g., a lower fee) than other computing nodes that are available throughout the ongoing use of the cluster. | 05-14-2015 |
20150143378 | MULTI-THREAD PROCESSING APPARATUS AND METHOD FOR SEQUENTIALLY PROCESSING THREADS - Provided are a multi-thread processing apparatus and method for sequentially processing threads. The multi-thread processing method includes scheduling, at a processor, one of a plurality of thread groups allocated by a job distributor, determining whether the thread group has been initialized based on an examination of an uninitialized flag of the scheduled thread group, generating a thread group descriptor for the scheduled thread group and initializing the thread group based on the determination of whether the thread group has been initialized, and initializing a thread descriptor based on a determination of whether initialization is needed and sequentially executing each thread in the scheduled thread group. | 05-21-2015 |
20150143379 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, RECORDING MEDIUM AND INFORMATION PROCESSING SYSTEM - There is provided an information processing apparatus including a receiver configured to receive a request to perform processing related to a task, from a first information processing apparatus which functions as a client on a network; a scheduler configured to, when a rank of a priority of the scheduler of the information processing apparatus among information processing apparatuses on the network is a first predetermined rank or higher, assign the task to one or a plurality of second information processing apparatuses which function as nodes on the network; and a transmitter configured to transmit a request to execute processing related to the task assigned to the one or the plurality of second information processing apparatuses. | 05-21-2015 |
20150150015 | ELIMINATING EXECUTION OF JOBS-BASED OPERATIONAL COSTS OF RELATED REPORTS - Optimizing operational costs in a computing environment includes identifying high-cost jobs that are executed to generate one or more reports in the computing environment, identifying one or more reports the generation of which is dependent on the execution of the high-cost jobs, and culling at least a first job from among the high-cost jobs, in response to determining that a benefit achieved from the reports that depend on the first job does not justify costs associated with generating the reports. | 05-28-2015 |
20150150016 | METHOD AND APPARATUS FOR A USER-DRIVEN PRIORITY BASED JOB SCHEDULING IN A DATA PROCESSING PLATFORM - A method, non-transitory computer readable medium, and apparatus for configuring a scheduling a job request in a data processing platform are disclosed. The method receives a new job request having a priority selected by a user, submits the new job request to an online job queue comprising a plurality of jobs, wherein each one of the plurality of jobs comprises a respective priority selected by a respective user and schedules the new job request and the plurality of jobs in the online job queue to one or more available worker nodes in a unit time slot based upon a comparison of the priority of the new job and the respective priority of the plurality of jobs in the online job queue, wherein the scheduling algorithm is based on one of: blocks having a variable size and a static processing time or blocks having a static size and a variable processing time. | 05-28-2015 |
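The per-slot scheduling comparison described above can be sketched with a priority heap: for each unit time slot, the user-selected priorities of all queued jobs are compared and the highest-priority jobs go to the available worker nodes. A minimal sketch under assumed names; the block-size/processing-time variants of the abstract's algorithm are not modeled.

```python
import heapq

def schedule_slot(job_queue, num_workers):
    """Sketch: `job_queue` is a list of (job, user_priority) pairs.
    Return the jobs dispatched to the available worker nodes for this
    unit time slot, chosen by comparing user-selected priorities
    (higher number = higher priority, ties broken by arrival order)."""
    heap = [(-priority, i, job) for i, (job, priority) in enumerate(job_queue)]
    heapq.heapify(heap)                      # max-priority first via negation
    slot = []
    while heap and len(slot) < num_workers:
        _, _, job = heapq.heappop(heap)
        slot.append(job)
    return slot
```

Jobs not dispatched in this slot would remain in the online job queue and compete again, alongside any newly submitted jobs, in the next slot.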
20150150017 | OPTIMIZATION OF MAP-REDUCE SHUFFLE PERFORMANCE THROUGH SHUFFLER I/O PIPELINE ACTIONS AND PLANNING - A shuffler receives information associated with partition segments of map task outputs and a pipeline policy for a job running on a computing device. The shuffler transmits to an operating system of the computing device a request to lock partition segments of the map task outputs and transmits an advisement to keep or load partition segments of map task outputs in the memory of the computing device. The shuffler creates a pipeline based on the pipeline policy, wherein the pipeline includes partition segments locked in the memory and partition segments advised to keep or load in the memory, of the computing device for the job, and the shuffler selects the partition segments locked in the memory, followed by partition segments advised to keep or load in the memory, as a preferential order of partition segments to shuffle. | 05-28-2015 |
20150150018 | OPTIMIZATION OF MAP-REDUCE SHUFFLE PERFORMANCE THROUGH SHUFFLER I/O PIPELINE ACTIONS AND PLANNING - A shuffler receives information associated with partition segments of map task outputs and a pipeline policy for a job running on a computing device. The shuffler transmits to an operating system of the computing device a request to lock partition segments of the map task outputs and transmits an advisement to keep or load partition segments of map task outputs in the memory of the computing device. The shuffler creates a pipeline based on the pipeline policy, wherein the pipeline includes partition segments locked in the memory and partition segments advised to keep or load in the memory, of the computing device for the job, and the shuffler selects the partition segments locked in the memory, followed by partition segments advised to keep or load in the memory, as a preferential order of partition segments to shuffle. | 05-28-2015 |
20150293783 | SCHEDULING IDENTITY MANAGER RECONCILIATION TO EXECUTE AT AN OPTIMAL TIME - Provided are techniques for scheduling an Identity Manager reconciliation at an optimal time. The techniques include partitioning a security identity management handling task into a first sub-task and a second sub-task; assigning to the first sub-task a first priority, based upon a first projected number of accounts affected by the first sub-task, a first attribute criteria, and a first expected completion time, and a corresponding first scheduler index value based upon the first priority; assigning to the second sub-task a second priority, based upon a second projected number of accounts affected by the second sub-task, a second attribute criteria, and a second expected completion time, and a second scheduler index value based upon the second priority; and scheduling the first sub-task prior to the second sub-task in accordance with a prioritization algorithm in which a first weighted combination of the first priority and the first expected completion time is greater than a second weighted combination of the second priority and the second expected completion time. | 10-15-2015 |
20150293793 | METHOD AND APPARATUS FOR PROVIDING A PREEMPTIVE TASK SCHEDULING SCHEME IN A REAL TIME OPERATING SYSTEM - A method and apparatuses are provided for preemptive task scheduling in a Real Time Operating System (RTOS). A two-level priority is assigned to each task that is created. The two-level priority includes a kernel priority and a user-defined priority. A priority bitmap corresponding to the kernel priority is created. A priority bit in the priority bitmap is enabled. The priority bit indicates a status of a respective task. | 10-15-2015 |
20150301858 | MULTIPROCESSORS SYSTEMS AND PROCESSES SCHEDULING METHODS THEREOF - Scheduling methods for a multi-core processor system including multiple processors are provided. First, a process to be executed is chosen from a ready queue and analyzed to obtain a power consumption value of the process. Next, an idle processor is chosen from the processors, and the total power consumption of the system when the process is executed in the idle processor is estimated to obtain a first prediction result based on the obtained power consumption value. It is then determined whether to execute the process in the idle processor according to the first prediction result and a predetermined upper limit value. In some embodiments, the scheduling method may further provide preemption scheduling such that a process with high priority can be preferentially executed and processes can flexibly switch among different processor core clusters. | 10-22-2015 |
20150301859 | METHOD FOR SELECTING ONE OF SEVERAL QUEUES - A method for selecting one of several queues and for extracting one or more data segments from a selected queue for transmitting with the aid of an output interface includes: selecting the output interface by a first scheduler; selecting a number of queues by a second scheduler; selecting one queue from the number of queues by a third scheduler; and sending one or more data segments from the selected queue to the output interface for transmission. | 10-22-2015 |
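The three-stage selection chain described above — one scheduler picks the output interface, a second narrows to a set of queues, a third picks one queue, and segments are then sent — can be sketched generically. All three selector policies and the data layout are hypothetical; the abstract does not specify them.

```python
def select_and_send(interfaces, pick_interface, pick_queue_set, pick_queue,
                    n_segments=1):
    """Sketch: chain three schedulers, then move up to `n_segments` data
    segments from the selected queue to the selected output interface.
    Each interface is modeled as {"queues": [...], "tx": [...]}."""
    iface = pick_interface(interfaces)            # first scheduler: interface
    queue_set = pick_queue_set(iface["queues"])   # second scheduler: queue set
    queue = pick_queue(queue_set)                 # third scheduler: one queue
    segments = [queue.pop(0) for _ in range(min(n_segments, len(queue)))]
    iface["tx"].extend(segments)                  # hand segments to the interface
    return segments
```

Plugging in concrete policies (round-robin, strict priority, longest-queue-first, etc.) at each of the three stages recovers different overall scheduling behaviors from the same skeleton.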
20150301866 | ANALYSIS METHOD, ANALYSIS APPARATUS AND COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN ANALYSIS PROGRAM - Relating to services each including a plurality of processes having a plurality of hierarchies, service information is stored in which processes for each service are grouped in a predetermined hierarchy taking presence or absence of a common hierarchy into consideration. Then, based on log data and the service information relating to a plurality of services, a first decision process for deciding presence or absence of an abnormality relating to a process included in one or more services is performed. Further, a second decision process is performed for developing, where a process decided as an abnormal process is a grouped grouping process, the grouping process decided as an abnormal process to one or more processes in a lower hierarchy than the predetermined hierarchy based on the service information and deciding presence or absence of an abnormality relating to the one or more developed processes. | 10-22-2015 |
20150309843 | RESOURCE OPTIMIZATION METHOD AND APPARATUS - The present disclosure discloses a resource optimization method and apparatus. The method includes: detecting whether a currently started process is a process of a predetermined type; querying for suspendable processes among other currently running processes if it is detected that the currently started process is a process of the predetermined type; and suspending at least one process among the found suspendable processes. | 10-29-2015 |
20150324226 | DATA STORAGE RESOURCE ALLOCATION USING LOGICAL HOLDING AREAS TO RESERVE RESOURCES FOR HIGHER PRIORITY STORAGE OPERATION REQUESTS - A resource allocation system begins with an ordered plan for matching requests to resources that is sorted by priority. The resource allocation system optimizes the plan by determining those requests in the plan that will fail if performed. The resource allocation system removes or defers the determined requests. In addition, when a request that is performed fails, the resource allocation system may remove requests that require similar resources from the plan. Moreover, when resources are released by a request, the resource allocation system may place the resources in a temporary holding area until the resource allocation system returns to the top of the ordered plan, so that lower priority requests that are lower in the plan do not take resources that are needed by waiting higher priority requests higher in the plan. | 11-12-2015 |
20150324231 | OPPORTUNISTICALLY SCHEDULING AND ADJUSTING TIME SLICES - Computerized methods, computer systems, and computer-readable media for governing how virtual processors are scheduled to particular logical processors are provided. A scheduler is employed to balance a load imposed by virtual machines, each having a plurality of virtual processors, across various logical processors (comprising a physical machine) that are running threads in parallel. The threads are issued by the virtual processors and often cause spin waits that inefficiently consume capacity of the logical processors that are executing the threads. Upon detecting a spin-wait state of the logical processor(s), the scheduler will opportunistically grant time-slice extensions to virtual processors that are running a critical section of code, thus, mitigating performance loss on the front end. Also, the scheduler will mitigate performance loss on the back end by opportunistically de-scheduling then rescheduling a virtual machine in a spin-wait state to render the logical processor(s) available for other work in the interim. | 11-12-2015 |
20150331716 | USING QUEUES CORRESPONDING TO ATTRIBUTE VALUES AND PRIORITIES ASSOCIATED WITH UNITS OF WORK AND SUB-UNITS OF THE UNIT OF WORK TO SELECT THE UNITS OF WORK AND THEIR SUB-UNITS TO PROCESS - Provided are a computer program product, system, and method for using queues corresponding to attribute values and priorities associated with units of work and sub-units of the unit of work to select the units of work and their sub-units to process. There are a plurality of work unit queues, wherein each of the work unit queues are associated with different work unit attribute values that are associated with units of work, wherein a plurality of the work unit queues include records for units of work to process having work unit attribute values associated with the work unit attribute values of the work unit queues, and wherein the work unit queues are each associated with a different priority. A record for a unit of work to perform is added to the work unit queue associated with a priority and work unit attribute value associated with the work unit. | 11-19-2015 |
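The queue structure described above — one work unit queue per attribute value, each queue carrying its own priority, with records routed by attribute value and drained in priority order — can be sketched compactly. The class name, the dictionary layout, and the convention that a lower number means higher priority are assumptions for illustration.

```python
class WorkUnitQueues:
    """Sketch: work unit queues keyed by work unit attribute value,
    each associated with a different priority."""

    def __init__(self, queue_priorities):
        # attribute value -> (priority, list of work unit records)
        self.queues = {attr: (prio, [])
                       for attr, prio in queue_priorities.items()}

    def add(self, unit, attr_value):
        # a record for the unit of work goes to the queue whose
        # attribute value is associated with the unit
        self.queues[attr_value][1].append(unit)

    def next_unit(self):
        # select from the highest-priority non-empty queue
        # (lower number = higher priority in this sketch)
        for attr in sorted(self.queues, key=lambda a: self.queues[a][0]):
            _, records = self.queues[attr]
            if records:
                return records.pop(0)
        return None
```

Sub-units of a unit of work could be handled the same way, with their own records added to the queues matching their attribute values.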
20150339158 | Dynamic Co-Scheduling of Hardware Contexts for Parallel Runtime Systems on Shared Machines - Multi-core computers may implement a resource management layer between the operating system and resource-management-enabled parallel runtime systems. The resource management components and runtime systems may collectively implement dynamic co-scheduling of hardware contexts when executing multiple parallel applications, using a spatial scheduling policy that grants high priority to one application per hardware context and a temporal scheduling policy for re-allocating unused hardware contexts. The runtime systems may receive resources on a varying number of hardware contexts as demands of the applications change over time, and the resource management components may co-ordinate to leave one runnable software thread for each hardware context. Periodic check-in operations may be used to determine (at times convenient to the applications) when hardware contexts should be re-allocated. Over-subscription of worker threads may reduce load imbalances between applications. A co-ordination table may store per-hardware-context information about resource demands and allocations. | 11-26-2015 |
20150347177 | METHOD AND APPARATUS FOR INTER PROCESS PRIORITY DONATION - A method and an apparatus for priority donations among different processes are described. A first process running with a first priority may receive a request from a second process running with a second priority to perform a data processing task for the second process. A dependency relationship may be identified between the first process and a third process running with a third priority performing separate data processing task. The dependency relationship may indicate that the data processing task is to be performed via the first process subsequent to completion of the separate data processing task via the third process. The third process may be updated with the second priority to complete the separate data processing task. The first process may perform the data processing task with the second priority for the second process. | 12-03-2015 |
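The donation step described above — boosting a dependency chain to the requester's priority so the dependent work completes at the donated priority — can be sketched as follows. The process-table layout, the dependency map, and the convention that a larger number means higher priority are assumptions for the sketch, not the patent's design.

```python
def donate_priority(processes, dependencies, requester, worker):
    """Sketch of inter-process priority donation: `processes` maps
    process name -> priority, and `dependencies` maps a process to the
    process whose separate task must finish first. The requester's
    priority is donated down the chain starting at `worker`."""
    boost = processes[requester]
    # follow the dependency relationship from the worker downwards
    chain = [worker]
    while chain[-1] in dependencies:
        chain.append(dependencies[chain[-1]])
    for proc in chain:
        if processes[proc] < boost:       # never lower an existing priority
            processes[proc] = boost       # donate the requester's priority
    return processes
```

This mirrors classic priority inheritance: without the donation, the low-priority third process could stall the high-priority requester indefinitely (priority inversion).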
20150347178 | METHOD AND APPARATUS FOR ACTIVITY BASED EXECUTION SCHEDULING - A method and an apparatus for activity based execution scheduling are described. Activities may be tracked among a plurality of threads belonging to a plurality of processes running in one or more processors. Each thread may be associated with one of the activities. Each activity may be associated with one or more of the threads in one or more of the processes for a data processing task. The activities may be ordered by a priority order. A group of the threads may be identified to be associated with a particular one of the activities with highest priority based on the priority order. A thread may be selected from the identified threads for next scheduled execution in the processors. | 12-03-2015 |
20150347179 | PRIORITY-BASED MANAGING OF WINDOW PROCESSES IN A BROWSER APPLICATION - The method for managing a plurality of windows of a browser application on an electronic device includes assigning a priority level to each process, including the browser application, running on the device, and distributing computing resources based on priority level. In response to receiving an action to open a window, the browser application starts the execution of a process for opening the window, associates the process with the window, and assigns a priority level to the process associated with the window. The browser application then monitors an activity level of each process associated with its windows. If the activity level decreases, the browser application assigns the process with the decreased activity level to a lower priority level. If requested computing resources exceed a maximum threshold, a process is selected from the lowest priority level processes, and the selected process is suspended. | 12-03-2015 |
20150347180 | Sparse Threaded Deterministic Lock-Free Cholesky and LDLT Factorizations - Systems and methods are provided for implementing a sparse deterministic direct solver. The deterministic direct solver is configured to analyze a symmetric matrix by defining a plurality of dense blocks, identify at least one task for each of the dense blocks, and identify for each task any operations on which the task is dependent. The deterministic direct solver is further configured to store in a first data structure an entry for each of the dense blocks identifying whether a precondition must be satisfied before tasks associated with the dense blocks can be initiated, store in a second data structure a status value for each of the dense blocks and make the stored status values changeable by multiple threads, and assign a plurality of the tasks to a plurality of threads, wherein each thread is assigned a unique task, wherein each of the plurality of threads executes its assigned task when the status of the dense block corresponding to its assigned task indicates that the assigned task is ready to be performed and the precondition associated with the dense block has been satisfied if the precondition exists. | 12-03-2015 |
20150347186 | METHOD AND SYSTEM FOR SCHEDULING REPETITIVE TASKS IN O(1) - Systems and methods are disclosed for scheduling a plurality of tasks for execution on one or more processors. An example method includes obtaining a counter value of a counter. The method also includes for each work queue of a plurality of work queues, identifying an execution period of the respective work queue and comparing a counter value to an execution period of the respective work queue. Each work queue includes a set of tasks and is defined by an execution period at which to run the respective set of queued tasks. The method further includes selecting, based on the comparing, a subset of the plurality of work queues. The method also includes scheduling a set of tasks of slower frequency queued in a selected work queue for execution on one or more processors before a set of tasks queued in a non-selected work queue. The work items may be scheduled in O(1) because the design inherently prioritizes the tasks based on the urgency of their completion, and may do so by resetting a work queue pointer. | 12-03-2015 |
20150347189 | QUALITY OF SERVICE CLASSES - In one embodiment, tasks executing on a data processing system can be associated with a Quality of Service (QoS) classification that is used to determine the priority values for multiple subsystems of the data processing system. The QoS classifications are propagated when tasks interact and the QoS classes are interpreted at multiple levels of the system to determine the priority values to set for the tasks. In one embodiment, one or more sensors coupled with the data processing system monitor a set of system conditions that are used in part to determine the priority values to set for a QoS class. | 12-03-2015 |
20150347192 | METHOD AND SYSTEM FOR SCHEDULING THREADS FOR EXECUTION - Techniques for scheduling threads for execution in a data processing system are described herein. According to one embodiment, in response to a request for executing a thread, a scheduler of an operating system of the data processing system accesses a global run queue to identify a global run entry associated with the highest process priority. The global run queue includes multiple global run entries, each corresponding to one of a plurality of process priorities. A group run queue is identified based on the global run entry, where the group run queue includes multiple threads associated with one of the processes. The scheduler dispatches one of the threads that has the highest thread priority amongst the threads in the group run queue to one of the processor cores of the data processing system for execution. | 12-03-2015 |
20150347193 | WORKLOAD AUTOMATION AND DATA LINEAGE ANALYSIS - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for workload automation and job scheduling information. One of the methods includes obtaining job dependency information, the job dependency information specifying an order of execution of a plurality of jobs. The method also includes obtaining data lineage information that identifies dependency relationships between data stores and transformations, wherein at least one transformation accepts data from a first data store and produces data for a second data store. The method also includes creating links between the job dependency information and the data lineage information. The method also includes determining an impact of a change in a planned execution of a job of the plurality of jobs based on the job dependency information, the created links, and the data lineage information. | 12-03-2015 |
20150355942 | ENERGY-EFFICIENT REAL-TIME TASK SCHEDULER - An energy efficient task scheduler for use with a processor that provides multiple reduced energy use modes. In one embodiment, a system for executing tasks includes a processor and a task scheduler. The processor provides a plurality of different reduced energy use modes. The task scheduler is executable by the processor to schedule execution of a plurality of sleep tasks. Each of the sleep tasks corresponds to a different one of the reduced energy use modes. The task scheduler is executable by the processor to execute each of the sleep tasks, and as part of the execution of the sleep task to: place the processor in the reduced energy use mode corresponding to the sleep task, and exit the corresponding reduced energy use mode at suspension of the sleep task. | 12-10-2015 |
20150355949 | DYNAMICALLY CONFIGURABLE HARDWARE QUEUES FOR DISPATCHING JOBS TO A PLURALITY OF HARDWARE ACCELERATION ENGINES - A computer system having a plurality of processing resources, including a subsystem for scheduling and dispatching processing jobs to a plurality of hardware accelerators, the subsystem further comprising a job requestor for requesting jobs having bounded and varying latencies to be executed on the hardware accelerators; a queue controller to manage processing job requests directed to a plurality of hardware accelerators; and multiple hardware queues for dispatching jobs to the plurality of hardware acceleration engines, each queue having a dedicated head of queue entry, dynamically sharing a pool of queue entries, having configurable queue depth limits, and means for removing one or more jobs across all queues. | 12-10-2015 |
20150363226 | RUN TIME ESTIMATION SYSTEM OPTIMIZATION - Methods, systems, and computer program products for training an optimized time estimation system for completing a data processing job to be run on a data processing device that operates within a distributed processing system having a range of platforms. Embodiments include creating a prediction algorithm based upon retrieved operational parameters associated with a data processing job. Embodiments also include retrieving further operational parameters associated with the data processing job. Embodiments include updating the prediction algorithm based on the further operational parameters, in which the prediction algorithm is updated by modifying parameter values associated with variable parameters of the prediction algorithm. | 12-17-2015 |
20150363229 | RESOLVING TASK DEPENDENCIES IN TASK QUEUES FOR IMPROVED RESOURCE MANAGEMENT - A database system comprises a database server and a database storage system comprising a storage processing node and a queue. The database server is operable to define a priority for each of a plurality of database tasks. The storage processing node is operable to receive database tasks from the database server and place them into the queue based upon their priority. The storage processing node is further operable to determine whether there are dependencies between a first database task and a second database task with a previously defined higher priority so that the storage processing node is operable to place the first database task into a same queue as the second database task. The second database task is dependent upon the first database task when an input of the second database task is waiting for an output of the first database task. | 12-17-2015 |
20150370600 | SYSTEM HAVING OPERATION QUEUES CORRESPONDING TO OPERATION EXECUTION TIME - A system and method for prioritized queues is provided. A plurality of queues are organized to enable long-running operations to be directed to a long-running operation queue, while faster operations are directed to a non-long-running operation queue. When an operation request is received, a determination is made whether it is a long-running operation, and, if so, the operation is placed in a long-running operation queue. When the processor core that is executing long-running operations is ready for the next operation, it removes an operation from the long-running operation queue and processes the operation. | 12-24-2015 |
20150370606 | METHOD FOR PRIORITIZING TASKS QUEUED AT A SERVER SYSTEM - An algorithm for assigning priorities to tasks queued for processing by users based on how heavily each task's user used the system resources in the past, including the number of tasks queued by the user in the past, the volume of these tasks, and the amount of processor time used. In the OCR context, the tasks are graphic files placed on servers and chosen for processing in accordance with the assigned priorities. | 12-24-2015 |
20150378782 | SCHEDULING OF TASKS ON IDLE PROCESSORS WITHOUT CONTEXT SWITCHING - Tasks may be scheduled on more than one processor to allow the processors to operate at lower processor frequencies and processor supply voltages. In particular, realtime tasks may be scheduled on idle processors without context switching an existing executing task. For example, a method of executing tasks on a plurality of processors may include receiving a new task with an earlier deadline than an executing task; determining whether an idle processor is available; and when an idle processor is available, executing the new task on the idle processor. | 12-31-2015 |
20160026503 | METHOD AND APPARATUS FOR IMPROVING APPLICATION PROCESSING SPEED IN DIGITAL DEVICE - A method and apparatus for improving application processing speed in a digital device which improve application processing speed for a digital device running in an embedded environment where processor performance may not be sufficiently powerful by detecting an execution request for an application, identifying a group to which the requested application belongs, among preset groups with different priorities and scheduling the requested application according to the priority assigned to the identified group, and executing the requested application based on the scheduling result. | 01-28-2016 |
20160034314 | METHOD OF COMPUTING LATEST START TIMES TO ALLOW REAL-TIME PROCESS OVERRUNS - A method is provided for allowing process overruns while guaranteeing satisfaction of various timing constraints. At least one latest start time for an uncompleted process is computed. If an uncompleted process does not start at its latest start time, then at least one of the predetermined constraints may not be satisfied. A timer is programmed to interrupt a currently executing process at a latest start time. In another embodiment, information about ordering of the end times of the process time slots in a pre-run-time schedule is used by a run-time scheduler to schedule process executions. Exclusion relations can be used to prevent simultaneous access to shared resources. Any process that does not exclude a particular process is able to preempt that particular process at any appropriate time at run-time, which increases the chances that a process will be able to overrun while guaranteeing satisfaction of various timing constraints. | 02-04-2016 |
20160034315 | INFORMATION PROCESSING SYSTEM, DEPLOYMENT METHOD, PROCESSING DEVICE, AND DEPLOYMENT DEVICE - An objective of the present invention is to construct a system in which a plurality of software components having dependencies are deployed dispersedly on a plurality of processing devices. | 02-04-2016 |
20160041847 | COMPOSITE TASK PROCESSOR - Technologies are generally described for systems, devices and methods effective to process a composite task to be applied to an ontology. In some examples, the methods may include a processor receiving a composite task. The methods may include the processor transforming the composite task into a set of atomic tasks. The set of atomic tasks may include at least a first atomic task, a second atomic task, and a third atomic task. The methods may include the processor determining that the first atomic task is equivalent to the second atomic task based on the ontology. The methods may include the processor removing the second atomic task from the set of atomic tasks to generate a list of atomic tasks. The methods may include the processor applying the list of atomic tasks to the ontology. | 02-11-2016 |
20160055035 | MULTIPLE SIMULTANEOUS REQUEST RESOURCE MANAGEMENT - A method for scheduling a plurality of resources for processing a plurality of requests is provided. The method sorts the requests, each specifying a priority and one or more resources that process the request, in parallel based on the priorities. The method initializes an output set to an empty set and filters out any request that has a resource conflict with a current highest priority request, adds the current highest priority request to the output set and determines whether one or more requests of the plurality of requests, other than the requests added to the output set, are not filtered out. Responsive to determining that the one or more requests are not filtered out, the method repeats the filtering, adding, and determining, using a highest priority request of the one or more requests as the current highest priority request. The method causes the assigned resources to process the output set of requests in parallel. | 02-25-2016 |
20160055036 | SYSTEM AND METHOD TO CONTROL HEAT DISSIPATION THROUGH SERVICE LEVEL ANALYSIS - The system and method generally relate to reducing heat dissipated within a data center, and more particularly, to a system and method for reducing heat dissipated within a data center through service level agreement analysis, and resultant reprioritization of jobs to maximize energy efficiency. A computer implemented method includes performing a service level agreement (SLA) analysis for one or more currently processing or scheduled processing jobs of a data center using a processor of a computer device. Additionally, the method includes identifying one or more candidate processing jobs for a schedule modification from amongst the one or more currently processing or scheduled processing jobs using the processor of the computer device. Further, the method includes performing the schedule modification for at least one of the one or more candidate processing jobs using the processor of the computer device. | 02-25-2016 |
20160062792 | METHOD AND TERMINAL DEVICE FOR CONTROLLING BACKGROUND APPLICATION - The present disclosure provides a method which includes: generating an application list according to applications running in an operating system; traversing the identifiers in the application list; determining whether an application corresponding to a currently traversed identifier is a background application; determining whether a predetermined white list comprises the currently traversed identifier and whether the number of identifiers corresponding to background applications in the application list is greater than a predetermined threshold, if the application corresponding to the currently traversed identifier is a background application; selecting an identifier corresponding to a background application from the application list and closing the background application corresponding to the selected identifier, if the predetermined white list comprises the currently traversed identifier and the number is greater than the predetermined threshold; or closing the application corresponding to the currently traversed identifier if the predetermined white list does not comprise the currently traversed identifier and the number is greater than the predetermined threshold. | 03-03-2016 |
20160070592 | SIGNAL PROCESSING DEVICE AND SEMICONDUCTOR DEVICE - A signal processing device includes a signal processor that receives an input stream including a plurality of pieces of input data, executes a predetermined task on stream data, and outputs an output stream including a plurality of pieces of output data. The signal processing device includes a pointer indicating position information of data in the stream data according to progress of processing by the signal processor. When priority processing of a second task is requested during execution of a first task, the signal processing device executes the second task after saving a value held by the pointer. Based on the saved pointer value, the signal processing device obtains position information, in the output stream, of output data to be outputted in the first task, and obtains position information, in the input stream, of input data. | 03-10-2016 |
20160077870 | STARVATION CONTROL IN A DATA PROCESSING SYSTEM - A data processing system ( | 03-17-2016 |
20160085591 | APPARATUS AND SCHEDULING METHOD - An apparatus includes a memory and a processor coupled to the memory and configured to generate a first schedule relating to a plurality of tasks based on a first execution order information of the plurality of tasks prescribed by a plurality of pieces of task information, and generate a second schedule that relates to the plurality of tasks and includes contents different from contents of the first schedule based on a first notification relating to the first schedule. | 03-24-2016 |
20160092108 | Quality of Service Implementation in a Networked Storage System with Hierarchical Schedulers - Methods, systems, and computer programs are presented for allocating CPU cycles in a storage system. One method includes operations for receiving requests to be processed, and for associating each request with one task. A foreground task is for processing input/output requests, and the foreground task includes one or more flows. Each flow is associated with a queue and a flow counter value, where each queue is configured to hold requests. The method further includes an operation for selecting one task for processing by the CPU based on an examination of the number of cycles processed by the CPU for each task. When the selected task is the foreground task, the flow having the lowest flow counter is selected. The CPU processes a request from the queue of the selected flow, and the flow counter of the selected flow is increased based on the data consumption of the processed task. | 03-31-2016 |
20160092267 | CROSS-DOMAIN MULTI-ATTRIBUTE HASHED AND WEIGHTED DYNAMIC PROCESS PRIORITIZATION - In response to receipt of a process-level input request that is subject to business-level requirements, multiple sets of attributes are identified. The sets of attributes are each from one of multiple informational domains that represent processing factors associated with at least the process-level input request, contemporaneous infrastructure processing capabilities, and historical process performance of similar processes. The multiple sets of attributes from the multiple informational domains are hashed as a vector into an initial process prioritization. The attributes of the hashed vector of the multiple sets of attributes from the multiple informational domains are weighted in the initial process prioritization into a hashed-weighted resulting process prioritization. The process-level input request is assigned to a process category based upon the hashed-weighted resulting process prioritization. | 03-31-2016 |
20160092275 | TUNABLE COMPUTERIZED JOB SCHEDULING - A computer-implemented method for scheduling a set of jobs executed in a computer system can include determining a workload-time parameter for a set of at least one job. The workload-time parameter can relate to execution-time parameters for the set of at least one job. The method can include determining a schedule tuning parameter for the set of at least one job, the schedule tuning parameter based on the workload-time parameter. The method can include generating a scheduling factor for each job in the set, the scheduling factor generated based on the schedule tuning parameter. The method can include scheduling the set of at least one job based on the scheduling factor. | 03-31-2016 |
20160098300 | MULTI-CORE PROCESSOR SYSTEMS AND METHODS FOR ASSIGNING TASKS IN A MULTI-CORE PROCESSOR SYSTEM - A multi-core processor system and a method for assigning tasks are provided. The multi-core processor system includes a plurality of processor cores, configured to perform a plurality of tasks, and each of the tasks is in a respective one of a plurality of scheduling classes. The multi-core processor system further includes a task scheduler, configured to obtain first task assignment information about tasks in a first scheduling class assigned to the processor cores, obtain second task assignment information about tasks in one or more other scheduling classes assigned to the processor cores, and refer to the first task assignment information and the second task assignment information to assign a runnable task in the first scheduling class to one of the processor cores. | 04-07-2016 |
20160103709 | Method and Apparatus for Managing Task of Many-Core System - A method and an apparatus for managing and scheduling tasks in a many-core system are presented. The method improves process management efficiency in the many-core system. The method includes, when a process needs to be added to a task linked list, adding a process descriptor pointer of the process to a task descriptor entry corresponding to the process, and adding the task descriptor entry to the task linked list; if a process needs to be deleted, finding a task descriptor entry corresponding to the process, and removing the task descriptor entry from the task linked list; and when a processor core needs to run a new task, removing an available priority index register with a highest priority from a queue of the priority index register. | 04-14-2016 |
20160103713 | METHOD FOR SEQUENCING A PLURALITY OF TASKS PERFORMED BY A PROCESSING SYSTEM AND A PROCESSING SYSTEM FOR IMPLEMENTING THE SAME - A method for sequencing a plurality of tasks performed by a processing system and a processing system for implementing the same are disclosed herein. In one embodiment, a method for sequencing a plurality of tasks performed by a processing system is provided that includes generating a schedule by iteratively performing a scheduling process and processing a plurality of substrates using the plurality of semiconductor processing equipment stations according to the schedule. The scheduling process uses highly constrained tasks and determines whether a portion of the first list of the highly constrained tasks exceeds a capacity of the processing system. The scheduling process further includes updating the latest start time and the earliest start time associated with each of the plurality of tasks yet to be scheduled based on the assigned task. | 04-14-2016 |
20160124770 | TRANSPORTATION NETWORK MICRO-SIMULATION PRE-EMPTIVE DECOMPOSITION - In a parallel computing method performed by a parallel computing system comprising a plurality of central processing units (CPUs), a main process executes. Tasks are executed in parallel with the main process on CPUs not used in executing the main process. Results of completed tasks are stored in a cache, from which the main process retrieves completed task results when needed. The initiation of task execution is controlled by a priority ranking of tasks based on at least probabilities that task results will be needed by the main process and time limits for executing the tasks. The priority ranking of tasks is from the vantage point of a current execution point in the main process and is updated as the main process executes. An executing task may be pre-empted by a task having higher priority if no idle CPU is available. | 05-05-2016 |
20160124776 | PROCESS FOR CONTROLLING A PROCESSING UNIT IMPROVING THE MANAGEMENT OF THE TASKS TO BE EXECUTED, AND CORRESPONDING PROCESSING UNIT - A process controls a processing unit in the presence of a task being executed by the processing unit. The processing unit includes at least one external input electrically connected to a corresponding output of the processing unit, and is associated with a level of priority of execution. The process includes, in the presence of an auxiliary-task request generated internally within the processing unit, generation by the processing unit of an auxiliary electrical signal corresponding to the request for execution of the auxiliary task. The auxiliary electrical signal is relayed to the at least one external input. A comparison is made between the priority levels respectively associated with the at least one external input and with the task being executed. | 05-05-2016 |
20160132329 | PARALLEL PROCESSING IN HARDWARE ACCELERATORS COMMUNICABLY COUPLED WITH A PROCESSOR - In an embodiment, a device including a processor, a plurality of hardware accelerator engines and a hardware scheduler is disclosed. The processor is configured to schedule an execution of a plurality of instruction threads, where each instruction thread includes a plurality of instructions associated with an execution sequence. The plurality of hardware accelerator engines performs the scheduled execution of the plurality of instruction threads. The hardware scheduler is configured to control the scheduled execution such that each hardware accelerator engine is configured to execute a corresponding instruction and the plurality of instructions are executed by the plurality of hardware accelerator engines in a sequential manner. The plurality of instruction threads are executed by the plurality of hardware accelerator engines in a parallel manner based on the execution sequence and an availability status of each of the plurality of hardware accelerator engines. | 05-12-2016 |
20160132355 | PROCESS GROUPING FOR IMPROVED CACHE AND MEMORY AFFINITY - A multiprocessor computer system and method for use therein are provided for assigning processes to processor nodes. The system can determine a first pair of processes and a second pair of processes, each process of the first pair of processes executing on different nodes and each process of the second pair of processes executing on different nodes. The system can determine a first priority value of the first pair of processes, based at least in part on a first resource access rate of the first pair of processes; and determine a second priority value of the second pair of processes, based at least in part on a second resource access rate of the second pair of processes. The system can determine the first priority value is greater than the second priority value; and determine to reassign a first process of the first pair of processes to a first node, wherein a second process of the first pair of processes is executing on the first node. | 05-12-2016 |
20160132363 | Migrating Processes Operating On One Platform To Another Platform In A Multi-Platform System - Embodiments of the claimed subject matter are directed to methods and a system that allows the optimization of processes operating on a multi-platform system (such as a mainframe) by migrating certain processes operating on one platform to another platform in the system. In one embodiment, optimization is performed by evaluating the processes executing in a partition operating under a proprietary operating system, determining a collection of processes from the processes to be migrated, calculating a cost of migration for rating the collection of processes, prioritizing the collection of processes in an order of migration and incrementally migrating the processes according to the order of migration to another partition in the mainframe executing a lower cost (e.g., open-source) operating system. | 05-12-2016 |
20160139950 | SHARING RESOURCES IN A MULTI-CONTEXT COMPUTING SYSTEM - In an embodiment, a method of providing quality of service (QoS) to at least one resource of a hardware processor includes providing, in a memory of the hardware processor, a context including at least one quality of service parameter and allocating access to the at least one resource of the hardware processor based on the quality of service parameter of the context, a device identifier, a virtual machine identifier, and the context. | 05-19-2016 |
20160139952 | Throttle Control on Cloud-based Computing Tasks - Systems and methods for throttle control on cloud-based computing tasks are provided. An example method includes, obtaining a service request from a first user of a plurality of users of the computer system; in accordance with a first determination that placing the service request in a service queue associated with the first user would not cause an enqueue counter associated with the first user to be exceeded, causing the service request to be placed in the service queue to await execution. The method also includes, after the service request is placed in the service queue, in accordance with a second determination that executing the service request would not cause a dequeue counter associated with the first user to be exceeded, causing the service request to be executed. | 05-19-2016 |
20160139953 | PREFERENTIAL CPU UTILIZATION FOR TASKS - In a computing storage environment having multiple processor devices, lists of Task Control Blocks (TCBs) are maintained in a processor-specific manner, such that each of the multiple processor devices is assigned a local TCB list. | 05-19-2016 |
20160139954 | QUIESCE HANDLING IN MULTITHREADED ENVIRONMENTS - Methods and apparatuses for performing a quiesce operation in a multithread environment are provided. A processor receives a first thread quiesce request from a first thread executing on the processor. A processor sends a first processor quiesce request to a system controller to initiate a quiesce operation. A processor performs one or more operations of the first thread based, at least in part, on receiving a response from the system controller. | 05-19-2016 |
20160139955 | QUIESCE HANDLING IN MULTITHREADED ENVIRONMENTS - Methods and apparatuses for performing a quiesce operation in a multithread environment is provided. A processor receives a first thread quiesce request from a first thread executing on the processor. A processor sends a first processor quiesce request to a system controller to initiate a quiesce operation. A processor performs one or more operations of the first thread based, at least in part, on receiving a response from the system controller. | 05-19-2016 |
20160139959 | INFORMATION PROCESSING SYSTEM, METHOD AND MEDIUM - An information processing system includes: a memory configured to store job requests, each of which is to be assigned to one of a plurality of computing resources based on a priority determined by an allocation ratio assigned to each of a plurality of users; and processing circuitry configured to: assign a first job request to one of the computing resources; determine, when the first job request is assigned, a degree of decrease of the priority of a first user corresponding to the first job request based on an allocation ratio of the first user and the allocation ratios of other users whose job requests are stored in the memory; modify the priority of the first user based on the determined degree of decrease; and assign a second job request to one of the computing resources based on the modified priority of the first user and the priorities of the remaining users. | 05-19-2016 |
20160139960 | SYSTEM, METHOD, PROGRAM, AND CODE GENERATION UNIT - A system for parallel processing of tasks by allocating the use of exclusive locks to process critical sections of a task. The system includes storing update information that is updated in response to acquisition and release of an exclusive lock. When processing a task whose critical section contains code affecting execution of another task, an exclusive execution unit acquires an exclusive lock prior to processing the critical section. When the section has been processed successfully, the lock is released and the update information is updated. Meanwhile, a second task, whose critical section does not contain code affecting execution of the other task, may run in parallel without acquiring an exclusive lock, via a nonexclusive execution unit. The nonexclusive execution unit determines that the second critical section has successfully completed if the update information has not changed during processing of the second critical section. | 05-19-2016 |
20160147564 | APPARATUS AND METHOD FOR ALLOCATING RESOURCES USING PRIORITIZATION OF REQUESTS AND UPDATING OF REQUESTS - A system and method for allocating resources receive one or more resource requests describing tasks, each of the one or more resource requests having a request priority, a requested configuration type, and a requestor identifier. In a winner-take-all circuit, all of the existing resource priorities within each configuration of the requested configuration type are compared to determine the highest-priority task occupying each assignment. In a loser-take-all circuit, the current highest resource priorities of each configuration within the requested configuration type, which are output from the winner-take-all circuit associated with the requested resource assignment, are compared, each current resource having a current priority. The current resource configuration within the requested configuration type having the lowest current priority is identified as the lowest-priority current resource configuration. The requested configuration type is allocated to the selected resource request if the request priority is higher than the lowest current priority configuration output from the loser-take-all circuit. The method further comprises continuing to allocate the requested configuration type to the lowest-priority current resource tasks currently occupying the lowest current priority configuration within the requested configuration if the lowest current priority configuration within the requested configuration is higher than or equal to the request priority. | 05-26-2016 |
20160147565 | INTERACTIONS WITH CONTEXTUAL AND TASK-BASED COMPUTING ENVIRONMENTS - Concepts and technologies are described herein for interacting with contextual and task-focused computing environments. Tasks associated with applications are described by task data. Tasks and/or batches of tasks relevant to activities occurring at a client are identified, and a UI for presenting the tasks is generated. The UIs can include tasks and workflows corresponding to batches of tasks. Workflows can be executed, interrupted, and resumed on demand. Interrupted workflows are stored with data indicating progress, contextual information, UI information, and other information. The workflow is stored and/or shared. When execution of the workflow is resumed, the same or a different UI can be provided, based upon the device used to resume execution of the workflow. Thus, multiple devices and users can access workflows in parallel to provide collaborative task execution. | 05-26-2016 |
20160147566 | Cross-Platform Scheduling with Long-Term Fairness and Platform-Specific Optimization - Methods, systems, and computer program products for cross-platform scheduling with fairness and platform-specific optimization are provided herein. A method includes determining dimensions of a set of containers in which multiple tasks associated with a request are to be executed; assigning each of the containers to a processing node on one of multiple platforms based on the dimensions of the given container, and to a platform owner selected from the multiple platforms based on a comparison of resource requirements of each of the multiple platforms and the dimensions of the given container; and generating container assignments across the set of containers by incorporating the assigned node of each container in the set of containers, the assigned platform owner of each container in the set of containers, one or more scheduling requirements of each of the platforms, one or more utilization objectives, and enforcing a sharing guarantee of each of the platforms. | 05-26-2016 |
20160162331 | Prioritizing Cloud-based Computing Tasks - Systems and methods for prioritizing cloud-based computing tasks are provided. An example method includes identifying a first plurality of service requests submitted by a plurality of users including a first user; selecting a first service request, in the plurality of service requests, in accordance with a first priority, where the first service request is submitted by the first user; selecting a second service request submitted by the first user, in a second plurality of service requests submitted by the first user, in accordance with a second priority, where the second service request is associated with a first job type; and selecting a third service request submitted by the first user, in a third plurality of service requests submitted by the first user, in accordance with a third priority, where the third plurality of service requests submitted by the first user are associated with a same job type. | 06-09-2016 |
20160162335 | Dynamic Computing Resource Management - Systems and methods provide an extensible, multi-stage, realtime application program processing load adaptive, manycore data processing architecture shared dynamically among instances of parallelized and pipelined application software programs, according to processing load variations of said programs and their tasks and instances, as well as contractual policies. The invented techniques provide, at the same time, both application software development productivity, through presenting for software a simple, virtual static view of the actually dynamically allocated and assigned processing hardware resources, together with high program runtime performance, through scalable pipelined and parallelized program execution with minimized overhead, as well as high resource efficiency, through adaptively optimized processing resource allocation. | 06-09-2016 |
20160170798 | Recording CPU Time for Sample Of Computing Thread Based On CPU Use State Of Activity Associated With The Sample | 06-16-2016 |
20160170807 | PROCESSOR AND COMMAND PROCESSING METHOD PERFORMED BY SAME | 06-16-2016 |
20160179571 | METHOD AND DEVICE FOR SCHEDULING COMMUNICATION SCHEDULABLE UNIT | 06-23-2016 |
20160179572 | METHOD AND APPARATUS FOR SELECTING PREEMPTION TECHNIQUE | 06-23-2016 |
20160179575 | ADAPTIVE PARTITIONING FOR OPERATING SYSTEM | 06-23-2016 |
20160179578 | MULTIPLE STAGE WORKLOAD MANAGEMENT SYSTEM | 06-23-2016 |
20160188366 | Preemptive Operating System Without Context Switching - A device, such as a constrained device that includes a processing device and memory, schedules user-defined independently executable functions to execute from a single stack common to all user-defined independently executable functions, according to the availability and priority of each such function relative to the others, and preempts a currently running user-defined independently executable function by placing the preempting function on the single stack that holds the register values of the currently running function. | 06-30-2016 |
20160188368 | TASK PROCESSING UTILIZING QUEUES - A system includes a plurality of queues configured to hold tasks and state information associated with such tasks. The system further includes a plurality of listeners configured to query one of the plurality of queues for a task, receive, in response to querying one of the plurality of queues for a task, a task together with state information associated with the task, effect processing of the received task, and communicate a result of the received task to another queue of the plurality of queues, the another queue of the plurality of queues being selected based on the processing of the received task. | 06-30-2016 |
20160188372 | BINARY TRANSLATION FOR MULTI-PROCESSOR AND MULTI-CORE PLATFORMS - Technologies for partial binary translation on multi-core platforms include a shared translation cache, a binary translation thread scheduler, a global installation thread, and a local translation thread and analysis thread for each processor core. On detection of a hotspot, the thread scheduler first resumes the global thread if suspended, next activates the global thread if a translation cache operation is pending, and last schedules local translation or analysis threads for execution. Translation cache operations are centralized in the global thread and decoupled from analysis and translation. The thread scheduler may execute in a non-preemptive nucleus, and the translation and analysis threads may execute in a preemptive runtime. The global thread may be primarily preemptive with a small non-preemptive nucleus to commit updates to the shared translation cache. The global thread may migrate to any of the processor cores. Forward progress is guaranteed. Other embodiments are described and claimed. | 06-30-2016 |
20160188373 | SYSTEM MANAGEMENT METHOD, MANAGEMENT COMPUTER, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - A system management method for a management computer coupled to a computer system, the computer system including a plurality of computers, an operations system being built on the computer system, the operations system including a plurality of task nodes each having allocated thereto computer resources, the system management method including: a step of analyzing a configuration of the computer system for specifying an important node, which is an important task node in the operations system; a step of changing an allocation amount of the computer resources allocated to the important node for measuring a load of the operations system; a step of calculating a first weighting representing a strength of associations among the plurality of task nodes based on a measurement result of the load; and a step of specifying a range impacted by a change in the load of the important node based on the calculated first weighting. | 06-30-2016 |
20160253214 | EFFICIENT PARALLEL PROCESSING OF A NETWORK WITH CONFLICT CONSTRAINTS BETWEEN NODES | 09-01-2016 |
20160253216 | ORDERING SCHEMES FOR NETWORK AND STORAGE I/O REQUESTS FOR MINIMIZING WORKLOAD IDLE TIME AND INTER-WORKLOAD INTERFERENCE | 09-01-2016 |
20160378558 | COORDINATING MULTIPLE COMPONENTS - A system and method including: determining, by a manager module, a need to determine a primary software component of a client device; identifying a first software component and a second software component of the client device; identifying a set of characteristics of the first software component and the second software component; determining that the first software component is the primary software component based on the set of characteristics of each software component, where determining the primary software component further includes comparing the set of characteristics of each software component and selecting the primary software component based on the set of characteristics with a highest priority; and instructing, by the manager module, the one or more processors to cause functionality associated with the second software component to be at least partially suspended. | 12-29-2016 |
20160378561 | JOB DISTRIBUTION WITHIN A GRID ENVIRONMENT - According to one aspect of the present disclosure, a technique for job distribution within a grid environment includes receiving jobs at a submission cluster for distribution of the jobs to at least one of a plurality of execution clusters where each execution cluster includes one or more execution hosts. Resource attributes are determined corresponding to each execution host of the execution clusters. For each execution cluster, execution hosts are grouped based on the resource attributes of the respective execution hosts. For each grouping of execution hosts, a mega-host is defined for the respective execution cluster where the mega-host for a respective execution cluster defines resource attributes based on the resource attributes of the respective grouped execution hosts. Resource requirements for the jobs are determined, and candidate mega-hosts are identified for the jobs based on the resource attributes of the respective mega-hosts and the resource requirements of the jobs. | 12-29-2016 |
20160378562 | JOB DISTRIBUTION WITHIN A GRID ENVIRONMENT - According to one aspect of the present disclosure, a technique for job distribution within a grid environment includes receiving jobs at a submission cluster for distribution of the jobs to at least one of a plurality of execution clusters where each execution cluster includes one or more execution hosts. Resource attributes are determined corresponding to each execution host of the execution clusters. For each execution cluster, execution hosts are grouped based on the resource attributes of the respective execution hosts. For each grouping of execution hosts, a mega-host is defined for the respective execution cluster where the mega-host for a respective execution cluster defines resource attributes based on the resource attributes of the respective grouped execution hosts. Resource requirements for the jobs are determined, and candidate mega-hosts are identified for the jobs based on the resource attributes of the respective mega-hosts and the resource requirements of the jobs. | 12-29-2016 |
20170235605 | SYSTEM AND METHOD FOR IMPLEMENTING CLOUD BASED ASYNCHRONOUS PROCESSORS | 08-17-2017 |
20170235725 | SYSTEMS AND METHODS FOR QUERY QUEUE OPTIMIZATION | 08-17-2017 |
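The per-user enqueue/dequeue counters described in entry 20160139952 can be illustrated with a minimal sketch. All names (`UserThrottle`, `submit`, `try_execute`) and the counter semantics chosen here are illustrative assumptions, not taken from the application text: the enqueue counter is modeled as a cap on queued requests per user, and the dequeue counter as a cap on concurrently executing requests.

```python
from collections import deque

class UserThrottle:
    """Illustrative per-user throttle: an enqueue counter caps how many
    requests a user may have waiting, and a dequeue counter caps how many
    may execute at once (names and semantics are assumptions)."""

    def __init__(self, enqueue_limit, dequeue_limit):
        self.enqueue_limit = enqueue_limit  # max requests queued per user
        self.dequeue_limit = dequeue_limit  # max requests executing per user
        self.queue = deque()
        self.executing = 0

    def submit(self, request):
        # First determination: would queueing exceed the enqueue counter?
        if len(self.queue) >= self.enqueue_limit:
            return False                    # throttled: request not queued
        self.queue.append(request)
        return True

    def try_execute(self):
        # Second determination: would running exceed the dequeue counter?
        if not self.queue or self.executing >= self.dequeue_limit:
            return None                     # nothing dispatched this round
        self.executing += 1
        return self.queue.popleft()

    def finish(self):
        # A dispatched request completed, freeing a dequeue slot.
        self.executing -= 1
```

Under this reading, a user who floods the system is stopped twice: first at admission to the queue, then again at dispatch, so one user cannot monopolize execution slots shared with other users.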
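Entry 20160139960's split between exclusive and nonexclusive execution paths resembles optimistic concurrency with a version counter, which a short sketch can make concrete. The class and method names here (`OptimisticSection`, `run_exclusive`, `run_nonexclusive`) and the retry policy are assumptions for illustration; the "update information" is modeled as an integer bumped on every exclusive release.

```python
import threading

class OptimisticSection:
    """Illustrative sketch: tasks whose critical section can affect other
    tasks take the exclusive lock and bump a version counter on release;
    independent tasks run without the lock and validate afterwards that
    the counter did not change while they ran."""

    def __init__(self):
        self._lock = threading.Lock()
        self.version = 0  # "update information": bumped on exclusive release

    def run_exclusive(self, critical_section):
        with self._lock:
            critical_section()
            self.version += 1        # record that shared state may have changed

    def run_nonexclusive(self, critical_section, retries=3):
        for _ in range(retries):
            seen = self.version      # snapshot the update information
            critical_section()
            if self.version == seen: # no exclusive update ran concurrently
                return True          # section deemed successfully completed
        return False                 # caller can fall back to the exclusive path
```

The payoff of this pattern is that independent critical sections never serialize on the lock; they pay only a counter read before and a comparison after, retrying (or escalating to the lock) in the rare case an exclusive update raced with them.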