Document | Title | Date |
20080209437 | MULTITHREADED MULTICORE UNIPROCESSOR AND A HETEROGENEOUS MULTIPROCESSOR INCORPORATING THE SAME - A uniprocessor that can run multiple threads (programs) simultaneously is achieved by use of a plurality of low-frequency minicore processors, each minicore for receiving a respective thread from a high-frequency cache and processing the thread. A superscalar processor may be used in conjunction with the uniprocessor to process threads requiring high throughput. | 08-28-2008 |
20080222649 | Method and computer program for managing man hours of multiple individuals working one or more tasks - A method and computer program are provided for managing man hours of multiple individuals working one or more tasks during a predefined time period that include selectively opening a plurality of tasks of differing task characteristic and task type, selectively associating one or more individuals to one of the open plurality of tasks, selectively unassociating at least one of the associated one or more individuals, maintaining at least one timer for each of the open plurality of tasks, selectively closing one or more of the open plurality of tasks, and selectively outputting an invoice for the closed plurality of tasks based on the bid price of each of the open plurality of tasks. One or more of the individuals are associated and unassociated prior to completion of the open one or more tasks. The at least one timer maintains a total time for all of the associated one or more individuals for each of the open plurality of tasks. | 09-11-2008 |
20080235707 | Data processing apparatus and method for performing multi-cycle arbitration - A data processing apparatus and method are provided for arbitrating between multiple access requests seeking to access a plurality of resources sharing a common access path. At least one logic element issues access requests requesting access to the resources, and each access request identifies which of the resources is to be accessed. Arbitration circuitry performs a multi-cycle arbitration operation to arbitrate between multiple access requests to be passed over the common access path, the arbitration circuitry having a plurality of pipeline stages to allow a corresponding plurality of multi-cycle arbitration operations to be in progress at any one time. Filter circuitry is provided which has a plurality of filter states, the number of filter states being dependent on the number of pipeline stages of the arbitration circuitry, and each resource being associated with one of the filter states. For a new multi-cycle arbitration operation to be performed by the arbitration circuitry, the filter circuitry selects one of the filter states that has not been selected for any other multi-cycle arbitration operation already in progress within the pipeline stages of the arbitration circuitry. Then, it determines as candidate access requests for the new multi-cycle arbitration operation those access requests that are seeking to access a resource associated with the selected filter state. Such an approach allows efficient multi-cycle arbitration to take place even where the resources may have inter-access timing parameters associated therewith which prevent them from being able to receive access requests every clock cycle. | 09-25-2008 |
20080250422 | EXECUTING MULTIPLE THREADS IN A PROCESSOR - Provided are a method, system, and program for executing multiple threads in a processor. Credits are set for a plurality of threads executed by the processor. The processor alternates among executing the threads having available credit. The processor decrements the credit for one of the threads in response to executing the thread and initiates an operation to reassign credits to the threads in response to depleting all the thread credits. | 10-09-2008 |
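The credit mechanism in 20080250422 above lends itself to a compact illustration: each runnable entity spends one credit per execution slot, and credits are reassigned only once every balance reaches zero. Below is a minimal single-process sketch in Java; the CreditScheduler class, its quantum-sized Runnable tasks, and the replenishment policy are hypothetical stand-ins for the patent's hardware threads, not the claimed implementation.

```java
import java.util.List;

// Round-robin credit scheduler: each runnable spends one credit per slot;
// when all credits are depleted, every thread's allotment is reassigned.
class CreditScheduler {
    private final int[] credits;
    private final int initialCredit;
    private final List<Runnable> threads;

    CreditScheduler(List<Runnable> threads, int initialCredit) {
        this.threads = threads;
        this.initialCredit = initialCredit;
        this.credits = new int[threads.size()];
        reassignCredits();
    }

    private void reassignCredits() {
        java.util.Arrays.fill(credits, initialCredit);
    }

    /** Run one scheduling round: alternate among threads with available credit. */
    void runRound() {
        boolean anyCredit = false;
        for (int i = 0; i < threads.size(); i++) {
            if (credits[i] > 0) {
                threads.get(i).run();   // execute one quantum of the thread's work
                credits[i]--;           // decrement credit in response to executing
                anyCredit = true;
            }
        }
        if (!anyCredit) reassignCredits(); // all credits depleted: reassign
    }

    public static void main(String[] args) {
        CreditScheduler s = new CreditScheduler(List.of(
                () -> System.out.println("thread A quantum"),
                () -> System.out.println("thread B quantum")), 2);
        for (int round = 0; round < 6; round++) s.runRound();
    }
}
```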
20080271041 | PROGRAM PROCESSING METHOD AND INFORMATION PROCESSING APPARATUS - According to one embodiment, a program processing method includes converting parallel execution control description into graph data structure generating information, extracting a program module based on preceding information included in the graph data structure generating information when input data is given, generating a node indicating an execution unit of the program module for the extracted program module, adding the generated node to a graph data structure configured based on preceding and subsequent information defined in the graph data structure generating information, executing a program module corresponding to a node included in a graph data structure existing at that time, by setting values for the parameter, based on performance information of the node when all nodes indicating a program module defined in the preceding information have been processed, and obtaining and saving performance information of the node when a program module corresponding to the node has been executed. | 10-30-2008 |
20080301698 | SERVICE ENGAGEMENT MANAGEMENT USING A STANDARD FRAMEWORK - A solution for managing a service engagement is provided. A service delivery model for the service engagement is defined within an engagement framework. The engagement framework, and consequently the service delivery model, can include a hierarchy that comprises a service definition, a set of service elements for the service definition, and a set of element tasks for each service element. The set of element tasks can be selected from a set of base tasks, each of which defines a particular task along with its input(s), output(s), and related asset(s). As a result, service engagements can be managed in a consistent manner using a data structure that promotes reuse and is readily extensible. | 12-04-2008 |
20080301699 | Apparatus and methods for workflow management and workflow visibility - A system for viewing and managing work flow. The system includes at least one processor and memory configured to track time requirements for each of a plurality of jobs, compile and display the time requirements relative to current time in a plurality of managerial-level views, and in each view, indicate status of the jobs relative to the time requirements. | 12-04-2008 |
20090007135 | APPARATUS AND METHOD FOR SERVER NETWORK MANAGEMENT TO PROVIDE ZERO PLANNED RETROFIT DOWNTIME - Methods and systems are presented for updating software applications in a processor cluster, in which the cluster is divided into first and second processor groups and the first group is isolated from clients and from the second group with respect to network and cluster communications by application of IP filters. The first group of processors is updated or retrofitted with the new software and brought to a ready-to-run state while the second group is active to serve clients. The first group is then transitioned to an in-service state after isolating the then-active service providing application on the second group. Thereafter, the second group of processors is offlined, updated or retrofitted, and transitioned to an in-service state to complete the installation of the new application version across the cluster with reduced or zero downtime and without requiring backward software compatibility. | 01-01-2009 |
20090007136 | Time management control method for computer system, and computer system - In a time management control method of a computer system for managing each individual time of a plurality of virtual systems, a service processor retains an overall system time and a difference time between the overall system time and a virtual system time for each virtual system, and firmware in the virtual system acquires the overall system time and the difference time, calculates a difference time between the overall system time and the change time of the virtual system, adds both difference times, and informs the service processor. Accordingly, the virtual system time can be changed without time management hardware in each virtual system. Further, since the service processor performs update processing only, it is also possible to prevent a time set error caused by delayed calculation processing etc. | 01-01-2009 |
20090019451 | ORDER-RELATION ANALYZING APPARATUS, METHOD, AND COMPUTER PROGRAM PRODUCT THEREOF - An order-relation analyzing apparatus collects assigned destination processor information, a synchronization process order and synchronization information, determines a corresponding element associated with a program among a plurality of elements indicating an ordinal value of the program based on the assigned destination processor information, when an execution of the program is started, and calculates the ordinal value indicated by the corresponding element for each segment based on the synchronization information, when the synchronization process occurs while executing the program. When a first corresponding element associated with a second program, of which the execution starts after the execution of a first program associated with the first corresponding element finishes, is determined, the ordinal value of the second program is calculated by calculating the ordinal value indicated by the first corresponding element. | 01-15-2009 |
20090037926 | METHODS AND SYSTEMS FOR TIME-SHARING PARALLEL APPLICATIONS WITH PERFORMANCE ISOLATION AND CONTROL THROUGH PERFORMANCE-TARGETED FEEDBACK-CONTROLLED REAL-TIME SCHEDULING - Certain embodiments of the present invention provide systems and methods for time-sharing parallel applications with performance isolation and control through feedback-controlled real-time scheduling. Certain embodiments provide a computing system for time-sharing parallel applications. The system includes a controller adapted to determine a scheduling constraint for each thread of execution for an application based at least in part on a target execution rate for the application. The system also includes a local scheduler executing on a node in the computing system. The local scheduler schedules execution of a thread of execution for the application based on the scheduling constraint received from the controller. The local scheduler provides feedback regarding a current execution rate for the application thread to the controller, and the controller modifies the scheduling constraint for the local scheduler based on the feedback. | 02-05-2009 |
20090044198 | Method and Apparatus for Call Stack Sampling in a Data Processing System - A computer implemented method, apparatus, and computer usable program code for sampling call stack information. An event is monitored during execution of a plurality of threads executed by a plurality of processors. In response to an occurrence of the event, a thread is identified in the plurality of threads to form an identified thread. A plurality of sampling threads is woken, wherein a sampling thread within the plurality of sampling threads is associated with each processor in the plurality of processors and wherein one sampling thread in the plurality of sampling threads obtains call stack information for the identified thread. | 02-12-2009 |
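The sampling step in 20090044198 above, where a woken sampling thread obtains call stack information for an identified thread, has a close user-level analogue in the JVM's Thread.getStackTrace(). A sketch under that assumption follows; the periodic timer standing in for the monitored event, and the worker/sampler pairing, are illustrative rather than the patented per-processor arrangement.

```java
// On a monitored event, a dedicated sampler thread wakes and captures the
// call stack of the identified thread (a user-level analogue of the patent's
// per-processor sampling threads).
class StackSampler {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                busyWork();
            }
        }, "worker");
        worker.start();

        Thread sampler = new Thread(() -> {
            // The "event" here is simply a periodic timer; the patent triggers
            // on an arbitrary monitored event instead.
            for (int i = 0; i < 3; i++) {
                try { Thread.sleep(100); } catch (InterruptedException e) { return; }
                for (StackTraceElement frame : worker.getStackTrace()) {
                    System.out.println("  at " + frame);
                }
                System.out.println("---- sample ----");
            }
        }, "sampler");
        sampler.start();
        sampler.join();
        worker.interrupt();
    }

    private static void busyWork() {
        Math.sqrt(System.nanoTime()); // placeholder workload
    }
}
```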
20090083753 | DYNAMIC THREAD GENERATION AND MANAGEMENT FOR IMPROVED COMPUTER PROGRAM PERFORMANCE - The performance of an executing computer program is dynamically enhanced by creating one or more additional threads of execution and then intercepting function calls generated by the executing computer program and executing such function calls within one of the one or more additional threads. Each thread may be associated with a different processing resource, thereby allowing for concurrent execution of the multiple threads. This technique may be used, for example, to improve the performance of a single-threaded computer program, such as a single-threaded video game program, by allowing multi-threaded techniques to be used to execute the computer program even though the computer program was not designed to use such techniques. | 03-26-2009 |
20090089795 | Information processing apparatus, control method of information processing apparatus, and control program of information processing apparatus - According to an embodiment of the invention, a computer readable storage medium that stores a software program causing a computer system to perform a scheduling process for executing a plurality of application programs in every processor cycle, the scheduling process includes: allocating, during a current processor cycle, processor times of a next processor cycle to each of the application programs to be executed in the next processor cycle; storing the allocated processor times of the next processor cycle; determining whether or not the application programs executed in the current processor cycle include an uncompletable application program; calculating processor idle time of the next processor cycle; and allocating an additional processor time of the next processor cycle to the uncompletable application program, the additional processor time being set not to exceed the calculated processor idle time of the next processor cycle. | 04-02-2009 |
20090144747 | COMPUTATION OF ELEMENTWISE EXPRESSION IN PARALLEL - An exemplary embodiment provides methods, systems and mediums for executing arithmetic expressions that represent elementwise operations. An exemplary embodiment provides a computing environment in which elementwise expressions may be executed in parallel by multiple execution units. In an exemplary embodiment, multiple execution units may reside on a network. | 06-04-2009 |
20090144748 | METHODS AND APPARATUS FOR PARALLEL PIPELINING AND WIDTH PROCESSING - Computer apparatus for use with a database management system and database, the apparatus comprising a CPU and a memory, the apparatus configured to provide at least two task processes, each process being apportioned a section of the memory when in use. In response to the database management system or apparatus being instructed to carry out a first task, such as reading, and a second task, such as decryption, on a section of data in series, a first task process is configured to begin the first task on a first part of the section of data in the database; after the first task on the first part of the section of data is complete, a second task process is instructed to carry out the first task on a second part of the section of data which begins where the first part ends, and the first task process is switched to carry out the second task on the data on which the first task has already been carried out; or the second task process is instructed to carry out the second task on the first part whilst the first task process switches to carry out the first task on the second part of the data; or the second task process is instructed to carry out the first task on the second part of the section of data while the first task process is switched to pipeline the second task to a third task process. | 06-04-2009 |
20090165016 | Method for Parallelizing Execution of Single Thread Programs - A method and apparatus for speculatively executing a single threaded program within a multi-core processor which includes identifying an idle core within the multi-core processor, performing a look ahead operation on the single thread instructions to identify speculative instructions within the single thread instructions, and allocating the idle core to execute the speculative instructions. | 06-25-2009 |
20090210882 | SYSTEM AND METHODS FOR ASYNCHRONOUSLY UPDATING INTERDEPENDENT TASKS PROVIDED BY DISPARATE APPLICATIONS IN A MULTI-TASK ENVIRONMENT - A computer-based system for updating interdependent tasks in a multi-task environment is provided. The system includes one or more processors for processing processor-executable code and an input/output interface communicatively linked to at least one processor. The system further includes a brokering module configured to execute on the at least one processor. The brokering module can be configured to interconnect a plurality of event-responsive interdependent tasks in response to an event generated while one of the tasks is being processed. Different tasks can be provided by different applications. The brokering module is configured to initiate an asynchronous updating of the tasks, wherein the asynchronous updating comprises a background process of the multi-task environment performed for each task not being currently processed and wherein the updating is performed while the one task is being processed. The brokering module, moreover, is further configured to provide through the interface a status notification of the updating of each of the tasks. | 08-20-2009 |
20090271800 | SYSTEM AND COMPUTER PROGRAM PRODUCT FOR DERIVING INTELLIGENCE FROM ACTIVITY LOGS - Techniques for segregating one or more logs of at least one multitasking user to derive at least one behavioral pattern of the at least one multitasking user are provided. The techniques include obtaining at least one of at least one action log, configuration information, domain knowledge, at least one task history and open task repository information, correlating the at least one of at least one action log, configuration information, domain knowledge, at least one task history and open task repository information to determine a task associated with each of one or more actions and segregate the one or more logs based on the one or more actions, and using the one or more logs that have been segregated to derive at least one behavioral pattern of the at least one multitasking user. Techniques are also provided for deriving intelligence from at least one activity log of at least one multitasking user to provide information to the at least one user. | 10-29-2009 |
20090276788 | INFORMATION PROCESSING APPARATUS - In an information processing apparatus according to the present invention, a control unit notifies each application program of a key input event in a multi-window system. If the state of a first application program is inactive, the control unit determines whether or not the event notified to the first application program is a key input event caused by a key other than an active switching key. If it is determined that the event is a key input event caused by a key other than the active switching key, the control unit causes a clock circuit to time a predetermined time period, and performs control so as to omit part of processing by the first application program, or to provide a predetermined wait time in between the processing by the first application program, until the predetermined time period is timed out. | 11-05-2009 |
20090282418 | Method and system for integrated scheduling and replication in a grid computing system - A method for scheduling a plurality of computation jobs to a plurality of data processing units (DPUs) in a grid computing system. | 11-12-2009 |
20090288097 | METHOD AND SYSTEM FOR CONCURRENTLY EXECUTING AN APPLICATION - A method for executing an application that includes instantiating, by a first thread, a first executable object and a second executable object, creating a first processing unit and a second processing unit, instantiating an executable container object, spawning a second thread, associating the first executable object and the second executable object with the executable container object, processing the executable container object to generate a result, and storing the result. Processing the executable container object includes associating the first executable object with the first processing unit, and associating the second executable object with the second processing unit, wherein the first thread processes executable objects associated with the first processing unit, wherein the second thread processes executable objects associated with the second processing unit, and wherein the first thread and the second thread execute concurrently. | 11-19-2009 |
20090300645 | Virtualization with In-place Translation - In a computing system having virtualization software including a guest operating system (OS), a method for executing guest OS instructions that includes: replacing each of one or more guest OS instructions with: (a) a translated instruction, which translated instruction is a one-to-one translation, or (b) a trap instruction. | 12-03-2009 |
20090307707 | SYSTEM AND METHOD FOR DYNAMICALLY ADAPTIVE MUTUAL EXCLUSION IN MULTI-THREADED COMPUTING ENVIRONMENT - A system and associated method for mutually exclusively executing a critical section by a process in a computer system. The critical section accessing a shared resource is controlled by a lock. The method measures a detection time when a lock contention is detected, a wait time representing a duration of wait for the lock at each failed attempt to acquire the lock, and a delay representing a total lapse of time from the detection time till the lock is acquired. The delay is logged and used to calculate an average delay, which is compared with a suspension overhead time of the computer system on which the method is executed to determine whether to spin or to suspend the process while waiting for the lock to be released. | 12-10-2009 |
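The policy in 20090307707 above reduces to one comparison: spin while the running average of measured acquisition delay stays below the system's suspension overhead, otherwise suspend. A hedged Java sketch of that decision rule around a simple CAS lock follows; the SUSPEND_OVERHEAD_NANOS constant and the averaging scheme are assumptions for illustration, not values from the patent.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

// Adaptive mutual exclusion: spin while the historical average delay is
// cheaper than a suspend/resume round trip, otherwise park the thread.
class AdaptiveLock {
    private final AtomicBoolean held = new AtomicBoolean(false);
    private static final long SUSPEND_OVERHEAD_NANOS = 50_000; // assumed, platform-specific
    private volatile long avgDelayNanos = 0;                   // running average of past delays

    void lock() {
        long detectionTime = System.nanoTime(); // contention detected here
        while (!held.compareAndSet(false, true)) {
            if (avgDelayNanos <= SUSPEND_OVERHEAD_NANOS) {
                Thread.onSpinWait();                    // cheap wait: spin
            } else {
                LockSupport.parkNanos(avgDelayNanos);   // expensive wait: suspend
            }
        }
        long delay = System.nanoTime() - detectionTime; // total lapse until acquired
        avgDelayNanos = (avgDelayNanos + delay) / 2;    // log the delay, update the average
    }

    void unlock() {
        held.set(false);
    }

    public static void main(String[] args) throws InterruptedException {
        AdaptiveLock lock = new AdaptiveLock();
        Runnable worker = () -> {
            lock.lock();
            try { /* critical section */ } finally { lock.unlock(); }
        };
        Thread a = new Thread(worker), b = new Thread(worker);
        a.start(); b.start(); a.join(); b.join();
        System.out.println("avg delay (ns): " + lock.avgDelayNanos);
    }
}
```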
20090328058 | PROTECTED MODE SCHEDULING OF OPERATIONS - The present invention extends to methods, systems, and computer program products for protected mode scheduling of operations. Protected mode (e.g., user mode) scheduling can facilitate the development of programming frameworks that better reflect the requirements of the workloads through the use of workload-specific execution abstractions. In addition, the ability to define scheduling policies tuned to the characteristics of the hardware resources available and the workload requirements has the potential of better system scaling characteristics. Further, protected mode scheduling decentralizes the scheduling responsibility by moving significant portions of scheduling functionality from supervisor mode (e.g., kernel mode) to an application. | 12-31-2009 |
20100011372 | Method and system for synchronizing the execution of a critical code section - The invention concerns a method for synchronizing the execution of at least one critical code section. | 01-14-2010 |
20100031269 | Lock Contention Reduction - Illustrative embodiments provide a computer implemented method, a data processing system and a computer program product for lock contention reduction. In one illustrative embodiment, the computer implemented method provides a lock to an active thread, increments a lock counter, receives a request to de-schedule the active thread, and determines whether the lock is held by the active thread. The computer implemented method, responsive to a determination that the lock is held by the active thread, adds a first pre-determined amount to a time slice of the active thread. | 02-04-2010 |
20100031270 | HEAP MANAGER FOR A MULTITASKING VIRTUAL MACHINE - A multitasking virtual machine is described. The multitasking virtual machine may comprise an execution engine to concurrently execute a plurality of tasks. The multitasking virtual machine may further comprise a heap organization coupled to the execution engine. The heap organization may comprise a system heap to store system data accessible by the plurality of tasks; and a plurality of task heaps. Each of the plurality of task heaps may be assigned to each of the plurality of tasks to store task data accessible by the assigned task. The multitasking virtual machine may further comprise a heap manager to manage the heap organization. The heap manager may comprise a heap size controller to control heap size of the system heap. | 02-04-2010 |
20100037234 | DATA PROCESSING SYSTEM AND METHOD OF TASK SCHEDULING - A data processing system in a multi-tasking environment is provided. The data processing system comprises at least one processing unit. | 02-11-2010 |
20100050184 | MULTITASKING PROCESSOR AND TASK SWITCHING METHOD THEREOF - A multitasking processor and a task switching method thereof are provided. The task switching method includes following steps. A first task is executed by the multitasking processor, wherein the first task contains a plurality of switching-point instructions. An interrupt event occurs. Accordingly, the multitasking processor temporarily stops executing the first task and starts to execute a second task. The multitasking processor executes a handling process of the interrupt event and sets a switching flag. After finishing the handling process of the interrupt event, the multitasking processor does not perform task switching but continues to execute the first task, and the multitasking processor only performs task switching to execute the second task when it reaches a switching-point instruction in the first task. | 02-25-2010 |
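The deferred switching of 20100050184 above can be mimicked in software: the interrupt path only sets a flag, and an actual switch happens when the running task next reaches a designated switching point. A cooperative Java sketch follows; the flag, the yield-based "switch", and the checkpoint spacing are illustrative.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Task switching deferred to explicit switching points: the interrupt path
// only sets a flag; the running task yields when it next reaches a
// switching-point check.
class SwitchingPointTask {
    static final AtomicBoolean switchFlag = new AtomicBoolean(false);

    static void switchingPoint() {
        // Only here may a task switch actually occur.
        if (switchFlag.compareAndSet(true, false)) {
            Thread.yield(); // stand-in for "switch to the second task"
        }
    }

    public static void main(String[] args) {
        // Stand-in for the interrupt handler: it sets the flag and returns.
        new Thread(() -> switchFlag.set(true)).start();

        for (int step = 0; step < 1_000_000; step++) {
            // ... first task's work between switching-point instructions ...
            if (step % 1000 == 0) {
                switchingPoint(); // switching-point instruction in the first task
            }
        }
    }
}
```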
20100095305 | SIMULTANEOUS MULTITHREAD INSTRUCTION COMPLETION CONTROLLER - In a system that executes a program by simultaneously running a plurality of threads, the entries in a CSE | 04-15-2010 |
20100095306 | ARITHMETIC DEVICE - An arithmetic device simultaneously processes a plurality of threads and may continue processing, while minimizing degradation of overall performance, even if a hardware error occurs. | 04-15-2010 |
20100107175 | INFORMATION PROCESSING APPARATUS - In a cellular phone applicable to an information processing apparatus according to the present invention, a CPU of a main control unit executes monitor threads. | 04-29-2010 |
20100115529 | Memory management apparatus and method - A memory management apparatus and a memory management method may divide an external memory area assigned to a task into a first area and a second area, and load data stored in the first area into an internal memory of a processor while the task is performed by the processor. | 05-06-2010 |
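The two-area scheme of 20100115529 above is classic double buffering: process the area already resident in internal memory while the other area is loaded. A Java sketch with arrays standing in for external and internal memories follows; the loader thread models the background transfer, and the sizes are arbitrary.

```java
// Double-buffered loading: while the task processes the data previously
// copied into "internal" memory, the next area of "external" memory is
// loaded in the background.
class DoubleBuffer {
    public static void main(String[] args) throws InterruptedException {
        int[] external = new int[1 << 16];        // stand-in for external memory
        for (int i = 0; i < external.length; i++) external[i] = i;
        int half = external.length / 2;
        int[] internal = new int[half];           // stand-in for on-chip memory

        System.arraycopy(external, 0, internal, 0, half); // load the first area
        for (int area = 0; area < 2; area++) {
            final int next = (area + 1) % 2;
            int[] nextInternal = new int[half];
            // Loader: fetch the other area while the current one is processed.
            Thread loader = new Thread(() ->
                    System.arraycopy(external, next * half, nextInternal, 0, half));
            loader.start();

            long sum = 0;                          // "perform the task" on the current area
            for (int v : internal) sum += v;
            System.out.println("area " + area + " processed, sum=" + sum);

            loader.join();                         // overlap ends; swap buffers
            internal = nextInternal;
        }
    }
}
```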
20100122263 | METHOD AND DEVICE FOR MANAGING THE USE OF A PROCESSOR BY SEVERAL APPLICATIONS, CORRESPONDING COMPUTER PROGRAM AND STORAGE MEANS - A method of managing processor usage time includes: associating each application with a slice of the processor time and with a first or second class; and managing the processor time as a function of the processor time slices and classes. The processor time slice associated with an application of the first class is reserved for the application even if the application does not use it fully. An application of the second class has priority for using the processor during its associated time slice, wherein if part of the associated time slice is not used by the application, the unused part may be used by another application of the second class, the application being able to use more than its associated time slice by using an unused part of a time slice associated with another application of the second class or a part of a time slice associated with no application. | 05-13-2010 |
20100138841 | System and Method for Managing Contention in Transactional Memory Using Global Execution Data - Transactional Lock Elision (TLE) may allow threads in a multi-threaded system to concurrently execute critical sections as speculative transactions. Such speculative transactions may abort due to contention among threads. Systems and methods for managing contention among threads may increase overall performance by considering both local and global execution data in reducing, resolving, and/or mitigating such contention. Global data may include aggregated and/or derived data representing thread-local data of remote thread(s), including transactional abort history, abort causal history, resource consumption history, performance history, synchronization history, and/or transactional delay history. Local and/or global data may be used in determining the mode by which critical sections are executed, including TLE and mutual exclusion, and/or to inform concurrency throttling mechanisms. Local and/or global data may also be used in determining concurrency throttling parameters (e.g., delay intervals) used in delaying a thread when attempting to execute a transaction and/or when retrying a previously aborted transaction. | 06-03-2010 |
20100138842 | Multithreading And Concurrency Control For A Rule-Based Transaction Engine - The subject matter disclosed herein provides methods and apparatus, including computer program products for rules-based processing. In one aspect there is provided a method. The method may include, for example, evaluating rules to determine whether to enable or disable one or more actions in a ready set of actions. Moreover, the method may include scheduling the ready set of actions, each of which is scheduled for execution and executed, the execution of each of the ready set of actions using a separate, concurrent thread, the concurrency of the actions controlled using a control mechanism. Related systems, apparatus, methods, and/or articles are also described. | 06-03-2010 |
20100211959 | ADAPTIVE CLUSTER TIMER MANAGER - Described herein are techniques for adaptively managing timers that are used in various layers of a node. In many cases, the number of timers that occur in the system is reduced by proactively and reactively adjusting values of the timers based on conditions affecting the system, thereby making such a system perform significantly better and more resiliently than otherwise. | 08-19-2010 |
20100218195 | Software filtering in a transactional memory system - A method and apparatus for utilizing hardware mechanisms of a transactional memory system is herein described. Various embodiments relate to software-based filtering of operations from read and write barriers and read isolation barriers during transactional execution. Other embodiments relate to software-implemented read barrier processing to accelerate strong atomicity. Other embodiments are also described and claimed. | 08-26-2010 |
20100218196 | SYSTEM, METHODS AND APPARATUS FOR PROGRAM OPTIMIZATION FOR MULTI-THREADED PROCESSOR ARCHITECTURES - Methods, apparatus and computer software product for source code optimization are provided. In an exemplary embodiment, a first custom computing apparatus is used to optimize the execution of source code on a second computing apparatus. In this embodiment, the first custom computing apparatus contains a memory, a storage medium and at least one processor with at least one multi-stage execution unit. The second computing apparatus contains at least two multi-stage execution units that allow for parallel execution of tasks. The first custom computing apparatus optimizes the code for parallelism, locality of operations and contiguity of memory accesses on the second computing apparatus. | 08-26-2010 |
20100229181 | SMART SCHEDULING OF AUTOMATIC PARTITION MIGRATION BY THE USE OF TIMERS - Partition migrations are scheduled between virtual partitions of a virtually partitioned data processing system. The virtually partitioned data processing system is a tickless system in which a periodic timer interrupt is not guaranteed to be sent to the processor at a defined time interval. A request is received for a partition migration. Gaps between scheduled timer interrupts are identified. The partition migration is then scheduled to occur within the largest gap. | 09-09-2010 |
20110029985 | METHOD AND APPARATUS FOR COORDINATING RESOURCE ACCESS - An approach is provided for coordinating resource access. A resource access coordinating application determines the conflict condition among a plurality of queries from a respective plurality of applications for access to an identical resource in an information space. The resource access coordinating application then orders the queries based on one or more characteristics (e.g., read, write, update, delete, read-only, read-update, write-update, write-add, etc.) of the queries irrespective of the applications. Thereafter, the resource access coordinating application selects one of the queries based on the order. | 02-03-2011 |
20110041137 | Methods And Apparatus For Concurrently Executing A Garbage Collection Process During Execution of A Primary Application Program - A wireless mobile communication device has an application program and a garbage collection program stored in memory. The garbage collection program is configured to identify a root set of referenced objects of the application program with use of a reference indicator array and to perform a mark and sweep process based on the root set of referenced objects. The reference indicator array has a plurality of reference indicators, where each reference indicator corresponding to a referenced object is set as referenced. The application program is configured to be executed during execution of a mark and sweep process of the garbage collection program, such that information received or provided via the user interface during the execution of the mark and sweep process is received or provided without suspension or delay. The application program has computer instructions which are based on an instruction set defined by a plurality of opcodes or native codes, including a single predefined opcode or a single predefined native code which is a “get object reference” instruction. Each “get object reference” instruction is associated with a target object and is defined to retrieve a reference from the target object and also set one of the reference indicators corresponding to the target object as referenced in the reference indicator array. | 02-17-2011 |
20110083136 | DISTRIBUTED PROCESSING SYSTEM - A distributed processing system for executing an application includes a processing element capable of performing parallel processing, a control unit, and a client that makes a request for execution of the application to the control unit. The processing element has, at least at the time of executing the application, one or more processing blocks that process respectively one or more tasks to be executed by the processing element, a processing block control section for calculating the number of parallel processes based on an index for controlling the number of parallel processes received from the control unit, a division section that divides data to be processed input to the processing blocks by the processing block control section in accordance with the number of parallel processes, and an integration section that integrates processed data output from the processing blocks by the processing block control section in accordance with the number of parallel processes. | 04-07-2011 |
20110119682 | METHODS AND APPARATUS FOR MEASURING PERFORMANCE OF A MULTI-THREAD PROCESSOR - Disclosed are methods and apparatus for measuring performance of a multi-thread processor. The method and apparatus determine loading of a multi-thread processor through execution of an idle task in individual threads of the multi-thread processor during predetermined time periods. The idle task is configured to loop and run when no other task is running on the threads. Loop executions of the idle task on each thread are counted over each of the predetermined time periods. From these counts, loading of each of the threads of the multi-thread processor may then be determined. The loading may be used to develop a processor profile that may then be displayed in real-time. | 05-19-2011 |
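The idle-task method of 20110119682 above infers load from how many idle-loop iterations fit into a period compared with an unloaded calibration run. A single-processor Java sketch follows; the calibration window and priority handling are simplifying assumptions, and a real implementation would keep one counter per hardware thread.

```java
// Load measurement via an idle task: a lowest-priority thread counts loop
// iterations; fewer iterations per period means the CPU was busier.
class IdleLoadMeter {
    private static volatile long idleCount = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread idle = new Thread(() -> {
            while (true) idleCount++;   // idle task: loop and count
        });
        idle.setDaemon(true);
        idle.setPriority(Thread.MIN_PRIORITY); // runs only when nothing else does
        idle.start();

        Thread.sleep(200);              // calibration period with no other work
        long unloaded = idleCount;
        idleCount = 0;

        Thread.sleep(200);              // measurement period (add real work here)
        long measured = idleCount;

        double load = 1.0 - (double) measured / unloaded;
        System.out.printf("estimated load: %.0f%%%n", load * 100);
    }
}
```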
20110131586 | Method and System for Efficiently Sharing Array Entries in a Multiprocessing Environment - A method and a system efficiently and effectively share array entries among multiple threads of execution in a multiprocessor computer system. The invention comprises a method and an apparatus for array creation, a method and an apparatus for array entry data retrieval, a method and an apparatus for array entry data release, a method and an apparatus for array entry data modification, a method and an apparatus for array entry data modification release, a method and an apparatus for multiple array entry atomic release-and-renew, a method and an apparatus for array destruction, a method and an apparatus for specification of array entry discard strategy, a method and an apparatus for specification of array entry modification update strategy, and finally a method and an apparatus for specification of user-provided array entry data construction method. | 06-02-2011 |
20110138398 | LOCK RESOLUTION FOR DISTRIBUTED DURABLE INSTANCES - The present invention extends to methods, systems, and computer program products for resolving lock conflicts. For a state persistence system, embodiments of the invention can employ a logical lock clock for each persisted state storage location. Lock times can be incorporated into bookkeeping performed by a command processor to distinguish cases where the instance is locked by the application host at a previous logical time from cases where the instance is concurrently locked by the application host through a different name. A logical command clock is also maintained for commands issued by the application host to a state persistence system, with introspection to determine which issued commands may potentially take a lock. The command processor can resolve conflicts by pausing command execution until the effects of potentially conflicting locking commands become visible and examining the lock time to distinguish among copies of a persisted state storage location. | 06-09-2011 |
20110145834 | CODE EXECUTION UTILIZING SINGLE OR MULTIPLE THREADS - A program is executed utilizing a main hardware thread. During execution, an instruction specifies to execute a portion utilizing a worker hardware thread. If a processor state indicator is set to multi-threaded, the specified portion is executed utilizing the worker hardware thread. However, if the processor state indicator is set to single-threaded, the specified portion is executed utilizing the main hardware thread as a subroutine. The main hardware thread may pass parameter data to the worker hardware thread by copying the parameter data register or memory location for the main hardware thread to an equivalent parameter data register or memory location for the worker hardware thread. Similarly, the worker hardware thread may pass return values to the main hardware thread by copying a return value register or memory location for the worker hardware thread to an equivalent return value register or memory location for the main hardware thread. | 06-16-2011 |
20110161981 | USING PER TASK TIME SLICE INFORMATION TO IMPROVE DYNAMIC PERFORMANCE STATE SELECTION - Methods and apparatus for using per task time slice information to improve dynamic performance state selection are described. In one embodiment, a new performance state is selected for a process based on one or more previous execution time slice values of the process. Other embodiments are also described. | 06-30-2011 |
20110173630 | Central Repository for Wake-and-Go Mechanism - A wake-and-go mechanism is provided with a central repository wake-and-go array for a multiple processor data processing system. The wake-and-go mechanism recognizes a programming idiom that indicates that a thread running on a processor within the multiple processor data processing system is waiting for an event. The wake-and-go mechanism updates a central repository wake-and-go array with a target address associated with the event. Each entry in the central repository wake-and-go array may include a thread identification (ID), a central processing unit (CPU) ID, the target address, the expected data, a comparison type, a lock bit, a priority, and a thread state pointer, which is the address at which the thread state information is stored. | 07-14-2011 |
20110197199 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD AND COMPUTER READABLE MEDIUM - An information processing apparatus includes: a reliability determination unit that determines reliability required for processing a processing target based on the processing target; a processing determination unit that makes a comparison between the reliability determined by the reliability determination unit and reliability of a processing main body and determines whether or not the processing main body can be caused to process the processing target; a processing target change unit that changes the processing target so as to change the reliability of the processing target if the processing determination unit determines that the processing main body cannot be caused to process the processing target; and a processing request unit that requests the processing main body to process the processing target changed by the processing target change unit. | 08-11-2011 |
20110202931 | METHOD, COMPUTER PROGRAM AND DEVICE FOR SUPERVISING A SCHEDULER FOR MANAGING THE SHARING OF PROCESSING TIME IN A MULTI-TASK COMPUTER SYSTEM - The invention in particular has as an object the supervision of a scheduler for managing the sharing of processing time in a multitask data-processing system comprising a computation unit having a standard execution mode and a preferred execution mode for executing a plurality of applications. The execution time for the said plurality of applications is divided into a plurality of periods, and a minimal time for access per period to the said computation unit is determined for at least one application of the said plurality of applications. For at least one period, the said preferred execution mode is associated with the said at least one application and the said at least one application is executed according to at least the said minimal time for access to the said computation unit. For the said at least one period, the said standard execution mode is associated with the applications of the said plurality of applications and at least any one of the applications of the said plurality of applications is executed. | 08-18-2011 |
20110252430 | Opportunistic Multitasking - Services for a personal electronic device are provided through which a form of background processing or multitasking is supported. The disclosed services permit user applications to take advantage of background processing without significant negative consequences to a user's experience of the foreground process or the personal electronic device's power resources. To effect the disclosed multitasking, one or more of a number of operational restrictions may be enforced. By way of example, an application that may normally be placed into the background state may instead be terminated if it controls a lock on a shared system resource. | 10-13-2011 |
20110283294 | DETERMINING MULTI-PROGRAMMING LEVEL USING DIMINISHING-INTERVAL SEARCH - A method of determining a multiprogramming level (MPL) for a first computer subsystem may be implemented on a second computer subsystem. The method may include selecting an initial MPL interval having endpoints that bound a local extremum of a computer-system operation variable that is a unimodal function of the MPL. For each interval having a length more than a threshold, operation-variable values for two intermediate MPLs in the interval may be determined. The interval may be diminished by the section of the interval between the one of the intermediate MPLs having an operation-variable value further from the extremum, and the interval endpoint adjacent to the one intermediate MPL. The operating MPL may be set equal to the other intermediate MPL when the interval has a length that is not more than the threshold. | 11-17-2011 |
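The diminishing-interval search of 20110283294 above is a ternary-style search over a unimodal curve: evaluate the operation variable at two intermediate MPLs, discard the section past the worse one, and stop when the interval drops below the threshold. A Java sketch with a made-up throughput curve follows; the function shape and the bounds are hypothetical.

```java
import java.util.function.IntUnaryOperator;

// Diminishing-interval search for the MPL that maximizes a unimodal
// operation variable (e.g., throughput as a function of multiprogramming level).
class MplSearch {
    static int findOperatingMpl(IntUnaryOperator throughput, int lo, int hi, int threshold) {
        while (hi - lo > threshold) {
            int m1 = lo + (hi - lo) / 3;       // two intermediate MPLs in the interval
            int m2 = hi - (hi - lo) / 3;
            if (throughput.applyAsInt(m1) < throughput.applyAsInt(m2)) {
                lo = m1;    // m1 is further from the maximum: discard [lo, m1)
            } else {
                hi = m2;    // m2 is further from the maximum: discard (m2, hi]
            }
        }
        return (lo + hi) / 2; // interval small enough: pick the operating MPL
    }

    public static void main(String[] args) {
        // Hypothetical unimodal throughput curve peaking near MPL = 40.
        IntUnaryOperator curve = mpl -> -(mpl - 40) * (mpl - 40);
        System.out.println("operating MPL = " + findOperatingMpl(curve, 1, 100, 2));
    }
}
```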
20110321059 | STACK OVERFLOW PREVENTION IN PARALLEL EXECUTION RUNTIME - A parallel execution runtime prevents stack overflow by maintaining an inline counter for each thread executing tasks of a process. Each time that the runtime determines that inline execution of a task is desired on a thread, the runtime determines whether the inline counter for the corresponding thread indicates that stack overflow may occur. If not, the runtime increments the inline counter for the thread and allows the task to be executed inline. If the inline counter indicates a risk of stack overflow, then the runtime performs one or more additional checks using a previous stack pointer of the stack (i.e., a lowest known safe watermark), the current stack pointer, and memory boundaries of the stack. If the risk of stack overflow remains after all checks have been performed, the runtime prevents inline execution of the task. | 12-29-2011 |
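The first check in 20110321059 above is just a per-thread depth counter consulted before inline execution. A Java sketch of that guard follows; the depth bound and the "defer to a queue" fallback are assumptions, and the patent's further stack-pointer and boundary checks are omitted.

```java
// Per-thread inline counter: inline task execution is allowed only while the
// counter stays below a depth bound assumed safe for the stack; otherwise the
// task is handed to a scheduler queue instead of being run on this stack.
class InlineGuard {
    private static final int MAX_INLINE_DEPTH = 64; // assumed safe bound
    private static final ThreadLocal<int[]> depth = ThreadLocal.withInitial(() -> new int[1]);

    static void execute(Runnable task, java.util.Queue<Runnable> deferred) {
        int[] d = depth.get();
        if (d[0] >= MAX_INLINE_DEPTH) {
            deferred.add(task);   // risk of stack overflow: do not inline
            return;
        }
        d[0]++;                   // increment the inline counter for this thread
        try {
            task.run();           // execute inline on the current stack
        } finally {
            d[0]--;
        }
    }

    public static void main(String[] args) {
        java.util.ArrayDeque<Runnable> deferred = new java.util.ArrayDeque<>();
        Runnable[] recurse = new Runnable[1];
        recurse[0] = () -> execute(recurse[0], deferred); // unbounded inlining attempt
        execute(recurse[0], deferred);
        System.out.println("deferred tasks: " + deferred.size()); // 1: inlining was cut off
    }
}
```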
20120005687 | SYSTEM ACTIVATION METHOD IN MULTI-TASK SYSTEM - When a multi-task system is powered on, the following steps are respectively executed: a first step in which hardware components are initialized; a second step in which sections are initialized; and a third step in which an operating system is initialized. In the third step, a task/object is statically generated when an initial access time of the task/object is at most a predefined threshold value, but the task/object is dynamically generated, after activation of the multi-task system is completed, when the initial access time of the task/object is larger than the predefined threshold value. | 01-05-2012 |
20120042324 | Memory management method and device in a multitasking capable data processing system - A method for memory space management in a multitasking capable data processing system including a data processing device and software running thereon. The data processing device includes at least one central processing unit (CPU) and at least one user memory, and the software running on the CPU includes a first computer program application and at least a second computer program application which respectively jointly access the user memory used by both computer program applications during execution. Information of the first computer program application is stored in at least a portion of the memory space of the user memory in a temporary manner, and the integrity of the contents of the memory space is checked after interrupting the execution of the first computer program application. The first computer program application is only executed further when the memory integrity is confirmed through the checking or when the memory integrity has been reestablished. | 02-16-2012 |
20120047515 | TERMINAL DEVICE, COMMUNICATION METHOD USED IN THE TERMINAL DEVICE AND RECORDING MEDIUM - The present invention relates to a terminal device having an operating system and capable of simultaneously using, on the operating system, a first application program for real-time communication and a second application program for another purpose. The terminal device is characterized by being provided with a means for setting an interval between system calls, which calculates the frequency of system call executions when system calls are issued to the operating system by the second application program during real-time communication by the first application program and, when the execution frequency has exceeded a predetermined threshold, sets the execution interval time between the system calls to a given length of time or more. | 02-23-2012 |
20120096474 | Systems and Methods for Performing Multi-Program General Purpose Shader Kickoff - Systems and methods for thread group kickoff and thread synchronization are described. One method is directed to synchronizing a plurality of threads in a general purpose shader in a graphics processor. The method comprises determining an entry point for execution of the threads in the general purpose shader, performing a fork operation at the entry point, whereby the plurality of threads are dispatched, wherein the plurality of threads comprise a main thread and one or more sub-threads. The method further comprises performing a join operation whereby the plurality of threads are synchronized upon the main thread reaching a synchronization point. Upon completion of the join operation, a second fork operation is performed to resume parallel execution of the plurality of threads. | 04-19-2012 |
20120167114 | PROCESSOR - Provided is a processor that can maintain a dependency relationship between a plurality of instructions and one read instruction. The processor comprises: a setting unit configured to set, when an instruction that exists at a location ensuring that writing into a memory area has been completed is executed, usage information indicating whether writing into the memory area has been completed such that the usage information indicates that writing into a memory area during execution of one thread has been completed; and a control unit configured to (i) perform execution of a read instruction to read data stored in the memory area when the usage information indicates that writing into the memory area during execution of the one thread has been completed, and (ii) suppress execution of the read instruction when the usage information indicates that writing into the memory area during execution of the one thread has not been completed. | 06-28-2012 |
20120210332 | ASYNCHRONOUS PROGRAMMING EXECUTION - One or more techniques and/or systems are disclosed for improving asynchronous programming execution at runtime. Asynchronous programming code can comprise more than one level of hierarchy, such as in an execution plan. Respective aggregation operations in a portion of the asynchronous programming code are unrolled, to create a single level iterative execution, by combining elements of the multi-level iterative execution of the asynchronous programming code. In this way, the aggregation operations are concatenated to local logic code for the aggregation operations. Thread context switching in the unrolled portion of asynchronous programming code is performed merely at an asynchronous operation, thereby mitigating unnecessary switches. Exceptions thrown during programming code can be propagated up to a top of a virtual callstack for the execution. | 08-16-2012 |
20120240132 | CONTROL APPARATUS, SYSTEM PROGRAM, AND RECORDING MEDIUM - A control apparatus capable of updating a user program while processing is being performed in a multitasking manner is provided. A processor includes a memory that stores a user program containing a program organization unit as well as a central processing unit executing a task containing the user program and also updating the program organization unit stored in the memory. The central processing unit is configured to execute a plurality of tasks concurrently and to execute each task with a period corresponding to the task. Moreover, the central processing unit is configured to update the program organization unit stored in the memory during the period of time from when a plurality of tasks to be executed have been finished until when the plurality of tasks are executed again. | 09-20-2012 |
20120246662 | Automatic Verification of Determinism for Parallel Programs - Automatic verification of determinism in structured parallel programs includes sequentially establishing whether code for each of a plurality of tasks of the structured parallel program is independent, outputting sequential proofs corresponding to the independence of the code for each of the plurality of tasks, and determining whether all memory locations accessed by parallel tasks of the plurality of tasks are independent based on the sequential proofs. | 09-27-2012 |
20120254888 | PIPELINED LOOP PARALLELIZATION WITH PRE-COMPUTATIONS - Embodiments of the invention provide systems and methods for automatically parallelizing loops with non-speculative pipelined execution of chunks of iterations with pre-computation of selected values. Non-DOALL loops are identified and divided into chunks. The chunks are assigned to separate logical threads, which may be further assigned to hardware threads. As a thread performs its runtime computations, subsequent threads attempt to pre-compute their respective chunks of the loop. These pre-computations may result in a set of assumed initial values and pre-computed final variable values associated with each chunk. As subsequent pre-computed chunks are reached at runtime, those assumed initial values can be verified to determine whether to proceed with runtime computation of the chunk or to avoid runtime execution and instead use the pre-computed final variable values. | 10-04-2012 |
20120254889 | Application Programming Interface for Managing Time Sharing Option Address Space - A method includes receiving a start request from a client at a launcher application programming interface (API), determining whether an existing time sharing option (TSO) address space associated with a user of the client is available, retrieving security environment data associated with the user from a security product responsive to determining that no existing TSO address space associated with a user of the client is available, saving the retrieved security environment data as a security object, generating a message queue, generating a terminal status block (TSB) and saving the terminal status block, creating a TSO address space in a processor, sending an instruction to an operating system to start the TSO address space, and sending a message queue identifier associated with the message queue and an address space token associated with the TSO address space to the client. | 10-04-2012 |
20120297397 | IMAGE FORMING APPARATUS, INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, INFORMATION PROCESSING PROGRAM AND STORAGE MEDIUM - An image forming apparatus has a plurality of application execution environments, and includes a control part in each of the application execution environments, configured to control an application executed in a corresponding application execution environment. The control part in an application execution environment controls an application executed in another application execution environment via the control part of the other application execution environment. | 11-22-2012 |
20120304196 | ELECTRONIC DEVICE WORKSPACE RESTRICTION - Some embodiments include a method that includes receiving an indication of a first of a plurality of tasks. The method includes accessing a policy associated with the first of the plurality of tasks. The method also includes determining that a restricted activity state is to be imposed on an electronic device workspace based on the policy that is associated with the first of the plurality of tasks and an application related activity. The application related activity comprises at least one of accumulation of a first time period of a user working with the first set of one or more applications and expiration of a lack-of-activity second time period for the second set of one or more applications. The method includes restricting the electronic device workspace to the first set of one or more applications. | 11-29-2012 |
20120311604 | DETERMINISTIC PARALLELIZATION THROUGH ATOMIC TASK COMPUTATION - A method for deterministic locking in a parallel computing environment is provided. The method includes creating a data structure in memory of a computer for a shared resource. The data structure encapsulates a reference to an owner of a lock for the shared resource and a queue of threads able to seek exclusive access to the shared resource. The queue in turn includes different entries, each entry including an identifier for a corresponding one of the threads and a deterministic time computed for the corresponding one of the threads from a count of memory accesses occurring in the corresponding one of the threads. Consequently, a thread can be selected from the queue to receive ownership of the lock and exclusive access to the shared resource based upon a deterministic time for the selected thread as compared to other deterministic times for others of the threads in the queue, for example, a lowest deterministic time. | 12-06-2012 |
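A queue entry in 20120311604 above pairs a thread identifier with a deterministic time computed from that thread's memory-access count, and lock ownership goes to the lowest such time. A simplified Java sketch follows in which the caller supplies the access count; a real system would read a hardware counter, and the monitor-based hand-off is illustrative rather than the patented mechanism.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Deterministic lock hand-off: waiting threads are queued with a deterministic
// time computed from their memory-access counts, and the lock is granted to
// the entry with the lowest deterministic time.
class DeterministicLock {
    record Entry(long threadId, long deterministicTime) {}

    private final PriorityQueue<Entry> queue =
            new PriorityQueue<>(Comparator.comparingLong(Entry::deterministicTime));
    private Long ownerId = null; // reference to the current lock owner

    synchronized void acquire(long memoryAccessCount) throws InterruptedException {
        long tid = Thread.currentThread().getId();
        queue.add(new Entry(tid, memoryAccessCount)); // deterministic time ~ access count
        while (ownerId != null || queue.peek().threadId() != tid) {
            wait();
        }
        queue.poll();
        ownerId = tid;
    }

    synchronized void release() {
        ownerId = null;
        notifyAll(); // the waiter with the lowest deterministic time proceeds
    }

    public static void main(String[] args) throws InterruptedException {
        DeterministicLock lock = new DeterministicLock();
        Thread t = new Thread(() -> {
            try {
                lock.acquire(500);  // this thread's memory-access count so far
                System.out.println("worker holds the lock");
                lock.release();
            } catch (InterruptedException ignored) { }
        });
        lock.acquire(100);          // lower count: main thread goes first
        t.start();
        Thread.sleep(50);
        System.out.println("main releases the lock");
        lock.release();
        t.join();
    }
}
```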
20120311605 | PROCESSOR CORE POWER MANAGEMENT TAKING INTO ACCOUNT THREAD LOCK CONTENTION - A method maintains, for each processing element in a processor, a count of threads waiting in a data structure for hand-off locks in order to execute on the processing element. The method maintains the processing element in a first power state if the count of threads waiting for hand-off locks is greater than zero. The method puts the processing element in a second power state if the count of threads waiting for hand-off locks is equal to zero and no thread is ready to be processed by the processing element. The method returns the processing element to the first power state if the count of threads becomes greater than zero, or if a thread becomes ready to be processed by the processing element. | 12-06-2012 |
20120311606 | System and Method for Implementing Hierarchical Queue-Based Locks Using Flat Combining - The system and methods described herein may be used to implement a scalable, hierarchal, queue-based lock using flat combining. A thread executing on a processor core in a cluster of cores that share a memory may post a request to acquire a shared lock in a node of a publication list for the cluster using a non-atomic operation. A combiner thread may build an ordered (logical) local request queue that includes its own node and nodes of other threads (in the cluster) that include lock requests. The combiner thread may splice the local request queue into a (logical) global request queue for the shared lock as a sub-queue. A thread whose request has been posted in a node that has been combined into a local sub-queue and spliced into the global request queue may spin on a lock ownership indicator in its node until it is granted the shared lock. | 12-06-2012 |
20120311607 | DETERMINISTIC PARALLELIZATION THROUGH ATOMIC TASK COMPUTATION - A method for deterministic locking in a parallel computing environment is provided. The method includes creating a data structure in memory of a computer for a shared resource. The data structure encapsulates a reference to an owner of a lock for the shared resource and a queue of threads able to seek exclusive access to the shared resource. The queue in turn includes different entries, each entry including an identifier for a corresponding one of the threads and a deterministic time computed for the corresponding one of the threads from a count of memory accesses occurring in the corresponding one of the threads. Consequently, a thread can be selected from the queue to receive ownership of the lock and exclusive access to the shared resource based upon a deterministic time for the selected thread as compared to other deterministic times for others of the threads in the queue, for example, a lowest deterministic time. | 12-06-2012 |
20120311608 | METHOD AND APPARATUS FOR PROVIDING MULTI-TASKING INTERFACE - A method and an apparatus for providing a multi-tasking interface of a device such as a portable communication device are provided. The method for providing a multi-tasking interface of a terminal preferably includes: receiving a background switch input for switching the display of an application being executed in the foreground to the background; switching the display of the application to the background when the background switch input is received; displaying a background control interface; and switching the display of the application to the foreground when a preset switch input is received through the background control interface. | 12-06-2012 |
20120324473 | Effective Management Of Blocked-Tasks In Preemptible Read-Copy Update - A technique for managing read-copy update readers that have been preempted while executing in a read-copy update read-side critical section. A single blocked-tasks list is used to track preempted reader tasks that are blocking an asynchronous grace period, preempted reader tasks that are blocking an expedited grace period, and preempted reader tasks that require priority boosting. In example embodiments, a first pointer may be used to segregate the blocked-tasks list into preempted reader tasks that are and are not blocking a current asynchronous grace period. A second pointer may be used to segregate the blocked-tasks list into preempted reader tasks that are and are not blocking an expedited grace period. A third pointer may be used to segregate the blocked-tasks list into preempted reader tasks that do and do not require priority boosting. | 12-20-2012 |
20130007765 | SOFTWARE CONTROL DEVICE, SOFTWARE CONTROL METHOD, AND COMPUTER PRODUCT - A software control device includes a processor configured to determine whether starting software and running software are accessing the same common resource; and control the running software to be temporarily suspended upon determining that the starting software and the running software are accessing the same common resource. | 01-03-2013 |
20130081053 | Acquiring and transmitting tasks and subtasks to interface devices - A computationally implemented method includes receiving request data including a request to carry out a task of acquiring data, acquiring one or more subtasks related to the task of acquiring data, selecting two or more discrete interface devices based on at least one of a status of the two or more discrete interface devices and a characteristic of the two or more discrete interface devices, transmitting at least one of the one or more subtasks to at least two of the two or more discrete interface devices, and receiving result data corresponding to a result of at least one subtask of the one or more subtasks executed by at least one of the two or more discrete interface devices. In addition to the foregoing, other method aspects are described in the claims, drawings, and text. | 03-28-2013 |
20130081054 | Method for Enabling Sequential, Non-Blocking Processing of Statements in Concurrent Tasks in a Control Device - A method for enabling sequential, non-blocking processing of statements in concurrent tasks in a control device having an operating system capable of multi-tasking, in particular a programmable logic controller, is disclosed. At least one operating system call, which causes the operating system to interrupt the particular task according to an instruction output by the statement in favor of another task, is associated with at least one statement. | 03-28-2013 |
20130104144 | Application Switching in a Graphical Operating System - A method for application switching in an operating system may be provided. The method may comprise providing at least two active applications on the operating system, and providing a first list of actions related to the first active application, via a first interface, to an application switching manager, and providing a second list of actions related to the second active application, via a second interface, to the application switching manager. Additionally, the method may further comprise selecting, using a graphical user interface, an active application out of the at least two active applications, together with an action selected from the first list of actions for the first application or from the second list of actions for the second application. | 04-25-2013 |
20130111497 | Staggering Execution of Scheduled Tasks Based on Behavioral Information | 05-02-2013 |
20130152104 | HANDLING OF SYNCHRONOUS OPERATIONS REALIZED BY MEANS OF ASYNCHRONOUS OPERATIONS - The present invention extends to methods, systems, and computer program products for handling synchronous operations by means of asynchronous operations. Upon completion of an asynchronous operation, a state flag is accessed. The state flag indicates whether or not a sync-over-async wrapper/adapter requested execution of the asynchronous operation. The sync-over-async wrapper/adapter is currently blocked awaiting notice of completion of the asynchronous operation. Based on the state flag, results of the asynchronous operation are stored at a location accessible by the sync-over-async wrapper. A completion signal is sent to the sync-over-async wrapper. | 06-13-2013 |
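The state-flag-plus-completion-signal dance described here can be sketched in a few lines of Python; `SyncOverAsyncAdapter`, `run_sync`, and `async_add` are invented names for illustration, not from the patent.

```python
import threading

class SyncOverAsyncAdapter:
    """Blocks a synchronous caller on an asynchronous operation."""

    def __init__(self):
        self.sync_requested = False   # state flag read on completion
        self.result = None            # completion stores results here
        self._done = threading.Event()

    def run_sync(self, start_async):
        self.sync_requested = True    # mark that a sync wrapper is waiting
        start_async(self._on_complete)
        self._done.wait()             # block until the completion signal
        return self.result

    def _on_complete(self, result):
        if self.sync_requested:       # check the state flag
            self.result = result      # store where the wrapper can read it
        self._done.set()              # send the completion signal

# Illustrative async operation that completes on a worker thread.
def async_add(callback):
    threading.Thread(target=lambda: callback(1 + 2)).start()

adapter = SyncOverAsyncAdapter()
print(adapter.run_sync(async_add))    # prints 3
```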
20130174179 | MULTITASKING METHOD AND APPARATUS OF USER DEVICE - A multitasking method and apparatus of a user device is provided for intuitively and swiftly switching between background and foreground tasks running on the user device. The multitasking method includes receiving an interaction requesting task-switching in a state where an execution screen of a certain application is displayed, displaying a stack of tasks that are currently running, switching a task selected from the stack to a foreground task, and presenting an execution window of the foreground task. | 07-04-2013 |
20130179896 | Multi-thread processing of an XML document - An indication to process an Extensible Markup Language (XML) document that includes a hierarchy of nodes is received. A set of one or more page nodes to be processed is obtained, where the set of page nodes are part of the hierarchy of nodes. A plurality of threads is created. One of the set of page nodes and those nodes, if any, in the hierarchy that descend from that node are assigned to one of the plurality of threads to be processed by that thread. Processing, by said one of the plurality of threads, of the assigned page node and those nodes that descend from that page node is initiated. | 07-11-2013 |
20130205304 | APPARATUS AND METHOD FOR PERFORMING MULTI-TASKING IN PORTABLE TERMINAL - A multi-tasking execution apparatus and a method for easily controlling applications running in a portable terminal are provided. The apparatus includes a display and a controller. The display displays an application-containing image in which at least one specific image representing at least one application running in a background is contained and arranged. The controller operatively displays at least one specific image representing at least one application running in the background, so as to be contained in the application-containing image, and controls the at least one application running in the background by controlling the specific image based on a specific gesture. | 08-08-2013 |
20130247069 | Creating A Checkpoint Of A Parallel Application Executing In A Parallel Computer That Supports Computer Hardware Accelerated Barrier Operations - In a parallel computer executing a parallel application, where the parallel computer includes a number of compute nodes, with each compute node including one or more computer processors, the parallel application including a number of processes, and one or more of the processes executing a barrier operation, creating a checkpoint of a parallel application includes: maintaining, by each computer processor, global barrier operation state information, the global barrier operation state information includes an aggregation of each process's barrier operation state information; invoking, for each process of the parallel application, a checkpoint handler; saving, by each process's checkpoint handler as part of a checkpoint for the parallel application, the process's barrier operation state information; and exiting, by each process, the checkpoint handler. | 09-19-2013 |
20130263152 | SYSTEM FOR SCHEDULING THE EXECUTION OF TASKS BASED ON LOGICAL TIME VECTORS - A comparator unit for two Nm-bit data words comprises a comparison output indicative of an order relation between the two data words, the function of the comparator unit being represented by a logic table comprising rows associated with the possible consecutive values of the first data word and columns associated with the possible consecutive values of the second data word, where each row includes a one at the intersection with the column associated with the same value as the row, followed by a series of zeros. The series of zeros is followed by a series of ones completing the row circularly, the number of zeros being the same for each row and smaller than half of the maximum value of the data words. | 10-03-2013 |
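The table described reads like a classic circular (sequence-number) comparison. A small sketch, assuming rows index the first word's value and columns the second's; the function and parameter names are illustrative:

```python
def build_table(n_bits, zeros):
    """Build the circular-order logic table: each row has a 1 on the diagonal,
    then `zeros` zeros, then ones completing the row circularly. `zeros`
    must be smaller than half the maximum value, as the abstract requires."""
    size = 2 ** n_bits
    assert zeros < size // 2
    table = [[1] * size for _ in range(size)]
    for row in range(size):
        for k in range(1, zeros + 1):
            table[row][(row + k) % size] = 0   # positions "ahead" of row
    return table

# With 3-bit words and 3 zeros per row, table[a][b] == 0 exactly when b is
# 1..3 steps ahead of a modulo 8, i.e. when a precedes b in circular order.
table = build_table(3, 3)
print(table[0])   # [1, 0, 0, 0, 1, 1, 1, 1]
print(table[6])   # [0, 0, 1, 1, 1, 1, 1, 0]  -> the zero run wraps around
```

Keeping the zero run shorter than half the value range is what makes the circular order relation unambiguous, the same trick used in serial-number arithmetic.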
20130283290 | POWER-EFFICIENT INTERACTION BETWEEN MULTIPLE PROCESSORS - A technique for processing instructions in an electronic system is provided. In one embodiment, a processor of the electronic system may submit a unit of work to a queue accessible by a coprocessor, such as a graphics processing unit. The coprocessor may process work from the queue, and write a completion record into a memory accessible by the processor. The electronic system may be configured to switch between a polling mode and an interrupt mode based on progress made by the coprocessor in processing the work. In one embodiment, the processor may switch from an interrupt mode to a polling mode upon completion of a threshold amount of work by the coprocessor. Various additional methods, systems, and computer program products are also provided. | 10-24-2013 |
20130326538 | SYSTEM AND METHOD FOR SHARED EXECUTION OF MIXED DATA FLOWS - A method, computer program product, and computer system for shared execution of mixed data flows, performed by one or more computing devices, comprises identifying one or more resource sharing opportunities across a plurality of parallel tasks. The plurality of parallel tasks includes zero or more relational operations and at least one non-relational operation. The plurality of parallel tasks relative to the relational operations and the at least one non-relational operation are executed. In response to executing the plurality of parallel tasks, one or more resources of the identified resource sharing opportunities is shared across the relational operations and the at least one non-relational operation. | 12-05-2013 |
20130347002 | PERFORMANT RUNTIME PAUSE WITH NO CPU UTILIZATION - Some computing devices have limited resources such as, for example, battery power. When a user ceases to interact with an application, execution of the application can be moved to background and the application can be paused. During the time period in which the application is paused, the application consumes no CPU cycles because executing managed threads of the paused application are stopped, and native threads are prevented from running using asynchronous procedure calls. | 12-26-2013 |
20140007135 | MULTI-CORE SYSTEM, SCHEDULING METHOD, AND COMPUTER PRODUCT | 01-02-2014 |
20140026149 | SYSTEM AND METHOD FOR EXECUTION TIME DONATION IN A TIME-PARTITIONING SCHEDULER - A system and method donates time from a first process to a second process. The method includes determining a time slice for each of a plurality of processes to generate a schedule therefrom. The method includes determining a time donation scheme for the first process, the time donation scheme indicative of a donation policy in which the execution time of the first process is donated to the second process. During execution of the processes, the method includes receiving a request from the first process for a time donation to the second process and executing the second process during the time slice of the first process. | 01-23-2014 |
20140026150 | PARALLEL PROCESSING SYSTEM - Software development tools and techniques for configuring parallel processing systems to execute software modules implementing processes for solving complex problems, including over-the-counter trading processes and foreign exchange trading processes, to execute quickly and efficiently. The parallel processing system may include low-cost, consumer-grade multicore processing units. A process for solving a complex problem may be divided into software modules, including by evaluating the process to determine discrete processing steps that produce an intermediate result on which later steps of the process depend. The software modules created for a process may form a template processing chain describing multiple processing chains of the process that are to be executed. A software development tool for producing configuration information for multicore processing units may evaluate the software modules and the processing chains to determine whether the modules will execute quickly and efficiently on the multicore processing units of the parallel processing system. | 01-23-2014 |
20140040915 | OPEN STATION CANONICAL OPERATOR FOR DATA STREAM PROCESSING - Customizing functions performed by data flow operators when processing data streams. One or more open-executors are provided as part of the data stream analytics platform; such an open-executor allows for both: 1) customizing user plug-ins for the operators, to accommodate changes in user requirements; and 2) predefining templates that are based on specific meta-properties of various operators and that are common among them. | 02-06-2014 |
20140047455 | DETERMINISTIC PARALLELIZATION THROUGH ATOMIC TASK COMPUTATION - A method for deterministic locking in a parallel computing environment is provided. The method includes creating a data structure in memory of a computer for a shared resource. The data structure encapsulates a reference to an owner of a lock for the shared resource and a queue of threads able to seek exclusive access to the shared resource. The queue in turn includes different entries, each entry including an identifier for a corresponding one of the threads and a deterministic time computed for the corresponding one of the threads from a count of memory accesses occurring in the corresponding one of the threads. Consequently, a thread can be selected from the queue to receive ownership of the lock and exclusive access to the shared resource based upon a deterministic time for the selected thread as compared to other deterministic times for others of the threads in the queue, for example, a lowest deterministic time. | 02-13-2014 |
20140053164 | Region-Weighted Accounting of Multi-Threaded Processor Core According to Dispatch State - According to one embodiment of the present disclosure, an approach is provided in which a thread is selected from multiple active threads, along with a corresponding weighting value. Computational logic determines whether one of the multiple threads is dispatching an instruction and, if so, computes a dispatch weighting value using the selected weighting value and a dispatch factor that indicates a weighting adjustment of the selected weighting value. In turn, a resource utilization value of the selected thread is computed using the dispatch weighting value. | 02-20-2014 |
20140089938 | MULTI-THREAD PROCESSOR AND ITS HARDWARE THREAD SCHEDULING METHOD - A multi-thread processor including a plurality of hardware threads each of which generates an independent instruction flow, a thread scheduler that outputs a thread selection signal in accordance with a schedule, the thread selection signal designating a hardware thread to be executed in a next execution cycle among the plurality of hardware threads, and a first selector that selects one of the plurality of hardware threads according to the thread selection signal and outputs an instruction generated by the selected hardware thread. The thread scheduler specifies execution of at least one hardware thread pre-selected among the plurality of hardware threads in a predetermined first execution period, and specifies execution of a variably selected hardware thread in a second execution period other than the first execution period. A time ratio between the predetermined first execution period and the second execution period is set according to processing requests. | 03-27-2014 |
20140089939 | Resolving RCU-Scheduler Deadlocks - A technique for resolving deadlocks between an RCU subsystem and an operating system scheduler. An RCU reader manipulates a counter when entering and exiting an RCU read-side critical section. At the entry, the counter is incremented. At the exit, the counter is manipulated differently depending on the counter value. A first counter manipulation path is taken when the counter indicates a task-context RCU reader is exiting an outermost RCU read-side critical section. This path includes condition-based processing that may result in invocation of the operating system scheduler. The first path further includes a deadlock protection operation that manipulates the counter to prevent an intervening RCU reader from taking the same path. The second manipulation path is taken when the counter value indicates a task-context RCU reader is exiting a non-outermost RCU read-side critical section, or an RCU reader is nested within the first path. This path bypasses the condition-based processing. | 03-27-2014 |
20140181837 | DYNAMICALLY MANAGING DISTRIBUTION OF DATA AND COMPUTATION ACROSS CORES FOR SEQUENTIAL PROGRAMS - Technologies are generally provided for dynamically managing execution of sequential programs in a multi-core processing environment by dynamically hosting the data for the different dynamic program phases in the local caches of different cores. This may be achieved through monitoring data access patterns of a sequential program initially executed on a single core. Based on such monitoring, data identified as being accessed by different program phases may be sent to be stored in the local caches of different cores. The computation may then be moved from core to core based on which data is being accessed, when the program changes phase. Program performance may thus be enhanced by reducing local cache miss rates, proactively reducing the possibility of thermal hotspots, as well as by utilizing otherwise idle hardware. | 06-26-2014 |
20140181838 | Cancellable Command Application Programming Interface (API) Framework - Embodiments are provided that include the use of a cancellable command application programming interface (API) framework that provides cooperative multitasking for synchronous and asynchronous operations based in part on a command timing sequence and a cancellable command API definition. A method of an embodiment enables a user or programmer to use a cancellable command API definition as part of implementing a responsive application interface using a command timing sequence to control execution of active tasks. A cancellable command API framework of an embodiment includes a command block including a command function, a task engine to monitor the command function, and a timer component to control execution of asynchronous and synchronous tasks based in part on first and second control timing intervals associated with a command timing sequence. Other embodiments are also disclosed. | 06-26-2014 |
20140181839 | CAPACITY-BASED MULTI-TASK SCHEDULING METHOD, APPARATUS AND SYSTEM - The present disclosure is applied to the technical field of data processing, and provides a capacity-based multi-task scheduling method, apparatus, and system. The method comprises: a scheduling node receiving a request for acquiring a task sent by a task executing node, the request carrying a current load value and an available memory space of the task executing node; and the scheduling node deciding whether the current load value is less than a threshold, and carrying out task scheduling for the task executing node according to the available memory space of the task executing node if the current load value is less than the threshold. The present disclosure can effectively avoid problems such as overload and insufficient memory at the task executing node, and increases the resource utilization rate of the task executing node as well as task scheduling and execution efficiency. | 06-26-2014 |
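A toy version of the scheduling decision, with an illustrative load threshold and dict-shaped nodes and tasks that are not from the disclosure:

```python
LOAD_THRESHOLD = 0.8   # illustrative threshold, not from the disclosure

def schedule_for(node, pending_tasks):
    """Sketch of the scheduling node's decision: only assign work when the
    executing node's load is below the threshold, and size the assignment
    by the node's available memory."""
    if node["load"] >= LOAD_THRESHOLD:
        return []                        # node is too busy: assign no tasks
    assigned, budget = [], node["available_memory"]
    for task in pending_tasks:
        if task["memory_needed"] <= budget:
            assigned.append(task)
            budget -= task["memory_needed"]   # respect remaining capacity
    return assigned

node = {"load": 0.5, "available_memory": 512}
tasks = [{"id": 1, "memory_needed": 300}, {"id": 2, "memory_needed": 400}]
print([t["id"] for t in schedule_for(node, tasks)])   # [1]: only task 1 fits
```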
20140259026 | METHOD AND DEVICE FOR EXECUTING MULTI-TASK BY MICROCONTROLLER OF ELECTRONIC CIGARETTE - The present invention discloses a method and a device for executing multiple tasks by a microcontroller of an electronic cigarette. The method includes these steps: determining the tasks to be executed by the microcontroller and an allowed time interval between two executions of each of the tasks; dividing the executing time of each task into a plurality of ordered time slices, and making the sum of the time slices of each task less than or equal to the minimum of all the allowed time intervals; setting a status bit for each task, and directing the status bit to a time slice of the task; and executing each task according to the time slice corresponding to the current status bit of the task, and switching to the next task when the time slice corresponding to the current status bit ends. | 09-11-2014 |
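The slicing discipline can be mimicked with a round-robin loop over per-task slice lists; the task names and slice bodies below are invented, and each slice is assumed short enough that one full round fits inside the smallest allowed interval, as the method requires:

```python
# Each task is a list of short steps (its time slices); a status index per
# task points at the next slice to run.
tasks = {
    "heater":  [lambda: print("heater slice 0"), lambda: print("heater slice 1")],
    "display": [lambda: print("display slice 0")],
}
status = {name: 0 for name in tasks}     # status bit/index per task

def tick():
    """Run one slice of every task, then advance each status circularly, so
    no task runs longer than one slice before the next task gets a turn."""
    for name, slices in tasks.items():
        slices[status[name]]()                       # execute the current slice
        status[name] = (status[name] + 1) % len(slices)

for _ in range(2):   # two rounds of the scheduler loop
    tick()
```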
20140282603 | METHOD AND APPARATUS FOR DETECTING A COLLISION BETWEEN MULTIPLE THREADS OF EXECUTION FOR ACCESSING A MEMORY ARRAY - A method includes determining, for a first thread of execution, a first speculative decoded operands signal and determining, for a second thread of execution, a second speculative decoded operands signal. The method further includes determining, for the first thread of execution, a first constant and determining, for the second thread of execution, a second constant. The method further compares the first speculative decoded operands signal to the second speculative decoded operands signal and uses the first and second constant to detect a wordline collision for accessing the memory array. | 09-18-2014 |
20140282604 | QUALIFIED CHECKPOINTING OF DATA FLOWS IN A PROCESSING ENVIRONMENT - Techniques are disclosed for qualified checkpointing of a data flow model having data flow operators and links connecting the data flow operators. A link of the data flow model is selected based on a set of checkpoint criteria. A checkpoint is generated for the selected link. The checkpoint is selected from different checkpoint types. The generated checkpoint is assigned to the selected link. The data flow model, having at least one link with no assigned checkpoint, is executed. | 09-18-2014 |
20140282605 | QUALIFIED CHECKPOINTING OF DATA FLOWS IN A PROCESSING ENVIRONMENT - Techniques are disclosed for qualified checkpointing of a data flow model having data flow operators and links connecting the data flow operators. A link of the data flow model is selected based on a set of checkpoint criteria. A checkpoint is generated for the selected link. The checkpoint is selected from different checkpoint types. The generated checkpoint is assigned to the selected link. The data flow model, having at least one link with no assigned checkpoint, is executed. | 09-18-2014 |
20140298351 | PARALLEL OPERATION METHOD AND INFORMATION PROCESSING APPARATUS - An information processing apparatus assigns the calculation of a first submatrix included in a matrix including zero elements and non-zero elements to a first thread and the calculation of a second submatrix included in the matrix to a second thread. The information processing apparatus compares the distribution of non-zero elements in the rows or columns of the first submatrix with the distribution of non-zero elements in the rows or columns of the second submatrix. The information processing apparatus determines allocation of storage areas for storing vectors to be respectively used in the calculations by the first and second threads, according to the result of the comparison. | 10-02-2014 |
20140310723 | DATA PROCESSING APPARATUS, TRANSMITTING APPARATUS, TRANSMISSION CONTROL METHOD, SCHEDULING METHOD, AND COMPUTER PRODUCT - A data processing apparatus includes a processor configured to receive an interrupt request that is a trigger for execution of an interrupt process executed by the processor; store the received interrupt request to a recording area; calculate, based on the time when the interrupt request is received and particular time information read from the recording area, a predicted time when a subsequent interrupt request is to be received; detect a thread to be executed by the processor, among executable threads of the processor; judge, based on the calculated predicted time and a current time, whether there is a possibility of the interrupt process being executed while the detected thread is under execution; decide, based on the judgment result, whether to execute the detected thread on the processor; and execute the detected thread on the processor, based on the decision result. | 10-16-2014 |
20140337855 | Termination of Requests in a Distributed Coprocessor System - A system and method of terminating processing requests dispatched to a coprocessor hardware accelerator in a multi-processor computer system, based on matching various fields in the request made to the coprocessor to identify the process to be terminated. A kill command is initiated by a write operation to a coprocessor block kill register and carries a match enable and a match value for each field in the coprocessor request to be terminated. Enabled fields may have one or more values associated with a single request or with multiple requests for the same coprocessor. At least one match enable must be set to initiate a kill request. A process kill active signal prevents other coprocessor jobs from moving between operational stages in the coprocessor hardware accelerator. Processing jobs that are idle or that do not match the fields with match enables set signal done with no match and continue processing. Processing jobs that do match the fields with match enables set are terminated and signal done with match. When all processing jobs have signaled done, a done bit is set in the coprocessor block kill register to indicate completion of the kill to the initiating software. The register also holds the match status of each processing job. | 11-13-2014 |
20140337856 | DATA PARALLEL PROCESSING APPARATUS WITH MULTI-PROCESSOR AND METHOD THEREOF - The present invention suggests a data parallel processing device that performs parallel processing on input data by varying the flow ID generating manner depending on the load on the processors in a multi-processor structure configured as a processor array. The suggested device includes a flow ID generating unit which generates, for input data, a flow ID that is differentiated in accordance with a status of a buffer; a data allocating unit which allocates data having the same flow ID to a specified processor; and a data processing unit which sequentially processes the data allocated to each processor, so that parallel processing performance is improved as compared with the related art. | 11-13-2014 |
20140337857 | Synchronizing Multiple Threads Efficiently - In one embodiment, the present invention includes a method of assigning a location within a shared variable for each of multiple threads and writing a value to a corresponding location to indicate that the corresponding thread has reached a barrier. In such manner, when all the threads have reached the barrier, synchronization is established. In some embodiments, the shared variable may be stored in a cache accessible by the multiple threads. Other embodiments are described and claimed. | 11-13-2014 |
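A compact sketch of the per-thread-slot barrier just described; where the abstract has threads spinning on a cached shared variable, this portable Python version waits on a condition variable instead, and the class name is invented:

```python
import threading

class FlagBarrier:
    """One-shot barrier: each thread owns a slot in a shared array and writes
    a value there on arrival; all slots set means everyone has arrived."""

    def __init__(self, n_threads):
        self.flags = [0] * n_threads   # one location per thread
        self._cond = threading.Condition()

    def arrive(self, slot):
        with self._cond:
            self.flags[slot] = 1              # announce arrival in own slot
            if all(self.flags):
                self._cond.notify_all()       # last arrival releases everyone
            else:
                while not all(self.flags):
                    self._cond.wait()

barrier = FlagBarrier(3)
threads = [threading.Thread(target=barrier.arrive, args=(i,)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print("all threads synchronized")
```

Giving each thread its own write location avoids contended atomic updates to a single counter; arrival is a plain store, and only the check for completion reads the whole variable.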
20140344831 | EXCLUDING COUNTS ON SOFTWARE THREADS IN A STATE - The present disclosure provides a method, computer program product, and system for compensating for event counts occurring on a thread during targeted states of that thread. In example embodiments, the state is a spin loop state: instructions completed during the spin loop are eliminated from a performance report, which is presented as if the spin loop were absent. In another embodiment, the event counts are interrupt counts eliminated during the spin loop. | 11-20-2014 |
20140351826 | APPLICATION PROGRAMMING INTERFACE TO ENABLE THE CONSTRUCTION OF PIPELINE PARALLEL PROGRAMS - An application programming interface (API) provides various software constructs that allow a developer to assemble a processing pipeline having arbitrary structure and complexity. Once assembled, the processing pipeline is configured to include a set of interconnected pipestages. Those pipestages are associated with one or more different CTAs that may execute in parallel with one another on a parallel processing unit. The developer specifies the configuration of the pipestages, including the configuration of the different CTAs across all pipestages, as well as the different processing operations performed by each different CTA. | 11-27-2014 |
20140351827 | APPLICATION PROGRAMMING INTERFACE TO ENABLE THE CONSTRUCTION OF PIPELINE PARALLEL PROGRAMS - An application programming interface (API) provides various software constructs that allow a developer to assemble a processing pipeline having arbitrary structure and complexity. Once assembled, the processing pipeline is configured to include a set of interconnected pipestages. Those pipestages are associated with one or more different CTAs that may execute in parallel with one another on a parallel processing unit. The developer specifies the configuration of the pipestages, including the configuration of the different CTAs across all pipestages, as well as the different processing operations performed by each different CTA. | 11-27-2014 |
20140359635 | PROCESSING DATA BY USING SIMULTANEOUS MULTITHREADING - A computer implemented method and system for data processing. The method includes: (a) setting at least one SMT preliminary value for at least one operating node; (b) monitoring performance metrics for the at least one operating node set to the at least one SMT preliminary value; and (c) determining an SMT revised value based on the performance metrics. The system includes: a memory; a processor communicatively coupled to the memory; and a feature selection module communicatively coupled to the memory and processor, wherein the feature selection module is configured to perform steps of a method including: setting, using a setting device, at least one SMT preliminary value for at least one operating node; monitoring, using a monitoring device, performance metrics for the at least one operating node set to the at least one SMT preliminary value; and determining, using a determining device, an SMT revised value based on the performance metrics. | 12-04-2014 |
20150033242 | Method for Automatic Parallel Computing - A method for automatic task-level parallelization of execution of a computer program with automatic concurrency control. According to this invention, shared data in memory must be queried. Such memory queries represent side-effects of their enclosing tasks and allow determining how tasks must be executed with regard to each other based on intersections of their queried data. Tasks that have intentions to modify the same data (their side-effects intersect) must be executed sequentially; otherwise, tasks can be executed in parallel. | 01-29-2015 |
20150052538 | PARALLEL PROCESSING SYSTEM - Software development tools and techniques for configuring parallel processing systems to execute software modules implementing processes for solving complex problems, including over-the-counter trading processes and foreign exchange trading processes, to execute quickly and efficiently. The parallel processing system may include low-cost, consumer-grade multicore processing units. A process for solving a complex problem may be divided into software modules, including by evaluating the process to determine discrete processing steps that produce an intermediate result on which later steps of the process depend. The software modules created for a process may form a template processing chain describing multiple processing chains of the process that are to be executed. A software development tool for producing configuration information for multicore processing units may evaluate the software modules and the processing chains to determine whether the modules will execute quickly and efficiently on the multicore processing units of the parallel processing system. | 02-19-2015 |
20150058866 | CALIBRATED TIMEOUT INTERVAL FOR CONCURRENT SHARED INACTIVITY TIMER - A processor-implemented method for implementing a shared counter architecture is provided. The method may include receiving, by a worker thread, an application request; recording, by a common timer thread, a shared timer value and acquiring, by the worker thread, the shared timer value. The method may further include recording, by the common timer thread, a shared calibration factor; acquiring, by the worker thread, a configuration value corresponding to the application request and generating, by the worker thread, a calibrated timeout interval for the application request based on the shared calibration factor, the shared timer value, and the configuration value. The method may further include registering, by the worker thread, the calibrated timeout interval for the application request on a current timeout list; determining, by the common timer thread, a timeout occurrence for the application request based on the registered calibrated timeout interval; and releasing resources based on the timeout occurrence. | 02-26-2015 |
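The abstract does not spell out how the three inputs combine, so the sketch below simply scales the configured interval by the calibration factor and adds it to the shared timer snapshot; all names and the formula itself are illustrative assumptions:

```python
import time

def calibrated_timeout(shared_timer, calibration, configured_interval):
    """Deadline derived from the common timer thread's shared snapshot,
    scaled by the calibration factor, so worker threads avoid making
    their own (comparatively expensive) clock calls."""
    return shared_timer + configured_interval * calibration

# Values the common timer thread records periodically (illustrative).
shared_timer_value = time.monotonic()   # shared timer snapshot
shared_calibration = 1.02               # drift-correction factor

# A worker thread registers its request on the current timeout list.
timeout_list = []
deadline = calibrated_timeout(shared_timer_value, shared_calibration, 5.0)
timeout_list.append(("request-42", deadline))

# The common timer thread later scans for expirations and releases resources.
expired = [req for req, d in timeout_list if time.monotonic() >= d]
```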
20150067699 | Activity Interruption Management - In response to determining that an activity has been postponed (e.g., interrupted or deferred), a computer system stores a record indicating that the activity is postponed. In response to determining that another activity has become active, the computer system stores a record indicating that the other activity is active. The computer system reminds a user to return to the postponed activity in response to determining that a reminder condition associated with the postponed activity has been satisfied. For example, the computer system may remind the user to return to the postponed activity in response to determining that the other activity has been completed. | 03-05-2015 |
20150067700 | METHOD AND APPARATUS FOR PERFORMING TASK SCHEDULING IN TERMINAL - A method and an apparatus for performing task scheduling in a terminal are provided. The terminal includes at least two different types of cores and determines whether a change in task state has occurred for at least one of the cores. If a change in task state has occurred, the terminal determines, for said at least one core, the variation in the duration of each of a plurality of tasks being executed, predicts the duration of each of the plurality of tasks on the basis of the change in task state using the determined variation, and performs task scheduling for said at least one core in accordance with the predicted durations. | 03-05-2015 |
20150074682 | PROCESSOR AND CONTROL METHOD OF PROCESSOR - A core executing processes in plural threads specifies, at every certain process in each thread, one gate among plural gates disposed in a loop and reads out the state of that gate from a thread progress control unit holding information on the gates. When the state of a gate is set to a first state, the state of the gate disposed subsequently to it is set to a second state, and the state of a gate set to the second state is returned to the first state when a certain period of time elapses from the first request to read its state. The core executes a next process when the state of the specified gate is the first state, and makes execution of the next process wait until the state becomes the first state when it is not. | 03-12-2015 |
20150082319 | High-Performance Parallel Traffic Management for Multi-Core Platforms - A method of traffic management implemented in a multi-core device comprising a first core and a second core, the method comprising receiving a first plurality of data flows for the first core and a second plurality of data flows for the second core, assigning a first thread running on the first core to the first plurality of data flows, assigning a second thread running on the second core to the second plurality of data flows, processing the first plurality of data flows using the first thread, and processing the second plurality of data flows using the second thread, wherein at least one of the first plurality of data flows and at least one of the second plurality of data flows are processed in parallel. | 03-19-2015 |
20150082320 | APPARATUS, METHOD, AND COMPUTER-READABLE RECORDING MEDIUM FOR PROCESSING DATA - A data processing apparatus includes a first storage that stores information pertaining to an order in which multiple process flows are executed, the execution of the multiple process flows being started by input of data and terminated by output of the data in a format usable for a user, a reception part that receives input data with respect to one of the multiple process flows, an execution part that executes the one of the multiple process flows on the input data, and a second storage that stores information indicating the one of the multiple process flows executed by the execution part. The execution part identifies a predetermined process flow to be executed with respect to the input data by referring to the first and second storages. The predetermined process flow is a process flow to be performed earliest among the multiple process flows that are not yet executed. | 03-19-2015 |
20150113542 | KNAPSACK-BASED SHARING-AWARE SCHEDULER FOR COPROCESSOR-BASED COMPUTE CLUSTERS - A method is provided for controlling a compute cluster having a plurality of nodes. Each of the plurality of nodes has a respective computing device with a main server and one or more coprocessor-based hardware accelerators. The method includes receiving a plurality of jobs for scheduling. The method further includes scheduling the plurality of jobs across the plurality of nodes responsive to a knapsack-based sharing-aware schedule generated by a knapsack-based sharing-aware scheduler. The knapsack-based sharing-aware schedule is generated to co-locate together on a same computing device certain ones of the plurality of jobs that are mutually compatible based on a set of requirements whose fulfillment is determined using a knapsack-based sharing-aware technique that uses memory as a knapsack capacity and minimizes makespan while adhering to coprocessor memory and thread resource constraints. | 04-23-2015 |
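A greedy first-fit stand-in for the knapsack packing (the patent's formulation additionally minimizes makespan rather than packing greedily); the node and job shapes are invented for illustration:

```python
def colocate(jobs, nodes):
    """Pack mutually compatible jobs onto the same device, treating each
    node's coprocessor memory as a knapsack capacity and also respecting
    a thread-count limit."""
    schedule = {n["name"]: [] for n in nodes}
    for job in sorted(jobs, key=lambda j: j["mem"], reverse=True):
        for node in nodes:
            placed = schedule[node["name"]]
            if (sum(j["mem"] for j in placed) + job["mem"] <= node["mem"]
                    and sum(j["threads"] for j in placed) + job["threads"]
                        <= node["threads"]):
                placed.append(job)            # job fits: co-locate it here
                break                         # (unplaceable jobs wait for later)
    return schedule

nodes = [{"name": "node0", "mem": 8, "threads": 240}]
jobs = [{"id": "a", "mem": 5, "threads": 120},
        {"id": "b", "mem": 3, "threads": 100}]
print({n: [j["id"] for j in js] for n, js in colocate(jobs, nodes).items()})
# {'node0': ['a', 'b']} -> both jobs fit the knapsack, so they share the device
```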
20150113543 | COORDINATING DEVICE AND APPLICATION BREAK EVENTS FOR PLATFORM POWER SAVING - Systems and methods of managing break events may provide for detecting a first break event from a first event source and detecting a second break event from a second event source. In one example, the event sources can include devices coupled to a platform as well as active applications on the platform. Issuance of the first and second break events to the platform can be coordinated based at least in part on runtime information associated with the platform. | 04-23-2015 |
20150128151 | Method And System For Mapping An Integral Into A Thread Of A Parallel Architecture - A method is disclosed for mapping an integral into a thread of a parallel architecture, in the course of which the integral is mapped into a summation expressed by coefficient values and summation values, and a directed graph is generated corresponding to the computation of the summation. Furthermore, in the course of the method a level of a traversal sequence is assigned to each of the nodes, and at each level of the traversal sequence, a storage location of the intermediate value corresponding to the edge connected with its input to the node corresponding to the given level is specified in a memory corresponding to the thread and including a register storage, a local storage, and a global storage. A system is also disclosed for mapping an integral into a thread of a parallel architecture. | 05-07-2015 |
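As a concrete instance of "an integral mapped into a summation expressed by coefficient values and summation values", here is a trapezoidal-rule sketch; the graph-level storage assignment (register, local, global) is only hinted at in the comments, and the function name is invented:

```python
# A definite integral approximated as the kind of summation the method maps
# onto a thread: integral over [a, b] of f(x) dx, approximately
# sum_i c_i * f(x_i), with trapezoidal coefficients c_i.
def quadrature_sum(f, a, b, n):
    h = (b - a) / n
    coeffs = [h / 2 if i in (0, n) else h for i in range(n + 1)]  # c_i values
    xs = [a + i * h for i in range(n + 1)]                        # summation points
    total = 0.0
    for c, x in zip(coeffs, xs):   # each partial product is an intermediate
        total += c * f(x)          # value the method would place in register,
    return total                   # local, or global storage by graph level

print(quadrature_sum(lambda x: x * x, 0.0, 1.0, 1000))   # ~0.3333 (= 1/3)
```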
20150135194 | Estimating Time Remaining for an Operation - Techniques for estimating time remaining for an operation are described. Example operations include file operations, such as file move operations, file copy operations, and so on. A wide variety of different operations may be considered in accordance with the claimed embodiments, further examples of which are discussed below. In at least some embodiments, estimating a time remaining for an operation can be based on a state of the operation. A state of an operation, for example, can be based on events related to the operation itself, such as the operation being initiated, paused, resumed, and so on. A state of an operation can also be based on events related to other operations. | 05-14-2015 |
20150143384 | NETWORK SYSTEM, NETWORK NODE AND COMMUNICATION METHOD - A network system configured to execute I/O commands and application commands in parallel, comprising a network and at least one network node, wherein the at least one network node is connected to the network via a network adapter and is configured to run several processes and/or threads in parallel. The at least one network node comprises, or is configured to establish, a common communication channel (C-channel) to be used by the several processes and/or threads for data communication with the network via the network adapter. The C-channel comprises, or is established to comprise, a work queue (WQ) for execution of I/O commands and a completion queue (CQ) for indication of the status of I/O commands, and the at least one network node, especially its C-channel, is configured for exclusive access by precisely one single process or thread out of the several processes and/or threads to the CQ of the C-channel at a particular time. | 05-21-2015 |
20150150022 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - There is provided an information processing apparatus including a determination unit configured to determine, for each application, a shift time length for each state shift occurring while an application changes from a non-usable state to a usable state, and a control unit configured to shift the state of an application to the non-usable state, the application being specified on the basis of a result of the determination by the determination unit. | 05-28-2015 |
20150150023 | EMOTION PROCESSING SYSTEMS AND METHODS - A system for conducting parallelization of tasks is disclosed. The system includes an interface for receiving messages comprising a representation of logic describing at least two tasks to be executed in parallel, each message further comprising a content payload for use in the tasks. The system further includes a parallel processing grid comprising devices running on independent machines, each device comprising a processing manager unit and at least two processing units. The processing manager is configured to parse the received messages and to distribute the at least two tasks to the at least two processing units for independent and parallel processing relative to the content payload. | 05-28-2015 |
20150293781 | INFORMATION PROCESSING TERMINAL - An information processing terminal including an application execution portion, a sub-application execution portion and a hidden screen display portion is provided. The application execution portion executes an application. The sub-application execution portion executes a sub-application in response to an execution request from the application execution portion. The sub-application is configured to provide a specified function for the application executed by the application execution portion. The hidden screen display portion, instead of displaying an execution screen indicating execution of the sub-application, displays a hidden screen hiding the execution of the sub-application while the sub-application execution portion is executing the sub-application. | 10-15-2015 |
20150301867 | DETERMINISTIC REAL TIME BUSINESS APPLICATION PROCESSING IN A SERVICE-ORIENTED ARCHITECTURE - Methods, apparatus, and products for deterministic real time business application processing in a service-oriented architecture (‘SOA’), the SOA including SOA services, each SOA service carrying out a processing step of the business application, and each SOA service being a real time process executable on a real time operating system of a generally programmable computer. Deterministic real time business application processing according to embodiments of the present invention includes configuring the business application with real time processing information and executing the business application in the SOA in accordance with the real time processing information. | 10-22-2015 |
20150324240 | OPERATION OF SOFTWARE MODULES IN PARALLEL - Embodiments of computer-implemented methods, systems, computing devices, and computer-readable media (transitory and non-transitory) are described herein for accelerating a task that includes operation of a plurality of software modules among a plurality of parallel processing threads. In various embodiments, operation of the software modules may include postponement of operation of a first of the plurality of software modules in a first of the processing threads until a determination that the first software module is not deemed in operation. In various embodiments, the first software module may be deemed in operation while itself in operation or while awaiting completion of operation of any other software module called by the first software module. | 11-12-2015 |
20150331717 | TASK GROUPING BY CONTEXT - In an approach to grouping tasks initialized by a first user, one or more computer processors receive a first task initialization by a first user. The one or more computer processors determine whether one or more additional tasks contained in one or more task groups are in use by the first user. Responsive to determining one or more additional tasks contained in one or more task groups are in use, the one or more computer processors determine whether the first task is related to at least one task of the one or more additional tasks. Responsive to determining the first task is related to at least one task of the one or more additional tasks, the one or more computer processors add the first task to the task group containing the at least one related task of the one or more additional tasks. | 11-19-2015 |
20150331718 | NETWORK PROCESSOR HAVING MULTICASTING PROTOCOL - A network processor is described that is configured to multicast multiple data packets to one or more engines. In one or more implementations, the network processor includes an input/output adapter configured to parse a plurality of tasks. The input/output adapter includes a multicast module configured to determine a reference count value based upon a maximum multicast value of the plurality of tasks. The input/output adapter is also configured to set a reference count decrement value within the control data portion of the plurality of tasks. The reference count decrement value is based upon the maximum multicast value. The input/output adapter is also configured to decrement the reference count value by a corresponding reference count decrement value upon receiving an indication from an engine. | 11-19-2015 |
20150355938 | SYSTEM AND METHOD FOR CONDITIONAL TASK SWITCHING DURING ORDERING SCOPE TRANSITIONS - A data processing system includes a processor core and a hardware module. The processor core performs tasks on data packets. The hardware module implements an ordering scope manager, which stores a first value in a first storage location. The first value indicates that exclusive execution of a first task in a first ordering scope is enabled. In response to a relinquish indicator being received, the ordering scope manager stores a second value in the first storage location. The second value indicates that exclusive execution of the first task in the first ordering scope is disabled. | 12-10-2015 |
20150355940 | IDLE TIME ACCUMULATION IN A MULTITHREADING COMPUTER SYSTEM - Embodiments relate to idle time accumulation in a multithreading computer system. According to one aspect, a computer-implemented method for idle time accumulation in a computer system is provided. The computer system includes a configuration having a plurality of cores and an operating system (OS)-image configurable between single thread (ST) mode and a multithreading (MT) mode in a logical partition. The MT mode supports multiple threads on shared resources per core simultaneously. The method includes executing a query instruction on an initiating core of the plurality of cores. The executing includes obtaining, by the OS-image, a maximum thread identification value indicating a current maximum thread identifier of the cores within the logical partition. The initiating core also obtains a multithreading idle time value for each of the cores indicating an aggregate amount of idle time of all threads enabled on each of the cores in the MT mode. | 12-10-2015 |
20150355952 | INFORMATION PROCESSING SYSTEM AND PROGRAM MIGRATION METHOD - A first executing unit executes a first program by emulating information processing in a first operational environment in which the first program is executable. A generating unit generates, in parallel with the execution of the first program, a second program which is executable in a second operational environment of an information processing system and which is capable of producing the same processing result as the first program. A second executing unit terminates the execution of the first program by the first executing unit and executes the second program, after the generation of the second program is completed. | 12-10-2015 |
20160019092 | METHOD AND APPARATUS FOR CLOSING PROGRAM, AND STORAGE MEDIUM - A method and an apparatus for closing a program, and a storage medium, are provided. The method includes: opening, by a mobile terminal, a task management area of a multitasking processing queue, where a response area is provided in the task management area; detecting, by the mobile terminal, a specified operation of a user in the response area; and closing, by the mobile terminal, all programs in the multitasking processing queue in response to the specified operation detected in the response area. The operation of closing background programs is simplified, and a user can close background programs just by performing a specified operation in the response area; therefore, the operation is simple and convenient. | 01-21-2016 |
20160041850 | COMPUTER SYSTEM AND CONTROL METHOD - When a CPU core ( | 02-11-2016 |
20160048412 | METHOD AND APPARATUS FOR SWITCHING APPLICATIONS - A method and an apparatus for switching applications are described. An embodiment of the method comprises the following steps: setting a first application as a resident application; displaying contemporaneously both a second application running in the foreground and an indication associated with the set resident application; and switching from the second application to the resident application, so that the resident application runs in the foreground, according to a preset condition for switching applications. | 02-18-2016 |
20160055032 | TASK TIME ALLOCATION METHOD ALLOWING DETERMINISTIC ERROR RECOVERY IN REAL TIME - A method for executing tasks of a real-time application on a multitasking computer, with steps including: defining time-windows, each associated with the execution of a processing operation of a task of the application; allocating to each processing operation having a time-window a time-quota and a time-margin, the time allocated to a processing operation by its time-quota and time-margin being shorter than the duration of its time-window; during execution of the application, activating each processing operation at the start of the time-window with which it is associated; on expiry of the time-quota of one of the processing operations, activating an error mode if execution of the processing operation has not been completed; and, if the error mode is active for one of the processing operations, executing an error handling operation for that processing operation during the remaining time allocated to it by its time-quota and time-margin. | 02-25-2016 |
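A cooperative sketch of one such time-window: a real-time kernel would enforce the quota preemptively, whereas this version only illustrates the quota/margin accounting, and all names are invented:

```python
import time

def run_window(operation, quota, margin, on_error):
    """Run the operation up to its time-quota; if it has not completed,
    enter error mode and give the error handler the remaining
    quota-plus-margin budget."""
    start = time.monotonic()
    done = operation(deadline=start + quota)        # cooperative: checks deadline
    if not done:                                    # quota expired: error mode
        on_error(deadline=start + quota + margin)   # handler gets remaining time

def flaky_step(deadline):
    while time.monotonic() < deadline:
        pass                                        # simulate work that overruns
    return False                                    # did not finish within quota

def recover(deadline):
    print("error mode: recovering, budget ends at", round(deadline, 2))

run_window(flaky_step, quota=0.01, margin=0.005, on_error=recover)
```

Keeping quota plus margin inside the window is what makes the recovery deterministic: even the error path finishes before the next window opens.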
20160055039 | TASK GROUPING BY CONTEXT - In an approach to grouping tasks initialized by a first user, one or more computer processors receive a first task initialization by a first user. The one or more computer processors determine whether one or more additional tasks contained in one or more task groups are in use by the first user. Responsive to determining one or more additional tasks contained in one or more task groups are in use, the one or more computer processors determine whether the first task is related to at least one task of the one or more additional tasks. Responsive to determining the first task is related to at least one task of the one or more additional tasks, the one or more computer processors add the first task to the task group containing the at least one related task of the one or more additional tasks. | 02-25-2016 |
20160062796 | Systems and Methods for Adaptive Integration of Hardware and Software Lock Elision Techniques - Particular techniques for improving the scalability of concurrent programs (e.g., lock-based applications) may be effective in some environments and for some workloads, but not others. The systems described herein may automatically choose appropriate ones of these techniques to apply when executing lock-based applications at runtime, based on observations of the application in the current environment and with the current workload. In one example, two techniques for improving lock scalability (e.g., transactional lock elision using hardware transactional memory, and optimistic software techniques) may be integrated together. A lightweight runtime library built for this purpose may adapt its approach to managing concurrency by dynamically selecting one or more of these techniques (at different times) during execution of a given application. In this Adaptive Lock Elision approach, the techniques may be selected (based on pluggable policies) at runtime to achieve good performance on different platforms and for different workloads. | 03-03-2016 |
20160077875 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT - An information processing device includes an event detector, a task generator, a target time setting unit, and a presenting unit. The event detector detects occurrence of an event. The task generator generates at least one task indicating an action to be executed when the detected event occurs. The target time setting unit sets, for each of the tasks corresponding to the detected event, a target completion time by which the task should be completed so that a state change generated due to occurrence of the event does not exceed a threshold, on the basis of an execution result of tasks corresponding to a past event related to the detected event. The presenting unit presents each of the tasks corresponding to the detected event and the target completion time of each of the tasks to a user. | 03-17-2016 |
20160077888 | SYSTEM AND METHOD FOR SUPPORTING COOPERATIVE NOTIFICATION OFFLOADING IN A DISTRIBUTED DATA GRID - A system and method for cooperative notification offloading supports thread notification offloading in a multi-threaded messaging system such as a distributed data grid. Pending notifiers are maintained in a collection of pending notifiers. A signaling thread processes a first notifier in the collection of pending notifiers to wake a first thread. The first awoken thread processes additional notifiers in the collection of pending notifiers to wake additional threads. The additional awoken threads can process additional notifiers in a cycle until all pending notifiers in the collection are processed. Such cooperative offloading of notifier processing from the signaling thread frees the signaling thread for other tasks, thereby improving performance of both the signaling thread and the distributed data grid. | 03-17-2016 |
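The wake-one-then-cooperate cycle can be shown with events held in a deque; each awoken waiter wakes at most one successor, so the signaling thread pays for only the first wake-up. The function names are illustrative:

```python
import threading
from collections import deque

pending = deque()                 # collection of pending notifiers
lock = threading.Lock()

def offer(notifier):
    with lock:
        pending.append(notifier)

def drain_one_and_continue():
    """Wake at most one more waiter; the awoken waiter repeats this, so the
    wake-up work is spread across awoken threads, not the signaler."""
    with lock:
        nxt = pending.popleft() if pending else None
    if nxt is not None:
        nxt.set()                 # wake the next thread in the collection

def waiter(event):
    event.wait()                  # blocked until someone signals us
    drain_one_and_continue()      # cooperate: wake the next pending waiter

events = [threading.Event() for _ in range(4)]
threads = [threading.Thread(target=waiter, args=(e,)) for e in events]
for t in threads: t.start()
for e in events: offer(e)
drain_one_and_continue()          # the signaling thread wakes only the first
for t in threads: t.join()
print("all waiters woken cooperatively")
```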
20160085597 | MANAGEMENT METHOD, MANAGEMENT APPARATUS, AND INFORMATION PROCESSING SYSTEM - A management method is executed by a management apparatus that manages a plurality of information processing apparatuses. The method includes specifying a first time at which a predetermined number of information processing apparatuses that execute parallel processing are securable, by referring to information that associates the content of processing to be executed by each of the plurality of information processing apparatuses with the period in which the processing is to be executed; specifying, from among the plurality of information processing apparatuses, one or more apparatuses each having a first period that is earlier than the first time and in which no processing is to be executed; and assigning the first period of each of the one or more apparatuses to preprocessing to be executed before the parallel processing. | 03-24-2016 |
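A minimal sketch of the scheduling idea, under the invented simplification that each apparatus is busy until a single known time: the first time at which N apparatuses are securable is the N-th smallest busy-until time, and any apparatus freeing up earlier has an idle gap that can absorb preprocessing.

```java
import java.util.*;

public class IdleGapPlanner {
    public static void main(String[] args) {
        Map<String, Integer> busyUntil = Map.of("n1", 3, "n2", 7, "n3", 5, "n4", 9);
        int needed = 3; // apparatuses required for the parallel processing

        List<Map.Entry<String, Integer>> byFreeTime = new ArrayList<>(busyUntil.entrySet());
        byFreeTime.sort(Map.Entry.comparingByValue());

        // First time at which `needed` apparatuses are securable.
        int startTime = byFreeTime.get(needed - 1).getValue();
        System.out.println("parallel processing can start at t=" + startTime); // t=7

        // Apparatuses free before the start time preprocess in the gap.
        for (var e : byFreeTime) {
            if (e.getValue() < startTime) {
                System.out.println(e.getKey() + " preprocesses during ["
                        + e.getValue() + ", " + startTime + ")");
            }
        }
    }
}
```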
20160092268 | SYSTEM AND METHOD FOR SUPPORTING A SCALABLE THREAD POOL IN A DISTRIBUTED DATA GRID - A system and method for supporting a scalable thread pool in a multi-threaded processing environment such as a distributed data grid. A work distribution system utilizes a collection of association piles to hold elements communicated between a service thread and multiple worker threads. Worker threads associated with the association piles poll elements in parallel. Polled elements are not released until returned from the worker thread. First-in-first-out ordering of operations is maintained with respect to related elements by ensuring that related elements are held in the same association pile and by preventing polling of related elements until any previously polled, related elements have been released. By partitioning the elements across multiple association piles while ensuring proper ordering of operations with respect to related elements, the scalable thread pool enables the use of large thread pools with reduced contention compared to a conventional single-producer, multiple-consumer queue. | 03-31-2016 |
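The pile-partitioning idea can be approximated with plain queues. In this hypothetical sketch (the hold-until-released protocol from the abstract is omitted for brevity), elements sharing an association key hash to the same pile and thus the same worker, which preserves per-key FIFO order while unrelated elements spread across workers.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class AssociationPiles {
    record Element(String associationKey, Runnable work) {}

    private final List<BlockingQueue<Element>> piles = new ArrayList<>();

    AssociationPiles(int pileCount) {
        for (int i = 0; i < pileCount; i++) {
            BlockingQueue<Element> pile = new LinkedBlockingQueue<>();
            piles.add(pile);
            Thread worker = new Thread(() -> {
                try {
                    while (true) pile.take().work().run();
                } catch (InterruptedException e) { /* shut down */ }
            });
            worker.setDaemon(true);
            worker.start();
        }
    }

    // Same key -> same pile -> same worker -> per-key FIFO ordering.
    void submit(Element e) throws InterruptedException {
        int pile = Math.floorMod(e.associationKey().hashCode(), piles.size());
        piles.get(pile).put(e);
    }

    public static void main(String[] args) throws InterruptedException {
        AssociationPiles pool = new AssociationPiles(4);
        for (int i = 0; i < 3; i++) {
            int n = i; // related elements: executed by one worker, in order
            pool.submit(new Element("order-1", () -> System.out.println("step " + n)));
        }
        Thread.sleep(100); // give the daemon worker time to drain the pile
    }
}
```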
20160117192 | CONTROLLING EXECUTION OF THREADS IN A MULTI-THREADED PROCESSOR - Execution of threads in a processor core is controlled. The processor core supports simultaneous multi-threading (SMT), so that multiple logical central processing units (CPUs) can effectively operate simultaneously on the same physical processor hardware. Each of these logical CPUs is considered a thread. In such a multi-threading environment, it may be desirable for one thread to stop the other threads on the processor core from executing, for example when it is running a critical sequence, or another sequence that needs the processor core's resources or manipulates those resources in a way with which the execution of other threads would interfere. | 04-28-2016 |
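A software analog of one thread holding the others can be sketched as follows. This is a polling approximation invented for illustration (the abstract describes a hardware SMT facility that stops threads directly): other threads pause only when they next reach a safe point.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class CoreQuiesce {
    private static final AtomicBoolean quiesce = new AtomicBoolean(false);

    // Other threads call this periodically and spin while quiesced.
    static void safePoint() {
        while (quiesce.get()) Thread.onSpinWait();
    }

    // One thread runs its critical sequence with the others held.
    static void runCriticalSequence(Runnable critical) {
        quiesce.set(true);
        try {
            critical.run();
        } finally {
            quiesce.set(false); // release the other threads
        }
    }
}
```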
20160139956 | MONITORING OVERTIME OF TASKS - A computer system monitors the execution time of each of a plurality of tasks over a plurality of time periods. The system receives a first input that selects a particular time period from the plurality of time periods, and further monitors the execution time of the plurality of tasks in the selected time period. The system receives a second input that selects a particular task from the particular time period, and monitors the execution time of the particular task in the particular time period. | 05-19-2016 |
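The two-step drill-down maps naturally onto nested collections. The storage layout below is invented for illustration: execution times are recorded per period and per task, and the two inputs from the abstract become two selections.

```java
import java.util.*;

public class OvertimeMonitor {
    // period -> task -> observed execution times (millis)
    private final Map<String, Map<String, List<Long>>> log = new HashMap<>();

    void record(String period, String task, long millis) {
        log.computeIfAbsent(period, p -> new HashMap<>())
           .computeIfAbsent(task, t -> new ArrayList<>())
           .add(millis);
    }

    // First input: select a particular time period.
    Map<String, List<Long>> selectPeriod(String period) {
        return log.getOrDefault(period, Map.of());
    }

    // Second input: select a particular task within that period.
    List<Long> selectTask(String period, String task) {
        return selectPeriod(period).getOrDefault(task, List.of());
    }
}
```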
20160154684 | DATA PROCESSING SYSTEM AND DATA PROCESSING METHOD | 06-02-2016 |
20160162332 | PROACTIVE PRESENTATION OF MULTITASK WORKFLOW COMPONENTS TO INCREASE USER EFFICIENCY AND INTERACTION PERFORMANCE - A multitask workflow is proactively identified based upon user context information. For discrete tasks of the multitask workflow, modules directed to such tasks are identified from among other modules also directed to the same task, and are proactively presented to the user. Modules are selected based upon predetermined values associated with such modules, which can be indicative of capabilities, relationships, incentives associated with presentation of the modules to the user, and other like valuations. The modules offer visually enticing experiences to aid the user in performing a task of the multitask workflow, thereby increasing the user's interaction performance. Additionally, the modules exchange information to increase user efficiency in performing the multitask workflow. Multiple computing devices associated with a user can execute different modules of the multitask workflow, enabling two or more users to collaborate on the multitask workflow or otherwise research and perform tasks associated with the multitask workflow. | 06-09-2016 |
20160188363 | METHOD, APPARATUS, AND DEVICE FOR MANAGING TASKS IN MULTI-TASK INTERFACE - A method for managing a task in a terminal device is provided. The method includes displaying a multi-task interface. The multi-task interface includes one or more task interfaces, where at least one of the task interfaces includes a task presenting area and a task operation area, and the task operation area includes an operating element for performing a function of the task. The method may further include, based on a user selection of the operating element in the task operation area, running an application corresponding to the task and executing an operation corresponding to the operating element. | 06-30-2016 |
20160188379 | ADJUSTMENT OF EXECUTION OF TASKS - A system and method for distributed computing, including executing a job of distributed computing on compute nodes. The speeds of the parallel tasks of the job executing on the compute nodes are adjusted to increase performance of the job, to lower its power consumption, or both, wherein the adjusting is based on imbalances among the respective speeds of the parallel tasks. | 06-30-2016 |
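The imbalance-based adjustment admits a one-screen sketch. The model is invented: with the slowest task fixing the job's completion time, any task that finishes early can be slowed to the straggler's pace, saving power without delaying the job.

```java
public class SpeedBalancer {
    public static void main(String[] args) {
        double[] taskSeconds = {8.0, 10.0, 6.0, 9.5}; // measured per-task times
        double slowest = 0;
        for (double t : taskSeconds) slowest = Math.max(slowest, t);

        for (int i = 0; i < taskSeconds.length; i++) {
            // A task finishing in 6s of a 10s job can run at 0.60x speed.
            double relativeSpeed = taskSeconds[i] / slowest;
            System.out.printf("task %d: scale speed to %.2fx%n", i, relativeSpeed);
        }
    }
}
```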
20160203032 | SERIES DATA PARALLEL ANALYSIS INFRASTRUCTURE AND PARALLEL DISTRIBUTED PROCESSING METHOD THEREFOR | 07-14-2016 |
20160378164 | ELECTRONIC APPARATUS AND NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM - Provided is an electronic apparatus including: a processor including a plurality of cores that divide execution of a plurality of tasks among themselves; a total task amount calculation circuit that calculates, for each of the cores, a total task amount as a total processing amount of the plurality of tasks to be executed; and a sleep shift processing circuit that causes one of the plurality of cores to shift to a sleep mode based on the total task amount calculated for each of the cores. | 12-29-2016 |
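A minimal sketch of the sleep-shift decision, with an invented policy: sum each core's task amounts, migrate the lightest core's tasks to a neighbor, and shift that core to sleep.

```java
import java.util.*;

public class SleepShift {
    public static void main(String[] args) {
        List<List<Integer>> coreTasks = new ArrayList<>(List.of(
                new ArrayList<>(List.of(5, 3)),   // core 0: total 8
                new ArrayList<>(List.of(2)),      // core 1: total 2 (lightest)
                new ArrayList<>(List.of(4, 4)))); // core 2: total 8

        int lightest = 0;
        for (int c = 1; c < coreTasks.size(); c++)
            if (total(coreTasks.get(c)) < total(coreTasks.get(lightest))) lightest = c;

        // Migrate the lightest core's tasks, then put that core to sleep.
        int target = (lightest + 1) % coreTasks.size();
        coreTasks.get(target).addAll(coreTasks.get(lightest));
        coreTasks.get(lightest).clear();
        System.out.println("core " + lightest + " shifts to sleep mode");
    }

    static int total(List<Integer> tasks) {
        int sum = 0;
        for (int t : tasks) sum += t;
        return sum;
    }
}
```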
20160378544 | INTELLECTIVE SWITCHING BETWEEN TASKS - Methods, computer program products, and systems are presented. The methods include, for instance: identifying, by one or more processors, a current task; obtaining, by the one or more processors, an indicator of a commencement of a switching event, where the switching event includes a transition originating from the current task and concluding at a new task; and obtaining, by the one or more processors, behavior analysis data relating to a plurality of past switching events, where each past switching event includes a transition originating from the current task and concluding at a target task. The behavior analysis data includes a timestamp for each past switching event. The methods also include determining, by the one or more processors, based on the behavior analysis data, at least one recommended task, where the at least one recommended task includes at least one target task. | 12-29-2016 |
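The recommendation step can be sketched with an invented schema: each past switching event records its origin, target, and timestamp, and transitions out of the current task are scored with a recency weight.

```java
import java.util.*;

public class TaskSwitchRecommender {
    record SwitchEvent(String from, String to, long timestamp) {}

    static Optional<String> recommend(String currentTask, List<SwitchEvent> history, long now) {
        Map<String, Double> scores = new HashMap<>();
        for (SwitchEvent e : history) {
            if (!e.from().equals(currentTask)) continue;
            double ageHours = (now - e.timestamp()) / 3_600_000.0;
            scores.merge(e.to(), 1.0 / (1.0 + ageHours), Double::sum); // recency-weighted
        }
        return scores.entrySet().stream()
                     .max(Map.Entry.comparingByValue())
                     .map(Map.Entry::getKey);
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        List<SwitchEvent> history = List.of(
                new SwitchEvent("email", "calendar", now - 3_600_000L),
                new SwitchEvent("email", "report", now - 86_400_000L));
        System.out.println(recommend("email", history, now).orElse("none")); // calendar
    }
}
```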
20160378545 | METHODS AND ARCHITECTURE FOR ENHANCED COMPUTER PERFORMANCE - Methods and systems for enhanced computer performance improve software application execution in a computer system using, for example, a symmetrical multi-processing operating system with OS kernel services in the kernel space of main memory. Groups of related applications are placed in isolated areas in user space, such as containers, and a reduced, application-group-specific set of resource management services is stored with each application group in user space and used, rather than the OS kernel facilities in kernel space, to manage shared resources during execution of an application, process, or thread from that group. The reduced sets of resource management services may be optimized for the group stored therewith. Execution of each group may be exclusive to a different core of a multi-core processor, and multiple groups may therefore execute separately and simultaneously on the different cores. | 12-29-2016 |
20160378549 | Goal-Oriented, Socially-Connected, Task-Based, Incentivized To-Do List Application System and Method - A system may provide a socially connected application for managing a list of tasks assignable by one or more assignors to be performed by one or more assignees. The system may provide a multi-platform application including a method that, when executed on a processor, includes receiving a plurality of tasks to be performed, tracking their completion, and tracking points associated with successful completion. Tasks may be assigned and confirmed complete by assignors, and performed by assignees. The system may include a point value system enabling redemption upon completion of tasks having point or currency value. The point reward system may be integrated with other technologies to facilitate transfer of, for example, fiat or crypto currency, loyalty program points, desired product purchases on behalf of the assignee, or other benefits. The method may include receiving, by a processor, a plurality of tasks, and managing a point system associated with the completion of each task. The system may provide instructional steps for how to perform a task. The system may provide proof of accomplishment, initiation, or completion of a task by the assignees to the assignors. The system may provide a listing of tasks for which users can sign up in return for a specified bounty. The system may receive and/or provide an assessment of the completed task by the assignors. | 12-29-2016 |
20190146837 | DISTRIBUTED REAL-TIME COMPUTING FRAMEWORK USING IN-STORAGE PROCESSING | 05-16-2019 |