Entries |
Document | Title | Date |
20080201715 | METHOD AND SYSTEM FOR DYNAMICALLY CREATING AND MODIFYING RESOURCE TOPOLOGIES AND EXECUTING SYSTEMS MANAGEMENT FLOWS - The present invention replaces the prior art Systems Management Flow execution environments with a new Order Processing Environment. The Order Processing Environment consists of an Order Processing Container (“Container” for short), a Relationship Registry, and a Factory Registry. The Factory Registry supports creation of new resource instances. The Relationship Registry stores relationships between resources. The Container gets as input an Order and a start point address for the first resource. The Order is a document (e.g., XML) which includes a number of Tasks for each involved resource without arranging those tasks in a sequence. This differentiates Orders from workflow descriptions used by standard workflow engines. Each Task includes at least all input parameters for executing the Task. The sequence of the Task execution is derived by the Container by using the Relationship Registry which reflects all current Resource Topologies. | 08-21-2008 |
20080201716 | ON-DEMAND MULTI-THREAD MULTIMEDIA PROCESSOR - A device includes a multimedia processor that can concurrently support multiple applications for various types of multimedia such as graphics, audio, video, camera, games, etc. The multimedia processor includes configurable storage resources to store instructions, data, and state information for the applications and assignable processing units to perform various types of processing for the applications. The configurable storage resources may include an instruction cache to store instructions for the applications, register banks to store data for the applications, context registers to store state information for threads of the applications, etc. The processing units may include an arithmetic logic unit (ALU) core, an elementary function core, a logic core, a texture sampler, a load control unit, a flow controller, etc. The multimedia processor allocates a configurable portion of the storage resources to each application and dynamically assigns the processing units to the applications as requested by these applications. | 08-21-2008 |
20080209427 | Hardware Register Access Via Task Tag Id - A computer-based software task management system ( | 08-28-2008 |
20080209428 | RESOURCE GOVERNOR CONFIGURATION MODEL - A database can have multiple requests applied at one time. Each of these requests requires a specific amount of server resources. There can be a differentiation of user-submitted workloads between each other. These workloads are a set of queries submitted by different users. Each query can have specific resource limits. In addition, each set can have specific resource limits. | 08-28-2008 |
20080209429 | METHODS AND SYSTEMS FOR MANAGING RESOURCES IN A VIRTUAL ENVIRONMENT - An embodiment relates generally to a method of managing resources in a virtual environment. The method includes detecting an instantiation of a virtual machine and determining a delay value based on a unique identifier. The method also includes delaying an initiation of at least one support process for the virtual machine by the delay value. | 08-28-2008 |
20080209430 | SYSTEM, APPARATUS, AND METHOD FOR FACILITATING PROVISIONING IN A MIXED ENVIRONMENT OF LOCALES - A system, a computer program product, and a method capable of dynamically and flexibly supporting a plurality of locales upon provisioning are provided. A management server connected via a network to a plurality of processing resources each set with a locale includes a storage unit to store processing, a locale, and a set of instructions corresponding to the processing and the locale, and a selection unit to select a set of instructions associated with required processing and a required locale by referring to the storage unit, and it further includes a determination unit to dynamically determine the required processing and the processing resource by way of provisioning, and the storage unit stores the plurality of processing resources and each locale. | 08-28-2008 |
20080209431 | System and method for routing tasks to a user in a workforce - A routing system and method efficiently routes tasks to users who are members of a large and geographically diverse workforce. Generally, limited information is known about each user's skills and behavioral factors. Based on a profile containing the known information about a user, a task is efficiently allocated and routed to a user by matching attributes of the task to the profile using a neural network and a stochastic model. Feedback is collected by the routing system based on the user's handling of the task and on whether a solution provided by the user was accepted. Over time, as more feedback is collected, the profile and/or the neural network are refined which allows for more efficient routing of future tasks. | 08-28-2008 |
20080209432 | COMPUTER IMPLEMENTED METHOD AND SYSTEM FOR SHARING RESOURCES AMONG HIERARCHICAL CONTAINERS OF RESOURCES - Computer implemented method, system and computer usable program code for sharing resources among a plurality of containers in a data processing system. A computer implemented method includes creating a shared container for at least one resource to be shared. Then the at least one resource to be shared is moved from an original container of the at least one resource to the shared container, and a link is created between the original container and the at least one resource to be shared in the shared resource container. A link can also be created between a subject resource container and a shared resource in the shared resource container to enable the subject resource container to access and use the shared resource. A shared resource can also be removed from the shared resource container and returned to an original resource container when sharing of the resource is no longer desired. | 08-28-2008 |
20080209433 | Adaptive Reader-Writer Lock - A method and computer system for dynamically selecting an optimal synchronization mechanism for a data structure in a multiprocessor environment. The method determines a quantity of read-side and write-side acquisitions, and evaluates the data to determine an optimal mode for efficiently operating the computer system while maintaining reduced overhead. The method incorporates data received from the individual units within a central processing system, the quantity of write-side acquisitions in the system, and data which has been subject to secondary measures, such as formatives of digital filters. The data subject to secondary measures includes, but is not limited to, a quantity of read-side acquisitions, a quantity of write-side acquisitions, and a quantity of read-hold durations. Based upon the individual unit data and the system-wide data, including the secondary measures, the operating system may select the most efficient synchronization mechanism from among the mechanisms available. Accordingly, efficiency of a computer system may be enhanced with the ability to selectively choose an optimal synchronization mechanism based upon selected and calculated parameters. | 08-28-2008 |
20080216081 | System and Method For Enforcing Future Policies in a Compute Environment - The invention relates to a system, method and computer-readable medium, as well as grids and clusters managed according to the method described herein. An example embodiment relates to a method of processing a request for resources within a compute environment. The method is practiced by a system that contains modules configured or programmed to carry out the steps of the invention. The system receives a request for resources, generates a credential map for each credential associated with the request, the credential map comprising a first type of resource mapping and a second type of resource mapping. The system generates a resource availability map, generates a first composite intersecting map that intersects the resource availability map with a first type of resource mapping of all generated credential maps and generates a second composite intersecting map that intersects the resource availability map and a second type of resource mapping of all the generated credential maps. With the first and second composite intersecting maps, the system can allocate resources within the compute environment for the request based on at least one of the first composite intersecting map and the second composite intersecting map. The allocation or reservation for the request can then be made in an optimal way, for example at the earliest time possible based on available resources, while maintaining the constraints on the requestor. | 09-04-2008 |
20080216082 | Hierarchical Resource Management for a Computing Utility - This invention provides for the hierarchical provisioning and management of a computing infrastructure which is used to provide computing services to the customers of the service provider that operates the infrastructure. Infrastructure resources can include those acquired from other service providers. The invention provides an architecture for hierarchical management of computing infrastructures. It allows the dynamic provisioning and assignment of resources to computing environments. Customers can have multiple computing environments within their domain. The service provider shares its resources across multiple customer domains and arbitrates on the use of resources between and within domains. The invention enables resources to be dedicated to a specific customer domain or to a specific computing environment. Customers can specify acquisition and distribution policy which controls their use of resources within their domains. | 09-04-2008 |
20080216083 | MANAGING MEMORY RESOURCES IN A SHARED MEMORY SYSTEM - The memory used by individual users can be tracked and constrained without having to place all the work from individual users into separate JVMs. The net effect is that the ‘bursty’ nature of memory consumption by multiple users can be summed to result in a JVM which exhibits much less bursty memory requirements while at the same time allowing individual users to have relatively relaxed constraints. | 09-04-2008 |
20080216084 | MEASURE SELECTION PROGRAM, MEASURE SELECTION APPARATUS, AND MEASURE SELECTION METHOD - A combination of measures is selected to set a recovery time of a business to be equal to or shorter than a time objective when a predetermined event occurs. A dependency relationship is shown between an operation constituting the business and resources necessary to continue the operation. Scenario information holds the recovery time required for a recovery when the predetermined event occurs for each of the resources. Measure information holds measures for reducing the recovery time and effects of the respective measures for each of the resources. Paths connecting a highest node to a terminal node of the resources included in the operation element related information are extracted according to the dependency relationship; and the combination of measures is selected so that a recovery time sum of the respective resources is equal to or shorter than the time objective on all the paths extracted by the resource path extraction procedure. | 09-04-2008 |
20080216085 | System and Method for Virtual Adapter Resource Allocation - A method, computer program product, and distributed data processing system that enables host software or firmware to allocate virtual resources to one or more system images from a single physical I/O adapter, such as a PCI, PCI-X, or PCI-E adapter, is provided. Adapter resource groups are assigned to respective system images. An adapter resource group is exclusively available to the system image to which the adapter resource group assignment was made. Assignment of adapter resource groups may be made per a relative resource assignment or an absolute resource assignment. In another embodiment, adapter resource groups are assigned to system images on a first come, first served basis. | 09-04-2008 |
20080222641 | Executing applications - An application executing apparatus including at least one execution resource configured to execute at least one application is disclosed. The apparatus is provided with at least one processor configured to detect events triggering execution of the at least one application and to dynamically control use of the at least one execution resource in handling of the detected events based on a variable reflective of the operating conditions of the apparatus. | 09-11-2008 |
20080222642 | Dynamic resource profiles for clusterware-managed resources - Allowing for resource attributes that may change dynamically while the resource is in use, provides for dynamic changes to the manner in which such resources are managed. Management of dynamic resource attributes by clusterware involves new entry points to clusterware agent modules, through which resource-specific user-specified instructions for discovering new values for resource attributes, and for performing a user-specified action in response to the new attribute values, are invoked. A clusterware policy manager may know ahead of time that a particular resource has dynamic attributes or may be notified when a resource's dynamic attribute has changed and, periodically or in response to the notification, request that the agent invoke the particular resource-specific instructions for discovering new values for attributes for the particular resource and/or for performing a user-specified action in response to the new attribute values. During the majority of this process, the resource remains available. | 09-11-2008 |
20080222643 | COMPUTING DEVICE RESOURCE SCHEDULING - Systems and methods for scheduling computing device resources include a scheduler that maintains multiple queues. Requests are placed in one of the multiple queues depending on how much resource time the requests are to receive and when they are to receive it. The queue that a request is placed into depends on a pool bandwidth defined for a pool that includes the request and a bandwidth request. A request has an importance associated therewith that is taken into account in the scheduling process. The scheduler proceeds through the queues in a sequential and circular fashion, taking a work item from a queue for processing when that queue is accessed. | 09-11-2008 |
20080222644 | RISK-MODULATED PROACTIVE DATA MIGRATION FOR MAXIMIZING UTILITY IN STORAGE SYSTEMS - The embodiments of the invention provide a method, computer program product, etc. for risk-modulated proactive data migration for maximizing utility. More specifically, a method of planning data migration for maximizing utility of a storage infrastructure that is running and actively serving at least one application includes selecting a plurality of potential data items for migration and selecting a plurality of potential migration destinations to which the potential data items can be moved. Moreover, the method selects a plurality of potential migration speeds at which the potential data items can be moved and selects a plurality of potential migration times at which the potential data items can be moved to the potential data migration destinations. The selecting of the plurality of potential migration speeds selects a migration speed below a threshold speed, wherein the threshold speed defines a maximum system utility loss permitted. | 09-11-2008 |
20080222645 | Process Execution Management Based on Resource Requirements and Business Impacts - Techniques are presented for managing execution of processes on a data processing system. The data processing system comprises process instances that are each an execution of a corresponding process. Each process instance comprises activity instances. Business impacts are determined for the process instances, the activity instances, or both. Order of execution of the activity instances is managed by allocating resources to activity instances in order to achieve an objective defined in terms of the business impacts. In another embodiment, requests are received for the execution of the processes. For a given request, one or more of the operations of assigning, updating, aggregating, and weighting of first business impacts associated with the given request are performed to create second business impacts associated with the given request. Additionally, requests can be modified. Modification can include changing the process requested or process input as deemed appropriate, combining related requests into a single request, or both. Unmodified requests and any modified requests are managed. | 09-11-2008 |
20080229318 | Multi-objective allocation of computational jobs in client-server or hosting environments - A method of processing a computational job with a plurality of processors is disclosed. A request to process a job is received, where the job has a priority level associated with the job. A first group of the processors is designated as being available to process the job, where the number of processors in the first group is based on the priority level associated with the job. A second group of the processors is designated as being available to process the job, where for each processor in the second group a current utilization rate of the processor is less than a second predetermined utilization rate. Then, the job is processed with one or more of the processors selected from the first group of processors and the second group of processors. | 09-18-2008 |
20080229319 | Global Resource Allocation Control - Improved workload management is provided by introducing a global resource allocation control mechanism in a service layer, which may be located above or within the host operating system. The mechanism arbitrates how, when, and by which application resources of all types are being consumed. | 09-18-2008 |
20080229320 | Method, an apparatus and a system for controlling of parallel execution of services - According to an aspect of an embodiment, a method for controlling a plurality of nodes for executing a plurality of services, each of the services comprising a plurality of job nets which are to be executed sequentially, the method comprising: allocating at least one node for each of said services and initiating execution of said services by said nodes; obtaining weight information of job nets instantaneously executed for each of the services; and dynamically changing the allocation of the nodes for the services in accordance with the weight information. | 09-18-2008 |
20080229321 | QUALITY OF SERVICE SCHEDULING FOR SIMULTANEOUS MULTI-THREADED PROCESSORS - A method and system for providing quality of service guarantees for simultaneous multithreaded processors are disclosed. Hardware and operating system communicate with one another providing information relating to thread attributes for threads executing on processing elements. The operating system controls scheduling of the threads based at least partly on the information communicated and provides quality of service guarantees. | 09-18-2008 |
20080235699 | SYSTEM FOR PROVIDING QUALITY OF SERVICE IN LINK LAYER AND METHOD USING THE SAME - A system and method of providing a quality of service (QoS) is provided. The method of providing the QoS in the link layer includes receiving, by a stream providing device, minimum and maximum resource requirement information of a stream receiving device; transmitting, by the stream providing device, a reservation message including the minimum and maximum resource requirement information; allocating a resource, by at least one bridge, based on the reservation message transmitted from the stream providing device; and receiving, by the stream receiving device, a stream transmitted from the stream providing device via the resource. | 09-25-2008 |
20080235700 | Hardware Monitor Managing Apparatus and Method of Executing Hardware Monitor Function - A hypervisor OS includes a monitor context table in which plural monitor contexts each including monitor operation conditions and information concerning priority are set in order to set a hardware monitor function for monitoring operation states of plural physical processors that execute plural processes in parallel. The hypervisor OS causes the hardware monitor function to execute on a monitor context with high priority satisfying a monitor operation condition, for acquiring monitor data and outputting the monitor data together with timing data indicating time when the monitor operation condition is satisfied and outputs timing data indicating time when the monitor operation condition is satisfied, on a monitor context satisfying a monitor operation condition but having low priority. | 09-25-2008 |
20080235701 | ADAPTIVE PARTITIONING SCHEDULER FOR MULTIPROCESSING SYSTEM - A symmetric multiprocessing system includes multiple processing units and corresponding instances of an adaptive partition processing scheduler. Each instance of the adaptive partition processing scheduler selectively allocates the respective processing unit to run process threads of one or more adaptive partitions based on a comparison between merit function values of the one or more adaptive partitions. The merit function for a particular partition of the one or more adaptive partitions may be based on whether the adaptive partition has available budget on the respective processing unit. The merit function for a particular partition associated with an instance of the adaptive partition scheduler also, or in the alternative, may be based on whether the adaptive partition has available global budget on the symmetric multiprocessing system. | 09-25-2008 |
20080235702 | Componentized Automatic Provisioning And Management Of Computing Environments For Computing Utilities - The present invention provides systems, methods and apparatus for automatically provisioning and managing resources in a computing utility. Its automation procedures are based on a resource model which allows resource specific provisioning and management tasks to be encapsulated into components for reuse. These components are assembled into more complex structures and finally computing services. This invention provides a method for constructing a computing service from a set of resources given a high level specification. Once constructed, the service includes a component that provides management function, which can allow modification of its underlying set of resources. | 09-25-2008 |
20080235703 | On-Demand Utility Services Utilizing Yield Management - Techniques for provision of on-demand utility services utilizing a yield management framework are disclosed. For example, in one illustrative aspect of the invention, a system for managing one or more computing resources associated with a computing center comprises: (i) a resource management subsystem for managing the one or more computing resources associated with the computing center, wherein the computing center is able to provide one or more computing services in response to one or more customer demands; and (ii) a yield management subsystem coupled to the resource management subsystem, wherein the yield management subsystem optimizes provision of the one or more computing services in accordance with the resource management subsystem and the one or more computing resources. | 09-25-2008 |
20080244594 | VISUAL SCRIPTING OF WEB SERVICES FOR TASK AUTOMATION - Tasks are automated using assemblies of services. An interface component allows a user to collect services and to place selected services corresponding to a task to be automated onto a workspace. An analysis component performs an analysis of available data with regard to the selected services provided on the workspace and a configuration component automatically configures inputs of the selected services based upon the analysis of available data without intervention of the user. A dialog component is also provided to allow the user to contribute information to configure one or more of the inputs of the selected services. When processing is complete, an output component outputs a script that is executable to implement the task to be automated. | 10-02-2008 |
20080244595 | METHOD AND SYSTEM FOR CONSTRUCTING VIRTUAL RESOURCES - System for managing a life cycle of a virtual resource. One or more virtual resources are defined. The one or more defined virtual resources are created. The created virtual resources are instantiated. Then, a topology of a virtual resource is constructed using a plurality of virtual resources that are in at least one of a defined, a created, or an instantiated state. | 10-02-2008 |
20080244596 | COMPUTER PROGRAM PRODUCT AND SYSTEM FOR DEFERRING THE DELETION OF CONTROL BLOCKS - A computer program product and system are disclosed for deferring the deletion of resource control blocks from a resource queue within an information management system that includes a plurality of short-term processes and a plurality of long-term processes when each of the long term processes has unset a ‘resource in use’ control flag for that long term process, a ‘request deletion’ flag has been set by the information management system, and a predetermined amount of time has elapsed. | 10-02-2008 |
20080244597 | Systems and Methods for Recording Resource Association for Recording - Included are embodiments for determining an extension-to-channel mapping. At least one embodiment includes receiving first data associated with a communication from at least one communications device and receiving second data from a recording resource. Some embodiments include determining whether the at least one communications device is coupled to a recording resource. Some embodiments include matching the communications device to a recording resource and in response to matching, creating an association of the at least one communications device to the recording resource. | 10-02-2008 |
20080244598 | SYSTEM PARTITIONING TO PRESENT SOFTWARE AS PLATFORM LEVEL FUNCTIONALITY - Embodiments of apparatuses, methods for partitioning systems, and partitionable and partitioned systems are disclosed. In one embodiment, a system includes processors and a partition manager. The partition manager is to allocate a subset of the processors to a first partition and another subset of the processors to a second partition. The first partition is to execute first operating system level software and the second partition is to execute second operating system level software. The first operating system level software is to manage the processors in the first partition as resources individually accessible to the first operating system level software, and the second operating system level software is to manage the processors in the second partition as resources individually accessible to the second operating system level software. The partition manager is also to present the second partition, including the second operating system level software, to the first operating system level software as platform level functionality embedded in the system. | 10-02-2008 |
20080244599 | Master And Subordinate Operating System Kernels For Heterogeneous Multiprocessor Systems - Systems and methods establish communication and control between various heterogeneous processors in a computing system so that an operating system can run an application across multiple heterogeneous processors. With a single set of development tools, software developers can create applications that will flexibly run on one CPU or on combinations of central, auxiliary, and peripheral processors. In a computing system, application-only processors can be assigned a lean subordinate kernel to manage local resources. An application binary interface (ABI) shim is loaded with application binary images to direct kernel ABI calls to a local subordinate kernel or to the main OS kernel depending on which kernel manifestation is controlling requested resources. | 10-02-2008 |
20080244600 | Method and system for modeling and analyzing computing resource requirements of software applications in a shared and distributed computing environment - An application manager for enabling multiple applications to share resources in a shared and distributed computing environment. The disclosed system provides for the specification, representation and automatic analysis of resource requirements of applications in a shared and distributed computing environment. The application manager is provided with service specifications for each application, which defines the resource requirements necessary or preferred to run said application (or more precisely, its constituent application components). In addition, the resources may be required to have certain characteristics and constraints may be placed on the required resources. The application manager works in conjunction with a resource supply manager and requests the required resources be supplied for the application. If there are appropriate and sufficient available resources to meet the particular resource requirements, then the resources are allocated, and the application components mapped thereon. The disclosed system can enable the sharing of resources among multiple heterogeneous applications. The system can allow resource sharing without application source code access or any knowledge of the internal design of the application. Integration of an application can be re-used for other similar applications. Furthermore, the disclosed system enables the dynamic and efficient management of shared resources, providing an agile resource infrastructure adaptive to dynamic changes and failures. | 10-02-2008 |
20080244601 | Method and apparatus for allocating resources among backup tasks in a data backup system - Method and apparatus for allocating resources among backup tasks in a data backup system is described. One aspect of the invention relates to managing backup tasks in a computer network. An estimated resource utilization is established for each of the backup tasks based on a set of backup statistics. A resource reservation is allocated for each of the backup tasks based on the estimated resource utilization thereof. The resource reservation of each of the backup tasks is dynamically changed during performance thereof. | 10-02-2008 |
20080244602 | Method for task and resource management - A method is disclosed for managing one or more tasks or human resources. In one embodiment, the method receives one or more tasks. The method determines at least one task evaluation criteria value for each received one or more tasks. In addition, the method determines a task value associated with each received one or more tasks based on the determined at least one task evaluation criteria value. | 10-02-2008 |
20080244603 | Method for task and resource management - A method is disclosed for managing one or more tasks or human resources. In one embodiment, the method receives one or more first tasks. In addition, the method receives one or more first sets of skill information. Each of the one or more first sets of skill information includes at least one human resource skill and is associated with a human resource. The method further receives one or more second sets of skill information. Each of the one or more second sets of skill information includes at least one task skill and is associated with one of the one or more first tasks. Additionally, the method evaluates the received one or more first tasks, the received one or more first sets of skill information, and the received one or more second sets of skill information. Further, the method determines to request the human resource to add an associated human resource skill or increase an associated human resource skill level. | 10-02-2008 |
20080244604 | Method for task and resource management - A method is disclosed for task and human resource management. In one embodiment, the method stores a plurality of first tasks, each first task including at least one first task skill. In addition, the method receives a search request, the search request including at least one search request skill. The method determines, based on the one or more first tasks, if one or more of the at least one first task skills corresponds to the at least one search request skill. In addition, the method determines one or more second tasks when it is determined that one or more of the at least one first task skills corresponds to the at least one search request skill. The one or more second tasks are determined from the plurality of first tasks. The method provides the determined one or more second tasks to a human resource. Further, the method receives a request from the human resource to be associated with at least one of the determined one or more second tasks, and associates the human resource with the at least one of the determined one or more second tasks. | 10-02-2008 |
20080244605 | Method for task and resource management - A method is disclosed for task and human resource management. In one embodiment, the method determines a set of skill information. The set of skill information includes at least one task skill and is associated with a task. In addition, the method determines, from a set of one or more first human resources, one or more second human resources. The one or more second human resources have at least one human resource skill that corresponds to the at least one task skill. The method provides an indication of a task load for the determined one or more second human resources, and associates the task with at least one of the one or more second human resources based on the at least one human resource skill, the at least one task skill, and the indication of the task load. | 10-02-2008 |
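The skill-matching and load-aware assignment described in the three entries above can be sketched minimally. The data shapes and names here are illustrative assumptions, not the claimed method:

```python
def assign_task(task_skills, resources):
    """Assign a task to a qualified human resource with the lowest load.

    task_skills: set of skills the task requires.
    resources: list of (name, skill_set, task_load) tuples.
    Returns the chosen name, or None if nobody qualifies.
    """
    # a resource qualifies if its skills cover every required task skill
    qualified = [r for r in resources if task_skills <= r[1]]
    if not qualified:
        return None
    # among qualified resources, prefer the lightest current task load
    return min(qualified, key=lambda r: r[2])[0]
```

A search request (entry 20080244604) would run the same subset test in the other direction, filtering stored tasks whose skills match the request.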
20080244606 | Method and system for estimating resource provisioning - A method and system are described for estimating resource provisioning. An example method may include obtaining a workflow path including an external invocation node and respective groups of service nodes, node connectors, and hardware nodes, and including a directed ordered path indicating ordering of a flow of execution of services associated with the service nodes, from the external invocation node, to a hardware node, determining an indicator of a service node workload based on attribute values associated with a service node and an indicator of a propagated workload based on combining attribute values associated with the external invocation node and other service nodes or node connectors preceding the service node in the workflow path based on the ordering, and provisioning the service node onto a hardware node based on combining the indicator of the service node workload and an indicator of a current resource demand associated with the hardware node. | 10-02-2008 |
20080244607 | Economic allocation and management of resources via a virtual resource market - Allocating distributed computing resources comprises creating offers to provide the resources for use by application programs. Each offer specifies a performance characteristic and a value associated with a corresponding resource. Bids to obtain the resources for use by the application programs are created. Each bid specifies a service level required for operation of a corresponding application program and a value associated with operating the corresponding application program. Bids are matched to offers via a market exchange model by matching the service level requirement and value of each bid to the performance characteristic and value of one of the offers. Resources associated with each offer are allocated to the application program associated with a matching bid, and the application program's operations are migrated to the allocated resources. Resources are monitored to ensure compliance with the service level requirement of each bid, and non-complying resources are replaced via the market exchange model. | 10-02-2008 |
20080244608 | Multiprocessor system and access protection method conducted in multiprocessor system - In a conventional multiprocessor system, an access right with respect to a shared resource could not be changed in a flexible manner. The present invention provides a multiprocessor system having a first processor element (PE-A) and a second processor element (PE-B), the first processor element (PE-A) and the second processor element (PE-B) independently executing a program, in which the first processor element (PE-A) includes: a central processing unit (CPUa) for performing an operation processing based upon the program; a shared resource ( | 10-02-2008 |
20080244609 | ASSURING RECOVERY OF TEMPORARY RESOURCES IN A LOGICALLY PARTITIONED COMPUTER SYSTEM - A capacity manager provides temporary resources on demand in a manner that assures the temporary resources may be recovered when the specified resource-time expires. Access to minimum resource specifications corresponding to the logical partitions is controlled to prevent the sum of all minimum resource specifications from exceeding the base resources on the system. By assuring the sum of minimum resource specifications for all logical partitions is satisfied by the base resources on the system, the temporary resources may always be recovered when required. | 10-02-2008 |
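The invariant described in the entry above — the sum of all partitions' minimum resource specifications must never exceed the base resources, so temporary resources can always be recovered — can be sketched as a guard function. All names are illustrative assumptions:

```python
def can_set_minimum(minimums, base_resources, partition, new_min):
    """Permit a change to one partition's minimum resource specification
    only if the sum of all minimums still fits within the base (owned,
    non-temporary) resources, so temporary resources stay recoverable."""
    proposed = dict(minimums)          # copy so the check has no side effect
    proposed[partition] = new_min
    return sum(proposed.values()) <= base_resources
```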
20080244610 | Method and Apparatus for Dynamic Device Allocation for Managing Escalation of On-Demand Business Processes - Resource allocation techniques are provided for use in managing escalation of on-demand business processes. For example, in one aspect of the invention, a technique for managing escalation of a business process comprises the following steps/operations. A request is obtained from a business process, the business process having one or more tasks associated therewith. The one or more tasks are mapped to one or more roles. One or more available resources are allocated for the one or more roles. At least one communication session is launched such that data associated with the business process may be transferred to the one or more allocated resources. | 10-02-2008 |
20080250416 | Linking of Scheduling Systems for Appointments at Multiple Facilities - Scheduling systems for scheduling appointments on multiple sites need to be linked, if such systems use different databases. The activity to be performed by the performing site during the appointment may be given by a requesting code, specific for the requesting site. If the activity can be performed at the requesting site, i.e. the requesting site and the performing site are identical, then this “requesting code” may define that one or more resources are required for performing the scheduled appointment at the requesting site. The availability of these resources can be fetched from one or more databases coupled to the requesting site. If the performing site is different from the requesting site, the requesting code used at the performing site for the activity may be different from the requesting code used at the requesting site, and different resources may be requested by the performing site. The availability of these different resources may be stored in one or more databases, different from the databases for resources at the requesting site. In the latter case, both the requesting site and the performing site keep records of the scheduled appointment e.g. in a respective database. If a person, for whom the appointment is made, is known at the requesting or performing site or both, person occupation checking may be done at either site or both. | 10-09-2008 |
20080250417 | Application Management Support System and Method - A first information resource denoting which logical volume is allocated to which application program is prepared in a management computer. The management computer either regularly or irregularly acquires from the storage system information as to which logical volumes were updated at what times, registers same in a second information resource, references the first and second information resources, acquires update management information, which is information denoting which logical volume is updated at what time, and the application program to which this logical volume is allocated, and sends this update management information to a host computer. The host computer, based on the update management information from the management computer, displays which logical volume has been updated at what time, and which application program is allocated to this logical volume. | 10-09-2008 |
20080250418 | HEALTH CARE ADMINISTRATION SYSTEM - Embodiments of the present invention provide systems and methods for managing an event in a health care organization, the method comprising standardizing, during a design phase, a workflow associated with an event; and executing, during an executing phase, the workflow to complete a procedure associated with the event. Other embodiments may be described and claimed. | 10-09-2008 |
20080250419 | METHOD AND SYSTEM FOR MANAGING RESOURCE CONNECTIONS - Methods and system for managing resource connections are described. In one embodiment, a user request associated with a centralized resource may be received. Availability of a connection to the centralized resource may be determined. A stagger delay for connection creation may be determined. The stagger delay may define a delay for creation of a new connection. The new connection to the centralized resource may be created based on the determining of whether the connection to the centralized resource is available and the delay interval. The new connection may be utilized to process the user request. | 10-09-2008 |
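The stagger-delay idea in the entry above — delaying creation of each new connection to a centralized resource while reusing available ones immediately — can be sketched as a small pool. The class, policy, and parameter names are illustrative assumptions, not the patented design:

```python
import time

class StaggeredPool:
    """Pool for connections to a centralized resource; creation of new
    connections is staggered so a burst of requests does not open many
    connections at once."""

    def __init__(self, connect, stagger_delay=0.05):
        self._connect = connect        # factory producing a new connection
        self._delay = stagger_delay    # minimum seconds between creations
        self._idle = []                # released, reusable connections
        self._last_create = 0.0

    def acquire(self):
        if self._idle:                 # an available connection needs no delay
            return self._idle.pop()
        # stagger: wait out the remainder of the delay since the last creation
        wait = self._delay - (time.monotonic() - self._last_create)
        if wait > 0:
            time.sleep(wait)
        self._last_create = time.monotonic()
        return self._connect()

    def release(self, conn):
        self._idle.append(conn)
```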
20080250420 | Jobstream Planner Considering Network Contention & Resource Availability - Disclosed is a computer-implemented planning process that aids a system administrator in the task of creating a job schedule. The process treats enterprise computing resources as a grid of resources, which provides greater flexibility in assigning resources to jobs. During the planning process, an administrator or other user, or software, builds a job-dependency tree. Jobs are then ranked according to priority, pickiness, and network centricity. Difficult and problematic jobs are then assigned resources and scheduled first, with less difficult jobs assigned resources and scheduled afterwards. The resources assigned to the most problematic jobs are then changed iteratively to determine if the plan improves. This iterative approach not only increases the efficiency of the original job schedule, but also allows the planning process to react and adapt to new, ad-hoc jobs, as well as unexpected interruptions in resource availability. | 10-09-2008 |
20080256545 | Systems and methods of managing resource utilization on a threaded computer system - Embodiments of the invention relate generally to incremental computing. Specifically, embodiments of the invention include systems and methods for the concurrent processing of multiple, incremental changes to a data value while at the same time monitoring and/or enforcing threshold values for that data value. Embodiments of the invention also include systems and methods of managing utilization of a resource of a computer system having a number of threads. | 10-16-2008 |
20080256546 | Method for Allocating Programs - In one embodiment, a method for allocating programs to resources suited to operating conditions thereof comprises generating composition management information for a plurality of resources based on management information relating to performance and capacity of each of the resources. The composition management information includes identification information for the resources used by a plurality of programs. The method further comprises searching for and locating the composition management information of a resource identified by the identification information for each of the programs, based on the composition management information of the resources, and generating program information which associates composition management information of each of the programs with the composition management information of the located resource; and outputting information indicating that a resource abnormality has occurred with one of the programs, in cases where the composition management information of the resource which is associated with the program in the program information corresponds to one or more rules for detecting a resource abnormality in the program. | 10-16-2008 |
20080256547 | Method and System For Managing a Common Resource In a Computing System - The invention, in one embodiment, provides a method for acquiring and releasing a lock over a common resource in a computing system. After a lock has been acquired over a common resource, a determination ( | 10-16-2008 |
20080263556 | REAL-TIME SYSTEM EXCEPTION MONITORING TOOL - Techniques for monitoring resources of a computer system are provided. A monitoring process collects and reports utilization data for one or more resources of a computer system, such as CPU, memory, disk I/O, and network I/O. Instead of reporting just an average of the collected data over a period of time (e.g., 10 seconds), the monitoring process at least reports individually collected resource utilization values. If one or more of the utilization values exceed specified thresholds for the respective resources, then an alert may be generated. In one approach, the monitoring process is made a real-time priority process in the computer system to ensure that the memory used by the monitoring process is not swapped out of memory. Also, being a real-time priority process ensures that the monitoring process obtains a CPU in order to collect resource utilization data even when the computer system is in a starvation mode. | 10-23-2008 |
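The point of the entry above — alerting on individually collected utilization values rather than only on the period average, which can hide short spikes — can be shown in a few lines. The function name and sample format are illustrative assumptions:

```python
def utilization_alerts(samples, threshold):
    """Flag each individually collected utilization value that exceeds
    the threshold; a period average can hide short spikes entirely.
    Returns (index, value) pairs for the samples that breach it."""
    return [(i, v) for i, v in enumerate(samples) if v > threshold]
```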
20080263557 | SCHEDULING METHOD AND SYSTEM, CORRESPONDING COMPUTATIONAL GRID AND COMPUTER PROGRAM PRODUCT - A scheduler device schedules executions of jobs using resources of a computational grid. The scheduler is configured for identifying an equilibrium threshold between resources and jobs. Below the equilibrium threshold, the scheduler schedules the execution of the jobs using the resources of the computational grid according to Pareto-optimal strategies. Above the equilibrium threshold, the scheduler schedules the execution of the jobs using the resources of the computational grid according to Nash-equilibrium strategies. | 10-23-2008 |
20080263558 | METHOD AND APPARATUS FOR ON-DEMAND RESOURCE ALLOCATION AND JOB MANAGEMENT - The invention is a method and apparatus for on-demand resource planning for unified messaging services. In one embodiment, multiple clients are served by a single system, and existing system resources are allocated among all clients in a manner that optimizes system output and service provider profit without the need to increase system resources. In one embodiment, resource allocation and job scheduling are guided by individual service level agreements between the service provider and the clients that dictate minimum service levels that must be achieved by the system. Jobs are processed in a manner that at least meets the specified service levels, and the benefit or profit derived by the service provider is maximized by prioritizing incoming job requests within the parameters of the specified service levels while meeting the specified service levels. Thus, operation and hardware costs remain substantially unchanged, while system output and profit are maximized. | 10-23-2008 |
20080263559 | METHOD AND APPARATUS FOR UTILITY-BASED DYNAMIC RESOURCE ALLOCATION IN A DISTRIBUTED COMPUTING SYSTEM - In one embodiment, the present invention is a method for allocation of finite computational resources amongst multiple entities, wherein the method is structured to optimize the business value of an enterprise providing computational services. One embodiment of the inventive method involves establishing, for each entity, a service level utility indicative of how much business value is obtained for a given level of computational system performance. The service-level utility for each entity is transformed into a corresponding resource-level utility indicative of how much business value may be obtained for a given set or amount of resources allocated to the entity. The resource-level utilities for each entity are aggregated, and new resource allocations are determined and executed based upon the resource-level utility information. The invention is thereby capable of making rapid allocation decisions, according to time-varying need or value of the resources by each of the entities. | 10-23-2008 |
20080263560 | STRUCTURE FOR SECURING LEASED RESOURCES ON A COMPUTER - A design structure embodied in a machine readable storage medium for designing, manufacturing, and/or testing a design is for securing of leased resources on a computer. The design structure includes a computer for securing resources may comprise at least one processor, a plurality of resources, wherein each resource is associated with configuration data and a programmable logic device connected to each of the plurality of resources. The programmable logic device may be configured for determining whether a resource is leased, reading un-encoded configuration data from a resource, and sending the configuration data to a first unit, if the resource is not leased. The programmable logic device may further be configured for reading encoded configuration data from a resource, decoding the configuration data, sending the configuration data that was decoded to a first unit, and logging use of the resource by the first unit, if the resource is leased. | 10-23-2008 |
20080263561 | Information processing apparatus, computer and resource allocation method - The present invention provides a new resource allocation technique that allows each partition to reliably and automatically, without manual intervention, use a proper amount of resources in accordance with its load, in a structure in which the inside of a computer is divided into a plurality of partitions and each partition performs data processing using the allocated resources. A storage unit stores, for each partition, schedule information describing what amount of resources is allocated to which time range of which period or time of day. Because the usage of resources can often be figured out in advance, the present invention obtains from the storage unit the amount of resources stored in association with the time range to which the current time belongs, and controls each partition so that it uses the obtained amount of resources to perform data processing. | 10-23-2008 |
20080271030 | Kernel-Based Workload Management - A method for managing workload in a computing system comprises performing automated workload management arbitration for a plurality of workloads executing on the computing system, and initiating the automated workload management arbitration from a process scheduler in a kernel. | 10-30-2008 |
20080271031 | Resource Partition Management in Kernel Space - A method for managing resources in a computing system comprises providing a process initiation function which initiates a process and executing from a kernel an application manager that places the process into a resource partition at process initiation. | 10-30-2008 |
20080271032 | Data Processing Network - A grid type network comprising a grid controller for receiving data in the form of a queue from a database. The grid controller is arranged to divide the data into a plurality of batches and dispatch the batches between a plurality of terminals which may be registered with the grid controller. Each terminal is registered on the basis that it contains a processing unit which is usually in an idle state. The terminals are also provided with processing logic related to the processing to be carried out on the batches. The plurality of terminals perform the processing on the batches and on completion, the database is updated with processed data. | 10-30-2008 |
20080271033 | INFORMATION PROCESSOR AND INFORMATION PROCESSING SYSTEM - According to one embodiment, an information processing apparatus in which software resources are divided into first through N-th groups each of which has an operating system, a program operating on the operating system, and data, includes an execution section configured to simultaneously execute the groups with the groups isolated from one another, an OS activating section configured to operate on the operating system of the first group and activate the operating system of at least one of the second through N-th groups according to activation information, an activation information changing section configured to make communication with an administrative server over a network and change the activation information in response to an instruction from the administrative server, and a lock section configured to disable the operating system and the program of each of the second through N-th groups to change the activation information. | 10-30-2008 |
20080271034 | RESOURCE ALLOCATION SYSTEM, RESOURCE ALLOCATION METHOD, AND RESOURCE ALLOCATION PROGRAM - Disclosed is a resource allocation system including a provisional allocation execution unit that executes provisional allocation for policies other than a policy corresponding to an accepted source request, a shared resource extraction unit that extracts a resource sharable between the policy and other policies, and a determination index calculation unit that calculates an index that depends on resource sharability, and determines an allocation destination so that a storage area is allocated on a storage device with a lower resource sharability in preference to other storage devices. | 10-30-2008 |
20080271035 | Control Device and Method for Multiprocessor - A multiprocessor control device according to an example of the invention comprises a selection unit which, on the basis of an execution schedule for tasks to be allocated to any one of processor elements, selects, for each of the processor elements, any one of a normal mode used in a task execution time, a first mode which is used when a task is not executed and in which a power consumption is reduced more than in the normal mode, and a second mode which is used when the task is not executed and which has a greater power consumption reducing effect but a longer mode switching time than the first mode, and a mode control unit which performs control according to the mode selected by the selection unit for each of the processor elements. | 10-30-2008 |
20080271036 | METHOD AND APPARATUS FOR ASSIGNING FRACTIONAL PROCESSING NODES TO WORK IN A STREAM-ORIENTED COMPUTER SYSTEM - An apparatus and method for making fractional assignments of processing elements to processing nodes for stream-based applications in a distributed computer system includes determining an amount of processing power to give to each processing element. Based on a list of acceptable processing nodes, a determination of fractions of which processing nodes will work on each processing element is made. To update allocations of the amount of processing power and the fractions, the process is repeated. | 10-30-2008 |
20080276243 | Resource Management Platform - In client-server architectures, systems and methods for implementing an extensible resource management platform at a server are described. The extensible resource management platform is developed based on a plug-in based architecture which includes one or more subsystems for performing functions associated with resource management. Different implementations can be provided by new or different components or plug-ins. The resource management platform is thus a platform over which one or more functionalities can be further added to supplement existing and varying functions. | 11-06-2008 |
20080276244 | SYSTEM AND METHOD FOR ADAPTIVELY COLLECTING PERFORMANCE AND EVENT INFORMATION - A method for communicating information from a first computing node to at least one of the following: a storage device and a second computing node. The first computing node is monitored to collect at least one estimate of available resources, and based on this estimate, an amount of data collected is modified. Then, the modified data is sent to at least one of the following: the storage device and the second computing node. This invention also provides for the determination of an optimum batch size for aggregating data wherein, for a number of batch sizes, costs are estimated for sending batched information to persistent storage and for losing batched data. Then, the optimum batch size is selected from the number of different batch sizes based on sums of these costs. This invention also provides for selective compression of data, wherein it is determined which of a number of compression algorithms do not incur an overhead that exceeds available resources. Then, one of the determined algorithms is selected to maximize compression. | 11-06-2008 |
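The batch-size selection described in the entry above — estimating, for each candidate size, the cost of sending batched data to persistent storage and the cost of losing a batch, then taking the size with the lowest sum — reduces to a one-line minimization. The cost functions below are illustrative assumptions:

```python
def optimal_batch_size(candidate_sizes, send_cost, loss_cost):
    """Choose the batch size whose estimated cost of sending batched data
    to persistent storage plus estimated cost of losing a batch is lowest."""
    return min(candidate_sizes, key=lambda b: send_cost(b) + loss_cost(b))
```

For example, if per-batch send overhead amortizes over the batch (falling in b) while a lost batch loses b items (rising in b), the optimum sits where the two costs balance.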
20080276245 | Optimization with Unknown Objective Function - Nonlinear optimization is applied to resource allocation, as for example, buffer pool optimization in computer database software where only the marginal utility is known. The method for allocating resources comprises the steps of starting from an initial allocation, calculating the marginal utility of the allocation, calculating the constraint functions of the allocation, and applying this information to obtain a next allocation and repeating these steps until a stopping criteria is satisfied, in which case a locally optimal allocation is returned. | 11-06-2008 |
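The iteration in the entry above — start from an initial allocation, query only the marginal utility, apply the constraints, and step until a stopping criterion holds — can be sketched as projected gradient ascent under a single budget constraint. This is a sketch under stated assumptions (nonnegative allocations summing to a fixed total), not the patented procedure:

```python
def allocate(marginal_utility, total, n, steps=1000, lr=0.05):
    """Gradient ascent on an objective known only through its marginal
    utility (gradient), keeping the allocation nonnegative and summing
    to `total` by projecting each step onto the budget constraint."""
    x = [total / n] * n                        # start from an even split
    for _ in range(steps):
        g = marginal_utility(x)
        mean_g = sum(g) / n                    # subtracting the mean keeps
        x = [max(xi + lr * (gi - mean_g), 0.0) # the step on the budget plane
             for xi, gi in zip(x, g)]
        s = sum(x)                             # re-normalize after clipping
        x = [xi * total / s for xi in x]
    return x
```

At an interior optimum the marginal utilities equalize across resources, so the step vanishes (the classic water-filling condition).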
20080276246 | SYSTEM FOR YIELDING TO A PROCESSOR - An apparatus and program product for coordinating the distribution of CPUs as among logically-partitioned virtual processors. A virtual processor may yield a CPU to precipitate an occurrence upon which its own execution may be predicated. As such, program code may dispatch the surrendered CPU to a designated virtual processor. | 11-06-2008 |
20080282252 | HETEROGENEOUS RECONFIGURABLE AGENT COMPUTE ENGINE (HRACE) - A computing system ( | 11-13-2008 |
20080282253 | METHOD OF MANAGING RESOURCES WITHIN A SET OF PROCESSES - A workload management system where processes associated with a class have resource management strategies that are specific to that class is provided. The system includes more than one class, with at least one unique algorithm for executing a workload associated with each class. Each algorithm may comprise a strategy for executing a workload that is specific to that class and the algorithms of one class may be completely unrelated to the algorithms of another class. The workload management system allows workloads with different attributes to use system resources in ways that best benefit a workload, while maximizing usage of the system's resources and with minimized degradation to other workloads running concurrently. | 11-13-2008 |
20080288949 | Interprocess Resource-Based Dynamic Scheduling System and Method - A method and system for scheduling tasks in a processing system. In one embodiment, the method comprises processing tasks from a primary work queue, wherein the tasks consume resources that are operable to be released. Whenever the volume of resources that have been consumed exceeds a threshold, the processor executes tasks from a secondary work queue for a period of time. The secondary work queue is comprised of tasks from the primary work queue that can release the resources; the secondary work queue can be sorted according to the volume of resources that can be released. | 11-20-2008 |
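The two-queue scheme in the entry above can be sketched concretely: tasks consume or release resources, and once consumption passes a threshold the scheduler switches for a bounded period to a secondary queue holding the releasing tasks, sorted by the volume they can release. Task shapes and parameters are illustrative assumptions:

```python
from collections import deque

def schedule(tasks, threshold, burst=1):
    """Run tasks from a primary queue; each task is (name, delta), where a
    positive delta consumes resources and a negative delta releases them.
    Once consumption exceeds `threshold`, run up to `burst` releasing
    tasks drawn from a secondary queue sorted by how much they release."""
    primary = deque(tasks)
    order, used = [], 0
    while primary:
        if used > threshold:
            # secondary queue: releasing tasks, biggest release first
            secondary = sorted((t for t in primary if t[1] < 0),
                               key=lambda t: t[1])
            if secondary:
                for t in secondary[:burst]:
                    primary.remove(t)
                    used += t[1]
                    order.append(t[0])
                continue
        name, delta = primary.popleft()
        used += delta
        order.append(name)
    return order
```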
20080288950 | Concurrent Management of Adaptive Programs - A method for concurrent management of adaptive programs is disclosed wherein changes in a set of modifiable references are initially identified. A list of uses of the changed references is next computed using records made in structures of the references. The list is next inserted into an elimination queue. Comparison is next made of each of the uses to the other uses to determine independence or dependence thereon. Determined dependent uses are eliminated and the preceding steps are repeated for all determined independent uses until all dependencies have been eliminated. | 11-20-2008 |
20080288951 | Method, Device And System For Allocating A Media Resource - A method and system for allocating a media resource and a device for controlling a media resource. The method for allocating a media resource includes: allocating the media resource processing devices for a resource operation request based on the stored ability information of the various media resource processing devices when the resource operation request is received; and updating the stored ability information of the media resource processing device dynamically. The device for controlling a media resource includes: a memory unit adapted to store the ability information of various media resource processing devices; an allocation unit adapted to allocate media resource processing devices for the resource operation request based on the ability information stored in the memory unit; a dynamic update unit adapted to update the ability information of the media resource processing device stored in the memory unit dynamically. | 11-20-2008 |
20080295106 | METHOD AND SYSTEM FOR IMPROVING THE AVAILABILITY OF A CONSTANT THROUGHPUT SYSTEM DURING A FULL STACK UPDATE | 11-27-2008 |
20080295107 | Adaptive Thread Pool | 11-27-2008 |
20080295108 | Minimizing variations of waiting times of requests for services handled by a processor | 11-27-2008 |
20080295109 | METHOD AND APPARATUS FOR REUSING COMPONENTS OF A COMPONENT-BASED SOFTWARE SYSTEM | 11-27-2008 |
20080301688 | METHOD, SYSTEM, AND PROGRAM PRODUCT FOR ALLOCATING A RESOURCE - The invention provides a method, system, and program product for allocating a resource among a plurality of groups based on the role of each group within an organizational model. A method according to the invention may include, for example, granting a number of groups a privilege to bid on a resource, the privilege being based on a role of each group within an organizational model, accepting a bid for the resource from one or more of the groups, determining whether two or more groups have made equal, highest bids, in such a case, accepting a second bid from the groups having made equal, highest bids, and awarding a right to the resource to the group making the highest bid for the resource. | 12-04-2008 |
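The tie-break step in the entry above — award to the highest bidder, but if several groups tie at the top, take a second bid from only the tied groups — can be sketched in a few lines. Names and the two-round data shape are illustrative assumptions:

```python
def award_resource(first_bids, second_bids):
    """Award the resource to the highest first-round bidder; if several
    groups tie at the top, decide among them by a second round of bids.

    first_bids / second_bids: dicts mapping group name to bid value.
    """
    top = max(first_bids.values())
    tied = [g for g, b in first_bids.items() if b == top]
    if len(tied) == 1:
        return tied[0]
    # second round restricted to the groups that tied for highest
    return max(tied, key=lambda g: second_bids[g])
```

A fuller version would also check each group's bidding privilege against its role in the organizational model before accepting its bid.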
20080301689 | DISCRETE, DEPLETING CHIPS FOR OBTAINING DESIRED SERVICE LEVEL CHARACTERISTICS - The present invention provides discrete, depleting chips for allocating computational resources for obtaining desired service level characteristics, wherein discrete chips deplete from a maximum allocated amount but may, in an optional implementation, be allowed to be replenished through the purchase of additional chips. A number of chips are assigned to a requestor/party, known as a business unit (BU), which could be a department, or group providing like-functionality services. In one implementation, the chips themselves could represent base monetary units integrated over time. | 12-04-2008 |
20080301690 | Model-based planning with multi-capacity resources - Systems and methods are described that facilitate performing model-based planning techniques for allocations of multi-capacity resources in a machine. The machine may be, for instance, a printing platform, such as a xerographic machine. According to various features, the multi-capacity resource may be a sheet buffer, and temporal constraints may be utilized to determine whether an insertion point for a new allocation of the sheet buffer is feasible. Multiple insertion points may be evaluated (e.g., serially or in parallel) to facilitate determining an optimal solution for a print job or the like. | 12-04-2008 |
20080301691 | METHOD FOR IMPROVING RUN-TIME EXECUTION OF AN APPLICATION ON A PLATFORM BASED ON APPLICATION METADATA - A method for improving run-time execution of an application on a platform based on application metadata is disclosed. In one embodiment, the method comprises loading a first information in a standardized predetermined format describing characteristics of at least one of the applications. The method further comprises generating the run-time manager, based on the first information, the run-time manager comprising at least two run-time sub-managers, each handling the management of a different resource. The information needed to generate the two run-time sub-managers is at least partially shared. | 12-04-2008 |
20080301692 | FACILITATING ACCESS TO INPUT/OUTPUT RESOURCES VIA AN I/O PARTITION SHARED BY MULTIPLE CONSUMER PARTITIONS - At least one input/output (I/O) firmware partition is provided in a partitioned environment to facilitate access to I/O resources owned by the at least one I/O firmware partition. The I/O resources of an I/O firmware partition are shared by one or more other partitions of the environment, referred to as consumer partitions. The consumer partitions use the I/O firmware partition to access the I/O resources. Since the I/O firmware partitions are responsible for providing access to the I/O resources owned by those partitions, the consumer partitions are relieved of this task, reducing complexity and costs in the consumer partitions. | 12-04-2008 |
20080301693 | BLOCK ALLOCATION TIMES IN A COMPUTER SYSTEM - A method and apparatus improves the block allocation time in a parallel computer system. A pre-load controller pre-loads blocks of hardware in a supercomputer cluster in anticipation of demand from a user application. In the preferred embodiments the pre-load controller determines when to pre-load the compute nodes and the block size to allocate the nodes based on pre-set parameters and previous use of the computer system. Further, in preferred embodiments each block of compute nodes in the parallel computer system has a stored hardware status to indicate whether the block is being pre-loaded, or already has been pre-loaded. In preferred embodiments, the hardware status is stored in a database connected to the computer's control system. In other embodiments, the compute nodes are remote computers in a distributed computer system. | 12-04-2008 |
20080301694 | COMMUNICATION SCHEDULING WITHIN A PARALLEL PROCESSING SYSTEM - Within a data processing system, one or more register files are assigned to respective states of a graph for each of a plurality of clock cycles. A plurality of edges are inserted to form connections between the states of the graph, with respective weights being assigned to each of the edges. A best route through the graph is then determined based, at least in part, on the weights assigned to the edges. | 12-04-2008 |
20080307425 | Data Processing System and Method - A data processing system and method for reallocating resources among execution environments of the system. The reallocation of resources is performed by monitoring the utilization of a resource to determine whether the utilization has a predetermined relationship with a utilization measure and is thereby unacceptable, and, based upon this determination, reassigning the resource associated with a first execution environment to a second execution environment. The utilization measure is associated with the load of the processor or the utilization of the memory. | 12-11-2008 |
20080307426 | DYNAMIC LOAD MANAGEMENT IN HIGH AVAILABILITY SYSTEMS - Techniques for dynamic load management in processing systems are described. Tuples or vectors, for example, can be used to characterize loads and capacities. Assignments of tasks and redistribution of tasks in the system can be made using the tuples or vectors. | 12-11-2008 |
20080307427 | Methods and apparatus for channel interleaving in OFDM systems - A method and apparatus for channel interleaving in a wireless communication system. In one aspect of the present invention, the data resource elements are assigned to multiple code blocks, and the numbers of data resource elements assigned to each code block are substantially equal. In another aspect of the present invention, a time-domain-multiplexing-first (TDM-first) approach and a frequency-domain-multiplexing-first (FDM-first) approach are proposed. In the TDM-first approach, at least one of a plurality of code blocks are assigned with a number of consecutive data carrying OFDM symbols. In the FDM-first approach, at least one of the plurality of code blocks are assigned with all of the data carrying OFDM symbols. Either one of the TDM first approach and the FDM-first approach may be selected in dependence upon the number of the code blocks, or the transport block size, or the data rate. | 12-11-2008 |
20080313638 | Network Resource Management Device - The present invention introduces a plurality of resource management devices (M | 12-18-2008 |
20080313639 | POLICY BASED SCHEDULING OF SOFTWARE APPLICATIONS - A method and apparatus for using policies to limit resource usage by software applications is disclosed herein. The policies define rules that specify a maximum amount of a resource that a particular application is allowed to use given the current state of the computer system, in one embodiment. The state can be defined based on conditions such as user activity, resource usage, time of day, etc. A scheduler monitors the computer system and the application and enforces the policies to control the resource usage of each application. If the scheduler determines that an application has been using more of a particular resource than is allowed then the scheduler takes some action to reduce resource usage until actual resource usage is at or below allowed resource usage. Each application has its own set of policies associated that allow the application to define rules to limit resource usage, in one embodiment. | 12-18-2008 |
20080313640 | Resource Modeling and Scheduling for Extensible Computing Platforms - Energy management modeling and scheduling techniques are described for reducing the power consumed to execute an application on a multi-processor computing platform within a certain time period. In one embodiment, a sophisticated resource model which accounts for discrete operating modes for computing components/resources on a computing platform and transition costs for transitioning between each of the discrete modes is described. This resource model provides information for a specific heterogeneous multi-processor computing platform and an application being implemented on the platform in a form that can be processed by a selection module, typically utilizing an integer linear programming (ILP) solver or algorithm, to select a task schedule and operating configuration(s) for executing the application within a given time. | 12-18-2008 |
20080313641 | Computer system, method and program for managing volumes of storage system - Provided is a computer system including a host computer, a storage system, and a management computer, in which the storage system receives data I/O requests to virtual logical volumes and data I/O requests to one or more real logical volumes, each of the virtual logical volumes is allocated to one of one or more pools, storage areas of physical storage systems are allocated to all storage areas defined as the pools, and when a performance problem has occurred in one of the virtual logical volumes, the management computer selects the one of the virtual logical volumes, and selects a pool other than the pool to which the selected virtual logical volume is allocated and the real logical volumes as a migration destination of the selected virtual logical volume, to thereby prevent a performance problem from being caused by interference among the virtual logical volumes sharing the pool. | 12-18-2008 |
20080313642 | SYSTEM AND METHOD FOR ALLOCATING SPARE SYSTEM RESOURCES - A system and method for allocating and/or utilizing spare computing system (e.g., personal computing system) resources. Various aspects of the present invention may, for example and without limitation, provide a system and/or method that communicates incentive information with computing systems, and/or representatives thereof, regarding the allocation of computing resources for utilization by other computing systems and/or incentives that may be associated with such utilization. Various aspects of the present invention may, for example, allocate one or more resources of a computing system for utilization by another computing system based, at least in part, on such communicated incentive information. | 12-18-2008 |
20080313643 | WORKLOAD SCHEDULER WITH CUMULATIVE WEIGHTING INDEXES - A workload scheduler supporting the definition of a cumulative weighting index is proposed. The scheduler maintains ( | 12-18-2008 |
20080320482 | MANAGEMENT OF GRID COMPUTING RESOURCES BASED ON SERVICE LEVEL REQUIREMENTS - Generally speaking, systems, methods and media for management of grid computing resources based on service level requirements are disclosed. Embodiments of a method for scheduling a task on a grid computing system may include updating a job model by determining currently requested tasks and projecting future task submissions and updating a resource model by determining currently available resources and projecting future resource availability. The method may also include updating a financial model based on the job model, resource model, and one or more service level requirements of an SLA associated with the task, where the financial model includes an indication of costs of a task based on the service level requirements. The method may also include scheduling performance of the task based on the updated financial model and determining whether the scheduled performance satisfies the service level requirements of the task and, if not, performing a remedial action. | 12-25-2008 |
20080320483 | RESOURCE MANAGEMENT SYSTEM AND METHOD - A resource management system is provided, implemented between a service bundle developer and provider and a service bundle user. A resource requirement determining device determines a system resource requirement for a service bundle provided by the service bundle developer and provider, and generates resource requirement information corresponding to the service bundle. A processor receives information of system resource utilization status from the service bundle user and determines whether the available resource of the service bundle user is sufficient for the resource requirement of the service bundle. When the available resource of the service bundle user is insufficient, the processor generates a waiting queue and adds the service bundle to the waiting queue. When the available resource of the service bundle user is sufficient, the processor installs the service bundle specified in the waiting queue in the service bundle user. A storage device stores the waiting queue and the corresponding resource requirement information. | 12-25-2008 |
20080320484 | METHOD AND SYSTEM FOR BALANCING THE LOAD AND COMPUTER RESOURCES AMONG COMPUTERS - A method and system for balancing the load of computer resources among a plurality of computers having consumers consuming the resources is disclosed. After defining the lower threshold of the consumption level of the resources and obtaining the consumption level of the resources for each of the consumers and for each of said computers, the consumption level for each of the computers is compared during a period with its associated lower threshold. Whenever a computer having a consumption level of the resources higher than the lower threshold is identified, a new layout of computer resources for each of the consumers is determined. Consumers are then shifted from their current location in the computer to a corresponding location in another computer according to the layout, so that the consumption level of the resource(s) for a computer may be reduced. | 12-25-2008 |
20080320485 | Logic for Synchronizing Multiple Tasks at Multiple Locations in an Instruction Stream - Logic (also called “synchronizing logic”) in a co-processor (that provides an interface to memory) receives a signal (called a “declaration”) from each of a number of tasks, based on an initial determination of one or more paths (also called “code paths”) in an instruction stream (e.g. originating from a high-level software program or from low-level microcode) that a task is likely to follow. Once a task (also called “disabled” task) declares its lack of a future need to access a shared data, the synchronizing logic allows that shared data to be accessed by other tasks (also called “needy” tasks) that have indicated their need to access the same. Moreover, the synchronizing logic also allows the shared data to be accessed by the other needy tasks on completion of access of the shared data by a current task (assuming the current task was also a needy task). | 12-25-2008 |
20090007125 | Resource Allocation Based on Anticipated Resource Underutilization in a Logically Partitioned Multi-Processor Environment - A method, apparatus and program product for allocating resources in a logically partitioned multiprocessor environment. Resource usage is monitored in a first logical partition in the logically partitioned multiprocessor environment to predict a future underutilization of a resource in the first logical partition. An application executing in a second logical partition in the logically partitioned multiprocessor environment is configured for execution in the second logical partition with an assumption made that at least a portion of the underutilized resource is allocated to the second logical partition during at least a portion of the predicted future underutilization of the resource. | 01-01-2009 |
20090007126 | SWAP CAP RESOURCE CONTROL FOR USE IN VIRTUALIZATION - A method of implementing virtualization involves an improved approach to virtual memory management. An operating system includes a kernel, a resource control framework, a virtual memory subsystem, and a virtualization subsystem. The virtualization subsystem is capable of creating separate environments that logically isolate applications from each other. The virtual memory subsystem utilizes swap space to manage a backing store for anonymous memory. The separate environments share physical resources including swap space. When a separate environment is configured, properties are defined. Configuring a separate environment may include specifying a swap cap that specifies a maximum amount of swap space usable by the separate environment. The resource control framework includes a swap cap resource control. The swap cap resource control is enforced by the kernel such that during operation of the separate environment, the kernel enforces the swap cap specified when the separate environment was configured. | 01-01-2009 |
20090007127 | SYSTEM AND METHOD FOR OPTIMIZING DATA ANALYSIS - There is provided an adaptive semi-synchronous parallel processing system and method, which may be adapted to various data analysis applications such as flow cytometry systems. By identifying the relationship and memory dependencies between tasks that are necessary to complete an analysis, it is possible to significantly reduce the analysis processing time by selectively executing tasks after careful assignment of tasks to one or more processor queues, where the queue assignment is based on an optimal execution strategy. Further strategies are disclosed to address optimal processing once a task undergoes computation by a computational element in a multiprocessor system. Also disclosed is a technique to perform fluorescence compensation to correct spectral overlap between different detectors in a flow cytometry system due to emission characteristics of various fluorescent dyes. | 01-01-2009 |
20090007128 | METHOD AND SYSTEM FOR ORCHESTRATING SYSTEM RESOURCES WITH ENERGY CONSUMPTION MONITORING - A method and system for orchestrating system resources including provisioning process, performance measurement, capacity planning and infrastructure deployment. An integrated solution is provided which could help monitoring the system power consumption and applying corrective rebalancing actions. Such orchestrating and rebalancing activity is performed by the system taking into account the estimated power consumption of the single SW applications. | 01-01-2009 |
20090007129 | Method of allocating resources among client work machines - A method for allocating resources among a plurality of client work machines includes representing at least one client work machine as a resource object, representing at least one manufacturing process executable at a client work machine as a process, defining at least one usage capability for a resource object, selecting one of at least two states of the usage capability, and executing at least one manufacturing process on at least one client work machine according to the selected state of the usage capability. | 01-01-2009 |
20090007130 | IMAGE FORMING APPARATUS, CONTROLLING METHOD, AND CONTROL PROGRAM - An image forming apparatus in which programs for controlling processes that are provided by the image forming apparatus are installed. The image forming apparatus includes means for managing the use amount of each program by use of a counter, means for recognizing the counter which corresponds to the identification information of the program and can manage the use amount of the program, means for correlating the program with the counter recognized by the recognizing means to manage the counter, means which can set an upper limit on the use amount of each program for the use amount managing means, and means for controlling the process by the image forming apparatus based on the upper limit of the use amount set by the setting means for each of the types of the programs. | 01-01-2009 |
20090007131 | Automating the Life Cycle of a Distributed Computing Application - A system for automating the life cycle of a software application is provided. The software application utilizes computing resources distributed over a network. A representative system includes creating logic operable to create a task list which describes how at least one stage in the application life cycle is to be performed, and processing logic responsive to the creating logic, operable to process the task list to perform at least one stage in the application life cycle. The processing logic is integrated with a development environment, and the development environment is used to develop the software application. | 01-01-2009 |
20090007132 | MANAGING PROCESSING RESOURCES IN A DISTRIBUTED COMPUTING ENVIRONMENT - Multiple timing availability chains can be created for individual processing resource in a common pool of resources. Each chain can include a plurality of time intervals each interval having a start time and an end time. Timing availability chains for individual processing resources in the pool of resources can be merged together based on a timing reference to create a pool timing availability chain based on the start times and end times for the intervals. Job plan execution can be simulated based on the pool timing availability chain. The pool chain can be utilized to simulate job execution and based on such simulation a job scheduler can improve the scheduling of jobs on a pool of resources. Other embodiments are also disclosed. | 01-01-2009 |
20090013323 | SYNCHRONISATION - The invention provides a processor comprising an execution unit arranged to execute multiple program threads, each thread comprising a sequence of instructions, and a plurality of synchronisers for synchronising threads. Each synchroniser is operable, in response to execution by the execution unit of one or more synchroniser association instructions, to associate with a group of at least two threads. Each synchroniser is also operable, when thus associated, to synchronise the threads of the group by pausing execution of a thread in the group pending a synchronisation point in another thread of that group. | 01-08-2009 |
20090013324 | COMMUNICATION SYSTEM, INFORMATION PROCESSING SYSTEM, CONNECTION SERVER, PROCESSING SERVER, INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD AND PROGRAM - A connection server ( | 01-08-2009 |
20090013325 | RESOURCE ALLOCATION METHOD, RESOURCE ALLOCATION PROGRAM AND RESOURCE ALLOCATION APPARATUS - A resource allocation method, a resource allocation program, and a resource allocation apparatus in which a request reception server subjects an inputted SQL to a syntax analysis, extracts at least one SQL process from the SQL, calculates a resource cost of a database required by the BES to perform the SQL process for each of process types contained in the SQL process, decides an allocation ratio for allocating the resource of a request executing server to a virtualized server in accordance with a resource cost ratio required by each of the BES to execute the SQL process, and requests for execution of the respective BES on the virtualized server to which the resource has been allocated so as to execute the SQL process. | 01-08-2009 |
20090013326 | A SYSTEM AND METHOD FOR RESOURCE MANAGEMENT AND CONTROL - The present invention relates to a complete system and method for centralized management, control and integration of different resources, including normally non-compatible systems. Said resources can be of arbitrary type—people, assets, information systems as well as other resources, including moving objects. The system comprises information systems and hardware enabling the gathering, processing and transmission of initial information from different resources, in real-time or possibly later, and control of said resources based on predefined or elaborated rules. The invention also allows the information related to the location of resources to be stored and used. The present invention, being a centrally controlled and managed open information system with the possibility of resource billing, belongs to the field of universal information systems. | 01-08-2009 |
20090019445 | Deterministic task scheduling in a computing device - Method and system for scheduling tasks in a computing device in a manner that ensures substantially seamless processing of an active job while preventing starvation of background tasks. In one aspect, a method for scheduling tasks in a computing device comprises the steps of statically allocating processor time (P) to a background task class (S) and dynamically allocating processor time (p) to background tasks within the background task class (S) based at least in part on a current count (n) of the background tasks. The background task processor time (p) may equal the background task class processor time (P) divided by the current count (n). The method may further comprise, in each of successive processing periods, assigning a processor to each of the background tasks for their respective background task processor times (P | 01-15-2009 |
20090019446 | DETERMINING A POSSIBLE LOT SIZE - The invention provides methods and apparatus, including computer program products, for determining a possible lot size of units with respect to a fixed date for a chain of at least two process steps, each process step requiring a respective assigned resource, and consuming a respective time per unit for being performed by the respective assigned resource, where the process steps are sequentially dependent on each other. This is achieved by the following: | 01-15-2009 |
20090019447 | Adaptive Throttling System for Data Processing Systems - An adaptive throttling system for minimizing the impact of non-production work on production work in a computer system is provided. The adaptive throttling system throttles production work and non-production work to optimize production. The adaptive throttling system allows system administrators to specify a quantified limit on the performance impact of non-production or utility work on production work. The throttling rate of the utility is then automatically determined by a supervisory agent, so that the utilities' impact is kept within the specified limit. The adaptive throttling system adapts dynamically to changes in workloads so as to ensure that valuable system resources are well utilized and utility work is not delayed unnecessarily. | 01-15-2009 |
20090019448 | Cross Process Memory Management - A method for efficiently managing memory resources in a computer system having a graphics processing unit that runs several processes simultaneously on the same computer system includes using threads to communicate that additional memory is needed. If the request indicates that termination will occur then the other processes will reduce their memory usage to a minimum to avoid termination but if the request indicates that the process will not run optimally then the other processes will reduce their memory usage to 1/N where N is the count of the total number of running processes. The apparatus includes a computer system using a graphics processing unit and processes with threads that can communicate directly with other threads and with a shared memory which is part of the operating system memory. | 01-15-2009 |
20090025004 | Scheduling by Growing and Shrinking Resource Allocation - A scheduler for computing resources may periodically analyze running jobs to determine if additional resources may be allocated to the job to help the job finish quicker and may also check if a minimum amount of resources is available to start a waiting job. A job may consist of many tasks that may be defined with parallel or serial relationships between the tasks. At various points during execution, the resource allocation of active jobs may be adjusted to add or remove resources in response to a priority system. A job may be started with a minimum amount of resources and the resources may be increased and decreased over the life of the job. | 01-22-2009 |
20090025005 | RESOURCE ASSIGNMENT SYSTEM - A method and system for assigning resources such as housing associated with an educational institution via a communication network is disclosed. A user of a client computer sends a registration request defining registration data to a server facilitating a resource assignment service. The resource assignment service then determines the eligibility of users to use the service based on retrieved registration data, and assigns a randomly generated personal identification number (PIN) to eligible users. The resource assignment service can then assign a timeslot for eligible users to request a desired resource as a function of their assigned PINs. Users may then use the client computer during their assigned timeslots to submit requests to the resource assignment service for desired resource assignments. | 01-22-2009 |
20090025006 | SYSTEM AND METHOD FOR CONTROLLING RESOURCE REVOCATION IN A MULTI-GUEST COMPUTER SYSTEM - At least one guest system, for example, a virtual machine, is connected to a host system, which includes a system resource such as system machine memory. Each guest system includes a guest operating system (OS). A resource requesting mechanism, preferably a driver, is installed within each guest OS and communicates with a resource scheduler included within the host system. If the host system needs any one of the guest systems to relinquish some of the system resource it is currently allocated, then the resource scheduler instructs the driver within that guest system's OS to reserve more of the resource, using the guest OS's own, native resource allocation mechanisms. The driver thus frees this resource for use by the host, since the driver does not itself actually need the requested amount of the resource. The driver in each guest OS thus acts as a hollow “balloon” to “inflate” or “deflate,” that is, reserve more or less of the system resource via the corresponding guest OS. The resource scheduler, however, remains transparent to the guest systems. | 01-22-2009 |
20090031320 | Storage System and Management Method Thereof - A storage system comprises a first storage apparatus having a volume for a host computer, a second storage apparatus connected to the first storage apparatus, and having a volume having a pair relationship with a first volume in the first storage apparatus, and a management apparatus connected to the first storage apparatus and the second storage apparatus. The management apparatus includes a user interface for setting an attribute of a function related to the volume of the first storage apparatus and an attribute of a function related to the volume of the second storage apparatus. The management apparatus compares the attribute of the function related to the first volume and the attribute of the function related to the second volume, and outputs the result of the comparison to the user interface. | 01-29-2009 |
20090037920 | SYSTEM AND METHOD FOR INDICATING USAGE OF SYSTEM RESOURCES USING TASKBAR GRAPHICS - System and method for indicating relative usage of a computer system resource by a plurality of applications each running in an active window, wherein each active window is represented on a taskbar element by a taskbar button, are described. In one embodiment, the method comprises, for each of the active windows, determining a resource usage rate for the application running in the active window, the resource usage rate comprising a percentage of a total system resource usage for which the application accounts; subsequent to the determining, ranking the applications in order of the determined resource usage rates thereof; and redisplaying the taskbar buttons to indicate, via at least one display characteristic, the relative system resource usage rates of the applications. | 02-05-2009 |
20090037921 | TECHNIQUES FOR INSTANTIATING AND CONFIGURING PROJECTS - Techniques for project management instantiation and configuration are provided. A master project includes policy directives that drive the dynamic instantiation and configuration of resources for a project. The resources are instantiated and configured on demand and when resources are actually requested, in response to the policy directives. | 02-05-2009 |
20090037922 | WORKLOAD MANAGEMENT CONTROLLER USING DYNAMIC STATISTICAL CONTROL - A computer system comprises a workload management controller that detects and tracks resource consumption volatility patterns and automatically and dynamically adjusts resource headroom according to the volatility patterns. | 02-05-2009 |
20090037923 | Apparatus and method for detecting resource consumption and preventing workload starvation - In an embodiment of the invention, an apparatus and method for detecting resource consumption and preventing workload starvation are provided. The apparatus and method perform acts including: receiving a query; determining whether the query will be classified as a resource-intense query, based on a number of passes by a cache call over a data blocks set during a time window, where the cache call is associated with the query; and, if the query is classified as a resource-intense query, then responding to prevent workload starvation. | 02-05-2009 |
20090044194 | MULTITHREADED LOCK MANAGEMENT - Apparatus, systems, and methods may operate to construct a memory barrier to protect a thread-specific use counter by serializing parallel instruction execution. If a reader thread is new and a writer thread is not waiting to access data to be read by the reader thread, the thread-specific use counter is created and associated with a read data structure and a write data structure. The thread-specific use counter may be incremented if a writer thread is not waiting. If the writer thread is waiting to access the data after the thread-specific use counter is created, then the thread-specific use counter is decremented without accessing the data by the reader thread. Otherwise, the data is accessed by the reader thread and then the thread-specific use counter is decremented. Additional apparatus, systems, and methods are disclosed. | 02-12-2009 |
20090049448 | Grid Non-Deterministic Job Scheduling - The present invention is a method for scheduling jobs in a grid computing environment without having to monitor the state of the resources on the grid, comprising a Global Scheduling Program (GSP) and a Local Scheduling Program (LSP). The GSP receives jobs submitted to the grid and distributes each job to the closest resource. The resource then runs the LSP to determine if the resource can execute the job under the conditions specified in the job. The LSP either rejects or accepts the job based on the current state of the resource properties and informs the GSP of the acceptance or rejection. If the job is rejected, the GSP randomly selects another resource to send the job to using a resource table. The resource table contains the state-independent properties of every resource on the grid. | 02-19-2009 |
20090049449 | METHOD AND APPARATUS FOR OPERATING SYSTEM INDEPENDENT RESOURCE ALLOCATION AND CONTROL - An apparatus and method for controlling resources in a computing system including receiving an allocation request for a resource; determining whether an allocation limit for the resource has been reached; and, restricting access to the resource upon determination that the allocation limit has been reached. | 02-19-2009 |
20090055830 | METHOD AND SYSTEM FOR ASSIGNING LOGICAL PARTITIONS TO MULTIPLE SHARED PROCESSOR POOLS - A method and system for assigning logical partitions to multiple named processor pools. Sets of physical processors are assigned to predefined processor sets. Named processor pools with unique pool names are defined. The processor sets are assigned to the named processor pools so that each processor set is assigned to a unique named processor pool. A first set of logical partitions is assigned to a first named processor pool and a second set of logical partitions is assigned to a second named processor pool. A first processor set is assigned to the first named processor pool and a first set of physical processors is assigned to the first processor set. Similarly, a second processor set is assigned to the second named processor pool and a second set of physical processors is assigned to the second processor set. | 02-26-2009 |
20090055831 | Allocating Network Adapter Resources Among Logical Partitions - In an embodiment, a network adapter has a physical port that is multiplexed to multiple logical ports, which have default queues. The adapter also has other queues, which can be allocated to any logical port, and resources, which map tuples to queues. The tuples are derived from data in packets received via the physical port. The adapter determines which queue should receive a packet based on the received tuple and the resources. If the received tuple matches a resource, then the adapter stores the packet to the corresponding queue; otherwise, the adapter stores the packet to the default queue for the logical port specified by the packet. In response to receiving an allocation request from a requesting partition, if no resources are idle, a resource is selected for preemption that is already allocated to a selected partition. The selected resource is then allocated to the requesting partition. | 02-26-2009 |
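The tuple-to-queue demultiplexing in this entry can be illustrated with a small sketch. Modeling the adapter's "resources" as a dictionary from connection tuples to queue identifiers is my own simplification; unmatched packets fall back to the default queue of the logical port named in the packet, as the abstract describes.

```python
# One default queue per logical port, plus allocated tuple->queue resources.
default_queues = {0: "dq0", 1: "dq1"}
resources = {("10.0.0.1", 80): "q7"}      # tuple allocated to a partition

def route(packet):
    # Derive the tuple from packet data, then match against resources.
    tup = (packet["src"], packet["port"])
    return resources.get(tup, default_queues[packet["logical_port"]])

hit = route({"src": "10.0.0.1", "port": 80, "logical_port": 0})
miss = route({"src": "10.0.0.2", "port": 22, "logical_port": 1})
```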
20090055832 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR EVALUATING A TEST OF AN ALTERNATIVE SYSTEM - A method for checking an alternative system test, the method includes: determining a relationship between (i) utilization of resources during an execution of a group of programs by a first system when operating in a non-testing mode and (ii) utilization of resources during an execution of an alternative system test by the alternative system; wherein the alternative system test comprises at least one program of the group of programs. | 02-26-2009 |
20090055833 | System and method for performance monitoring - A system for monitoring a computer software system includes a first user actuated tuning knob for allocating space in memory for performance monitoring; a second user actuated tuning knob for specifying a time out value for in-flight units of work; and a transaction monitor responsive to the first and second user actuated tuning knobs for accumulating, in synonym chain cells in the allocated space, timing statistics for a plurality of in-flight units of work. | 02-26-2009 |
20090055834 | VIRTUALIZATION PLANNING SYSTEM - An interactive virtualization management system provides an assessment of proposed or existing virtualization schemes. A Virtual Technology Overhead Profile (VTOP) is created for each of a variety of configurations of host computer systems and virtualization technologies by measuring the overhead experienced under a variety of conditions. The multi-variate overhead profile corresponding to each target configuration being evaluated is used by the virtualization management system to determine the overhead that is to be expected on the target system, based on the particular set of conditions at the target system. Based on these overhead estimates, and the parameters of the jobs assigned to each virtual machine on each target system, the resultant overall performance of the target system for meeting the performance criteria of each of the jobs in each virtual machine is determined, and over-committed virtual machines and computer systems are identified. | 02-26-2009 |
20090064156 | COMPUTER PROGRAM PRODUCT AND METHOD FOR CAPACITY SIZING VIRTUALIZED ENVIRONMENTS - A computer system determines an optimal hardware system environment for a given set of workloads by allocating functionality from each workload to logical partitions, where each logical partition includes resource demands, assigning a priority weight factor to each resource demand, configuring potential hardware system environments, where each potential hardware system environment provides resource capacities, and computing a weighted sum of least squares metric for each potential hardware system environment. | 03-05-2009 |
20090064157 | ASYNCHRONOUS DATA STRUCTURE PULL APPLICATION PROGRAMMING INTERFACE (API) FOR STREAM SYSTEMS - Provided are techniques for processing data items. A limit on the number of dequeue operations allowed in a current step of processing for a queue-like data structure is set, wherein the number of allowed dequeue operations limit at least one of an amount of CPU resources and an amount of memory resources to be used by an operator. The operator to perform processing is selected and the operator is activated by passing control to the operator, which then dequeues data constrained by the limits set. In response to receiving control back from the operator, the data structure size is examined to determine whether the operator made forward progress in that the operator enqueued or dequeued at least one data item. | 03-05-2009 |
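The per-step dequeue limit and forward-progress check described above can be sketched as follows. All names are illustrative; the essential shape is that the scheduler caps how many items an operator may dequeue while it holds control, then compares the data structure size before and after to see whether the operator enqueued or dequeued anything.

```python
from collections import deque

def run_step(queue, operator, limit):
    before = len(queue)
    operator(queue, limit)          # pass control to the operator
    return len(queue) != before     # forward progress was made

def draining_operator(queue, limit):
    # A toy operator that dequeues items, constrained by the step limit
    # (which bounds the CPU and memory it can consume per activation).
    taken = 0
    while queue and taken < limit:
        queue.popleft()
        taken += 1

q = deque(range(5))
progressed = run_step(q, draining_operator, limit=3)
```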
20090064158 | MULTI-CORE RESOURCE UTILIZATION PLANNING - Techniques for multi-core resource utilization planning are provided. An agent is deployed on each core of a multi-core machine. The agents cooperate to perform one or more tests. The tests result in measurements for performance and thermal characteristics of each core and each communication fabric between the cores. The measurements are organized in a resource utilization map and the map is used to make decisions regarding core assignments for resources. | 03-05-2009 |
20090064159 | SYSTEM AND METHOD FOR OPTIMIZING LOAD DISTRIBUTION ACROSS LOGICAL AND PHYSICAL RESOURCES IN A STORAGE SYSTEM - An apparatus, system and method to optimize load distribution across logical and physical resources in a storage system. An apparatus in accordance with the invention may include an availability module and an allocation module. The availability module may dynamically assign values to resources in a hierarchical tree structure. Each value may correspond to an availability parameter such as allocated volumes, current resource utilization, and historic resource utilization. The allocation module may serially process the values and allocate a load to a least busy resource in the hierarchical tree structure based on the assigned values. | 03-05-2009 |
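Least-busy allocation over a hierarchical resource tree, as this entry describes, might look like the sketch below. The tree shape and the single `value` field (standing in for the combined availability parameters — allocated volumes, current and historic utilization) are assumptions for illustration.

```python
# Each node carries a dynamically assigned busyness value; the allocator
# descends level by level, always into the least busy child, and assigns
# the load to the leaf it reaches.
tree = {
    "value": 5,
    "children": [
        {"value": 3, "children": []},
        {"value": 1, "children": [
            {"value": 4, "children": []},
            {"value": 2, "children": []},
        ]},
    ],
}

def allocate(node):
    while node["children"]:
        node = min(node["children"], key=lambda c: c["value"])
    return node

leaf = allocate(tree)   # the least busy leaf resource
```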
20090064160 | Transparent lazy maintenance of indexes and materialized views - Described herein is a materialized view or index maintenance system that includes a task generator component that receives an indication that an update transaction has committed against a base table in a database system. The task generator component, in response to the update transaction being received, generates a maintenance task for one or more of a materialized view or an index that is affected by the update transaction. A maintenance component transparently performs the maintenance task when a workload of a CPU in the database system is below a threshold or when an indication is received that a query that uses the one or more of the materialized view or the index has been received. | 03-05-2009 |
20090064161 | DEVICE ALLOCATION UTILIZING JOB INFORMATION, STORAGE SYSTEM WITH A SPIN CONTROL FUNCTION, AND COMPUTER THEREOF - This invention provides a storage system coupled to a computer that executes data processing jobs by running a program, comprising: an interface; a storage controller; and disk drives. The storage controller is configured to: control spinning of disk in the disk drives; receive job information which contains an execution order of the job and a load attribute of the job from the computer before the job is executed; select a logical volume to which none of the storage areas are allocated when requested by the computer to provide a logical volume for storing a file that is used temporarily by the job to be executed; select which storage area to allocate to the selected logical volume based on at least one of the job execution order and the job load attribute; allocate the selected storage area to the selected logical volume; and notify the computer of the selected logical volume. | 03-05-2009 |
20090064162 | RESOURCE TRACKING METHOD AND APPARATUS - The present invention is directed to a parallel processing infrastructure, which enables the robust design of task scheduler(s) and communication primitive(s). This is achieved, in one embodiment of the present invention, by decomposing the general problem of exploiting parallelism into three parts. First, an infrastructure is provided to track resources. Second, a method is offered by which to expose the tracking of the aforementioned resources to task scheduler(s) and communication primitive(s). Third, a method is established by which task scheduler(s) in turn may enable and/or disable communication primitive(s). In this manner, an improved parallel processing infrastructure is provided. | 03-05-2009 |
20090064163 | Mechanisms for Creation/Deletion of Linear Block Address Table Entries for Direct I/O - The present invention provides mechanisms that enable application instances to pass block mode storage requests directly to a physical I/O adapter without run-time involvement from the local operating system or hypervisor. In one aspect of the present invention, a mechanism is provided for handling user space creation and deletion operations for creating and deleting allocations of linear block addresses of a physical storage device to application instances. For creation, it is determined if there are sufficient available resources for creation of the allocation. For deletion, it is determined if there are any I/O transactions active on the allocation before performing the deletion. Allocation may be performed only if there are sufficient available resources and deletion may be performed only if there are no active I/O transactions on the allocation being deleted. | 03-05-2009 |
20090070766 | DYNAMIC WORKLOAD BALANCING IN A THREAD POOL - Provided are techniques for workload balancing. A message is received on a channel. A thread in a thread pool is selected to process the message. In response to determining that the message has been processed and a response has been sent on the channel by the thread, it is determined whether a total number of threads in the thread pool is greater than a low water mark plus one and whether the channel has more than a maximum number of threads blocked on a receive, wherein the low water mark represents a minimum number of threads in the thread pool. In response to determining that a number of threads in the thread pool is greater than the low water mark plus one and that the channel has more than the maximum number of threads blocked on a receive, the thread is terminated. In response to determining at least one of the number of threads in the thread pool is less than or equal to the low water mark plus one and the channel has less than or equal to the maximum number of threads blocked on a receive, the thread is retained. | 03-12-2009 |
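The retention rule in this entry is stated precisely enough to transcribe directly: a thread terminates only when the pool is above its floor *and* too many threads are already blocked receiving on the channel. The function name and parameter names below are my own.

```python
def should_terminate(pool_size, low_water_mark, blocked_on_receive, max_blocked):
    # Terminate only if BOTH conditions hold; otherwise retain the thread.
    return (pool_size > low_water_mark + 1
            and blocked_on_receive > max_blocked)

kill = should_terminate(pool_size=10, low_water_mark=4,
                        blocked_on_receive=6, max_blocked=3)
keep = should_terminate(pool_size=5, low_water_mark=4,
                        blocked_on_receive=6, max_blocked=3)
```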
20090070767 | Determining Desired Job Plan Based on Previous Inquiries in a Stream Processing Framework - A data stream processing system is provided that utilizes independent sites to process user-defined inquires over dynamic, continuous streams of data. A mechanism is provided for processing these inquiries over the continuous streams of data by matching new inquiries to previously submitted inquiries. The job plans containing sets of processing elements that were created for both the new inquiry and the previous inquiries are compared for consistency in input and output formatting and commonality of processing elements used. In accordance with the comparison, the new job plan, previous job plans or a combination of the new and previous job plans are used to process the new inquiry. Based on the results of processing the new inquiry, a determination is made regarding which job plans are used for future inquiries. | 03-12-2009 |
20090070768 | System and Method for Using Resource Pools and Instruction Pools for Processor Design Verification and Validation - A system and method for using resource pools and instruction pools for processor design verification and validation is presented. A test case generator organizes processor resources into resource pools using a resource pool mask. Next, the test case generator separates instructions into instruction pools based upon the resources that each instruction requires. The test case generator then creates a test case using one or more sub test cases by assigning a resource pool to each sub test case, identifying instruction pools that correspond to the assigned test case, and building each sub test case using instructions included in the identified instruction pools. | 03-12-2009 |
20090070769 | PROCESSING SYSTEM HAVING RESOURCE PARTITIONING - A processing system includes a resource that is accessible by a processor and resource partitioning software executable by the processor. The resource partitioning software may be executed to establish a resource partition for the resource. The resource partition defines a set of rules that are used to control access to the resource when a request for the resource is received from a software application and/or process. | 03-12-2009 |
20090070770 | Ordering Provisioning Request Execution Based on Service Level Agreement and Customer Entitlement - A solution provided here comprises receiving requests for a service from a plurality of customers, responding to the requests for a service, utilizing a shared infrastructure, and configuring the shared infrastructure, based on stored customer information. Another example of such a solution comprises: | 03-12-2009 |
20090077561 | Pipeline Processing Method and Apparatus in a Multi-processor Environment - A pipeline processing method and apparatus in a multi-processor environment partitions a task into overlapping sub-tasks that are allocated to multiple processors, the overlapping portions among the respective sub-tasks being shared by the processors that process the corresponding sub-tasks. A status of each of the processors is determined while each processor executes its sub-tasks, and which processor among the processors executes the overlapping portions among the respective sub-tasks is dynamically determined on the basis of the status of each of the processors. | 03-19-2009 |
20090083747 | METHOD FOR MANAGING APPLICATION PROGRAMS BY UTILIZING REDUNDANCY AND LOAD BALANCE - A method for managing application programs includes: monitoring whether there is at least one application program which is unresponsive among a plurality of started application programs; and automatically restarting the application program which is unresponsive, and evenly allocating a system resource among the plurality of application programs according to the number of application programs. | 03-26-2009 |
20090083748 | PROGRAM EXECUTION DEVICE - A resource information acquiring unit acquires processor resource information from outside. A program associating unit associates the processor resource information with a program. A processor resource allocating unit allocates processor resources to the program in accordance with the processor resource information when the program is executed. | 03-26-2009 |
20090083749 | RESTRICTING RESOURCES CONSUMED BY GHOST AGENTS - One aspect of the present invention can include a method for restricting resources consumed by ghost agents. The method can include the step of associating a ghost agent with a host. A resource utilization value can be ascertained for the ghost agent and the host combined. The ascertained resource utilization value can be compared with a usage threshold. A determination can be made as to whether operations of the ghost agent are to be executed based upon the previous comparison. | 03-26-2009 |
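The threshold comparison above reduces to a simple guard; the sketch below is an illustration with invented names and numbers, showing a ghost agent's operation running only while the combined host-plus-ghost utilization stays within the usage threshold.

```python
def ghost_may_run(host_util, ghost_util, usage_threshold):
    # Combined resource utilization value for ghost agent and host,
    # compared with the usage threshold.
    return host_util + ghost_util <= usage_threshold

allowed = ghost_may_run(0.60, 0.10, usage_threshold=0.75)
throttled = not ghost_may_run(0.70, 0.10, usage_threshold=0.75)
```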
20090083750 | METHOD AND APPARATUS FOR CONTROLLING MESSAGE TRAFFIC LICENSE - The present invention relates to a method and an apparatus for controlling message traffic licenses. The method includes: controlling message traffic through an ordinary license; judging whether the triggering conditions of using the first extended license are fulfilled, and, if the triggering conditions are fulfilled, using the first extended license to control the message traffic. The apparatus includes: a license management module, adapted to switch between the licenses according to the triggering conditions of the message traffic license; and a control module, adapted to control the message traffic by using the license selected by the license management module. The method and the apparatus for controlling message traffic licenses provided in an embodiment of the present invention perform hierarchical control on the short message traffic to overcome waste of system resources in the prior art caused by unitary setting of the maximum traffic and reduce the system resources occupied by invalid license traffic in the Short Message Service Center (SMSC). | 03-26-2009 |
20090089787 | Method and System for Migrating Critical Resources Within Computer Systems - A method and system for migrating at least one critical resource during a migration of an operative portion of a computer system are disclosed. In at least some embodiments, the method includes (a) sending first information constituting a substantial copy of a first of the at least one critical resource via at least one intermediary between a source component and a destination component. The method further includes (b) transitioning a status of the destination component from being incapable of receiving requests to being capable of receiving requests, and (c) re-programming an abstraction block to include modified addresses so that at least one incoming request signal is forwarded to the destination component rather than to the source component. | 04-02-2009 |
20090089788 | SYSTEM AND METHOD FOR HANDLING RESOURCE CONTENTION - In one aspect, the invention is directed to a method by which a user of a functional resource in a software environment can determine whether any other users are waiting to acquire control of the functional resource. The functional resource has associated therewith, a placeholder resource that is a placeholder for users waiting to acquire control of the functional resource. The method includes inquiring by the user of the functional resource whether the placeholder resource is available for exclusive control by the user of the functional resource. If the placeholder resource is available for exclusive control, then no other users are waiting for control of the functional resource and so the current user can keep control of it. If, however, the placeholder resource is not available, that indicates to the user of the functional resource that at least one other user is waiting for control of the functional resource and so the user of the functional resource may release control of the functional resource. | 04-02-2009 |
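The placeholder-resource trick above can be demonstrated with two ordinary locks; using `threading.Lock` for both the functional and placeholder resources is my own stand-in for whatever locking primitive the environment provides.

```python
import threading

functional = threading.Lock()
placeholder = threading.Lock()

functional.acquire()                       # current user holds the functional resource

# No one waiting yet: the placeholder is available, so keep control.
free = placeholder.acquire(blocking=False)
if free:
    placeholder.release()

# A second user announces itself by holding the placeholder while it waits.
placeholder.acquire()

# The current holder's inquiry now fails, revealing the waiter.
contended = not placeholder.acquire(blocking=False)
if contended:
    functional.release()                   # yield the functional resource
```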
20090089789 | Method to allocate inter-dependent resources by a set of participants - The object of the present invention is a method that allows a group of independent participants to coordinate decisions with respect to the allocation of interdependent resources, while maintaining certain privacy guarantees. | 04-02-2009 |
20090089790 | METHOD AND SYSTEM FOR COORDINATING HYPERVISOR SCHEDULING - A method for executing an application on a plurality of nodes, that includes synchronizing a first clock of a first node of the plurality of nodes and a second clock of a second node of the plurality of nodes, configuring a first hypervisor on the first node to execute a first application domain and a first privileged domain, wherein configuring the first hypervisor comprises allocating a first number of cycles of the first clock to the first privileged domain, configuring a second hypervisor on the second node to execute a second application domain and a second privileged domain, wherein configuring the second hypervisor comprises allocating the first number of cycles of the first clock to the second privileged domain, and executing the application in the first application domain and the second application domain, wherein the first application domain and the second application domain execute semi-synchronously and the first privileged domain and the second privileged domain execute semi-synchronously. | 04-02-2009 |
20090089791 | RESOURCE ALLOCATION UNIT QUEUE - Provided is a system, deployment and program for resource allocation unit queuing in which an allocation unit associated with a task is classified. An allocation unit freed as the task ends is queued for use by another task in a queue at a selected location within the queue in accordance with the classification of said allocation unit. In one embodiment, an allocation unit is queued at a first end of the queue if classified in a first class and is queued at a second end of the queue if classified in said second class. Other embodiments are described and claimed. | 04-02-2009 |
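The two-ended queuing described in this entry maps naturally onto a deque; the class names ("first"/"second") are illustrative, standing for whatever classification the task's freed allocation unit receives.

```python
from collections import deque

free_list = deque(["u1", "u2"])   # existing free allocation units

def free_unit(unit, klass):
    # Queue at a location selected by the unit's classification:
    # first class at one end, second class at the other.
    if klass == "first":
        free_list.appendleft(unit)    # reused soonest
    else:
        free_list.append(unit)        # reused last

free_unit("hot", "first")
free_unit("cold", "second")
```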
20090094608 | METHOD AND APPARATUS FOR MOVING A SOFTWARE APPLICATION IN A SYSTEM CLUSTER - In one aspect, the invention is directed to a method for shutting down a first instance of an application and starting up a second instance of the application. The first instance of the application has associated therewith at least one first-instance support resource. The second instance of the application has associated therewith at least one second-instance support resource. The method includes: | 04-09-2009 |
20090094609 | DYNAMICALLY PROVIDING A LOCALIZED USER INTERFACE LANGUAGE RESOURCE - Technologies are described herein for dynamically providing a localized user interface (“UI”) resource. A localization framework includes a resource manager, resource sets, and resource readers. The resource manager exposes an application programming interface (“API”) to application programs for requesting a localized UI resource from the resource manager. When the resource manager receives a request for a localized UI resource on the API, the resource manager queries the resource sets for the requested resource. If the first resource set is unable to provide the requested localized UI resource, another resource set may be queried. Multiple resource readers within each resource set may also be configured to provide flexibility in how UI resources are loaded and processed. | 04-09-2009 |
20090100435 | HIERARCHICAL RESERVATION RESOURCE SCHEDULING INFRASTRUCTURE - Scheduling system resources. A system resource scheduling policy for scheduling operations within a workload is accessed. The policy is specified on a workload basis such that the policy is specific to the workload. System resources are reserved for the workload as specified by the policy. Reservations may be hierarchical in nature where workloads are also hierarchically arranged. Further, dispatching mechanisms for dispatching workloads to system resources may be implemented independent from policies. Feedback regarding system resource use may be used to determine policy selection for controlling dispatch mechanisms. | 04-16-2009 |
20090100436 | PARTITIONING SYSTEM INCLUDING A GENERIC PARTITIONING MANAGER FOR PARTITIONING RESOURCES - The application discloses a generic partitioning manager for partitioning resources across one or more owner nodes. In the illustrated embodiments described, the partitioning manager interfaces with the one or more owner nodes through an owner library. A lookup node or application interfaces with the partitioning manager through the lookup library to look up addresses or locations of the partitioned resources. In the illustrated embodiments, resources are partitioned via the partitioning manager in response to lease request messages from an owner library, and the lease grant message includes a complete list of the leases for the owner node. | 04-16-2009 |
20090106763 | ASSOCIATING JOBS WITH RESOURCE SUBSETS IN A JOB SCHEDULER - A method, information processing system, and computer program storage product for associating jobs with resource subsets in a job scheduler. At least one job class that defines characteristics associated with a type of job is received. A list of resource identifiers for a set of resources associated with the job class is received. A set of resources available on at least one information processing system is received. The resource identifiers are compared with each resource in the set of resources available on the information processing system. A job associated with the job class is scheduled with a set of resources determined to be usable by the job based on the comparison. | 04-23-2009 |
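The identifier comparison in this entry amounts to a subset test; the resource names below are invented for illustration, and a real scheduler would of course match against richer descriptors than bare strings.

```python
# Resource identifiers required by the job class.
job_class_resources = {"gpfs", "infiniband"}

def usable(node_resources):
    # A node is usable only if it provides every identifier
    # listed for the job class.
    return job_class_resources <= set(node_resources)

ok = usable(["gpfs", "infiniband", "scratch"])
bad = usable(["gpfs", "ethernet"])
```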
20090106764 | Support for globalization in test automation - Various technologies and techniques are disclosed for supporting globalization in user interface automation. A resource key is provided that contains at least three data elements. A resource type data element contains data representing a resource type, a resource location data element contains data representing a location to a resource file, and a resource identifier data element contains data representing a resource identifier. During a resource file extraction operation, the resource location data element is used to locate the resource file, and the resource type data element and the resource identifier data element are used to locate a resource within the resource file that matches the resource type and the resource identifier. A process is provided for resolving a full path name to a resource file. A process is provided for performing a post-extraction action on an extracted resource string. | 04-23-2009 |
20090106765 | Predetermination and propagation of resources in a distributed software build - Various technologies and techniques are disclosed for propagating resources during a distributed build process. Subscription of interest is registered in resources needed during a distributed build process. Build data is analyzed to determine what resources will be needed. The subscriptions of interest are stored in a data store that is accessible by all build machines participating in the distributed build process. A status of subscriptions of interest is monitored in the data store. When the status of respective subscriptions of interest indicates that a publication notice was registered for a respective resource, the respective resource is retrieved from a machine that contains the resource. When a new resource is created that is needed by other build machines, a publication notification is registered with the data store so the other build machines can determine that the new resource is now available. | 04-23-2009 |
20090106766 | STORAGE ACCESS DEVICE - A storage access device, which issues an I/O request (input/output request) to a logical unit provided by one or more storage systems, holds association information showing that a plurality of logical volumes corresponding to a plurality of logical units, which belong to the same copy-set, are associated. In the storage access device, the respective associated logical volumes shown by this association information are allocated to a virtual device, and the virtual device is provided to an application. | 04-23-2009 |
20090113440 | Multiple Queue Resource Manager - In one embodiment, a multiple queue resource manager includes a number of queues in communication with at least one thread. The queues are each coupled to one of a corresponding number of clients and operable to receive messages from their respective clients. The at least one thread is coupled to a processor configured in a computing system and operable to alternately process a specified quantity of the messages from each of the plurality of queues. | 04-30-2009 |
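The round-robin service discipline above can be sketched as a single thread taking at most a fixed quantum of messages from each client queue in turn, so no client starves the others. Names and the quantum value are illustrative.

```python
from collections import deque

def service(queues, quantum):
    # One pass: take up to `quantum` messages from each queue in turn.
    processed = []
    for q in queues:
        for _ in range(min(quantum, len(q))):
            processed.append(q.popleft())
    return processed

queues = [deque(["a1", "a2", "a3"]), deque(["b1"])]
order = service(queues, quantum=2)
```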
20090113441 | Registering a resource that delegates commit voting - A computer system and storage medium that, in an embodiment, receive an allocation request for a resource and registers the resource as a non-voting participant if the resource desires to delegate commit voting to another resource. The registered resource is then prohibited from participating in an enclosing transactional context and instead is informed when the transaction completes. The resource is enlisted as a voting participant if the resource does not desire to delegate commit voting. In this way, when multiple resources are used in a transaction, a resource may be registered and receive notifications of transaction completion instead of being enlisted and voting on commit decisions. The result of a transaction in which a single resource takes responsibility for a number of other resources is that transaction completion avoids the two-phase commit protocol and the resulting performance degradation. | 04-30-2009 |
20090119672 | Delegation Metasystem for Composite Services - A delegation metasystem for composite services is described, where a composite service is a service which calls other services during its operation. In an embodiment, the composite service is defined using generic descriptions for any services (and their access control models) which may be called by the composite service during operation. At run time, these generic descriptions and potentially other factors, such as the user of the composite service, are used to select actual available services which may be called by the composite service and access rights for the selected services are delegated to the composite service. These access rights may subsequently be revoked when the composite service terminates. | 05-07-2009 |
20090119673 | Predicting and managing resource allocation according to service level agreements - Allocating computing resources comprises allocating an amount of a resource to an application program based on an established service level requirement for utilization of the resource by the application program, determining whether the application program's utilization of the resource exceeds a utilization threshold, and changing the allocated amount of the resource in response to a determination that the application program's utilization of the resource exceeds the utilization threshold. The utilization threshold is based on the established service level requirement and is different than the established service level requirement. Changing the allocation of the resource based on the utilization threshold allows allocating sufficient resources to the application program prior to a breach of a service level agreement for the application program. | 05-07-2009 |
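The key idea above — an alarm threshold set below the service level requirement itself, so capacity grows before the SLA is breached — can be sketched as follows. The headroom value and the 1.5x growth factor are invented for illustration.

```python
def new_allocation(allocated, utilization, sla_limit, headroom=0.1):
    # The utilization threshold is based on, but different from,
    # the established service level requirement.
    threshold = sla_limit - headroom
    if utilization > threshold:
        return allocated * 1.5     # grow ahead of an SLA breach
    return allocated

grown = new_allocation(allocated=100, utilization=0.85, sla_limit=0.90)
same = new_allocation(allocated=100, utilization=0.70, sla_limit=0.90)
```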
20090125910 | SYSTEM-GENERATED RESOURCE MANAGEMENT PROFILES - A method for computer control of collaborating devices enables automatic generation of resource management profiles to coordinate resource allocation within the collaborating device. The method includes utilization of a graphical user interface to select an initial resource management profile and instruct the device to automatically generate a resource profile. Timing is specified for creation of the automatically generated optimized resource profile. The optimized resource profile is developed from statistics maintained, collected, and interpreted about the demand for resources within each component of the collaborating device. An operator may elect to automatically invoke the most recently generated optimized profile after a specified period of collaborating device idleness or to invoke it upon an instruction from the operator. The optimized resource profile may be saved for future use or discarded. | 05-14-2009 |
20090125911 | RESOURCE MANAGEMENT PROFILES - A resource management graphical user interface for a computer-controlled printing system in a networked environment enables an operator to create, modify, and apply resource management profiles to coordinate resource allocation within the printing system. The user interface displays a current resource management profile, which includes printing system resource allocations associated with specific tasks. A resource profile list includes at least one profile name, corresponding to a task type. Profiles associated with the task type are presented and controls are provided to enable the operator to set allocations for component resource usage. The operator is also presented with operational options, including deleting a profile, approving a profile, applying a profile to a print job or series of print jobs, saving a new profile, replacing an existing profile, and canceling a profile modification. The user interface transmits instructions to apply a profile to a printing system for processing of print jobs. | 05-14-2009 |
20090133028 | SYSTEM AND METHOD FOR MANAGEMENT OF AN IOV ADAPTER THROUGH A VIRTUAL INTERMEDIARY IN A HYPERVISOR WITH FUNCTIONAL MANAGEMENT IN AN IOV MANAGEMENT PARTITION - A system and method which provide a mechanism for an I/O virtualization management partition (IMP) to control the shared functionality of an I/O virtualization (IOV) enabled I/O adapter (IOA) through a physical function (PF) of the IOA while the virtual functions (VFs) are assigned to client partitions for normal I/O operations directly. A hypervisor provides device-independent facilities to the code running in the IMP and client partitions. The IMP may include device-specific code without the hypervisor needing to sacrifice its size, robustness, and upgradeability. The hypervisor provides the virtual intermediary functionality for the sharing and control of the IOA's control functions. | 05-21-2009 |
20090133029 | METHODS AND SYSTEMS FOR TRANSPARENT STATEFUL PREEMPTION OF SOFTWARE SYSTEM - Methods and systems for preemption of software in a computing system that include receiving a preempt request for a process in execution using a set of resources; pausing the execution of the process; and releasing the resources to a shared pool. | 05-21-2009 |
20090133030 | SYSTEM FOR ON DEMAND TASK OPTIMIZATION - An apparatus and program product determine information indicative of a performance differential between operation of a computer with the standby resource activated and operation of the computer with the standby resource inactivated. The information is communicated to a user. The standby resource may be activated in response to the determination. | 05-21-2009 |
20090138881 | Prevention of Deadlock in a Distributed Computing Environment - A method for preventing deadlock in a distributed computing system includes the steps of: receiving as input a sorted set of containers defining a unique global sequence of containers for servicing process requests; populating at least one table based at least in part on off-line analysis of call graphs defining corresponding transactions for a given order of the containers in the sorted set; storing within each container at least a portion of the table; and allocating one or more threads in a given container according to at least a portion of the table stored within the given container. | 05-28-2009 |
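The sorted-container scheme above generalizes the classic lock-ordering discipline, which can be illustrated directly. The container names and the table-free structure below are my own simplification, not the patented table-driven thread allocator.

```python
import threading

# Hypothetical containers with one global sort order.
CONTAINER_ORDER = {"auth": 0, "orders": 1, "billing": 2}
locks = {name: threading.Lock() for name in CONTAINER_ORDER}

def run_transaction(containers, work):
    """Acquire the needed container locks in the global order, run the
    transaction body, then release. Because every transaction acquires in
    the same order, no cyclic wait (and hence no deadlock) can form."""
    ordered = sorted(containers, key=CONTAINER_ORDER.__getitem__)
    for name in ordered:
        locks[name].acquire()
    try:
        return work()
    finally:
        for name in reversed(ordered):
            locks[name].release()
```

A call such as `run_transaction(["billing", "auth"], body)` locks `auth` before `billing` regardless of the order in which the caller listed them.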
20090138882 | Prevention of Deadlock in a Distributed Computing Environment - A system for preventing deadlock in a distributed computing system includes a memory and at least one processor coupled to the memory. The processor is operative: to receive as input a sorted set of containers defining a unique global sequence of containers for servicing process requests; to populate at least one table based at least in part on off-line analysis of call graphs defining corresponding transactions for a given order of the containers in the sorted set; to store within each container at least a portion of the at least one table; and to allocate one or more threads in a given container according to at least a portion of the at least one table stored within the given container. | 05-28-2009 |
20090138883 | METHOD AND SYSTEM OF MANAGING RESOURCES FOR ON-DEMAND COMPUTING - A method and system of managing resources for on-demand computing is provided. The system can include one or more pools having resources, and a provisioning manager in communication with the one or more pools. The provisioning manager can receive a request for a resource from the requester and can obtain values for one or more categories associated with the resources. The values can be obtained for at least a portion of the resources. The one or more categories can be based on quantifiable properties associated with the resources. The provisioning manager can determine a priority score for each of the at least a portion of the resources. The provisioning manager can determine a resource from the at least a portion of the resources to be distributed to the requester, where the determination can be based at least in part on the priority score for the resource. | 05-28-2009 |
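A minimal sketch of the scoring step, assuming invented category names and weights (the abstract says only that the categories are based on quantifiable properties of the resources):

```python
def priority_score(resource, weights):
    """Weighted sum over the quantifiable categories of one resource."""
    return sum(weights[cat] * resource[cat] for cat in weights)

def pick_resource(resources, weights):
    """Distribute the candidate resource with the highest priority score."""
    return max(resources, key=lambda r: priority_score(r, weights))
```

For example, `pick_resource([{"cpu": 4, "mem": 8}, {"cpu": 8, "mem": 16}], {"cpu": 1.0, "mem": 0.5})` selects the larger machine under these (made-up) weights.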
20090138884 | STORAGE MANAGEMENT SYSTEM, A METHOD OF MONITORING PERFORMANCE AND A MANAGEMENT SERVER - A storage management system provides a capability of properly setting a performance monitoring threshold and monitoring a performance of a storage resource in the SAN environment with respect to the operation process being executed. The storage management system includes a management server, a storage device, and a storage network. The management server is arranged to have a performance information collecting unit for collecting the current performance value of a storage resource, a composition section determining unit for determining a composition section corresponding with a composition ratio of the operation processes, a threshold information storage unit for storing a performance monitoring threshold corresponding with the composition section with respect to one or more storage devices, and a performance determining unit for determining a performance of the storage resource based on the current performance value and the performance monitoring threshold. | 05-28-2009 |
20090138885 | Prevention of Deadlock in a Distributed Computing Environment - A method for preventing deadlock in a distributed computing system includes the steps of: receiving as input a sorted set of containers defining a unique global sequence of containers for servicing process requests; populating at least one table based at least in part on off-line analysis of call graphs defining corresponding transactions for a given order of the containers in the sorted set; storing within each container at least a portion of the table; and allocating one or more threads in a given container according to at least a portion of the table stored within the given container. | 05-28-2009 |
20090138886 | Prevention of Deadlock in a Distributed Computing Environment - A system for preventing deadlock in a distributed computing system includes a memory and at least one processor coupled to the memory. The processor is operative: to receive as input a sorted set of containers defining a unique global sequence of containers for servicing process requests; to populate at least one table based at least in part on off-line analysis of call graphs defining corresponding transactions for a given order of the containers in the sorted set; to store within each container at least a portion of the at least one table; and to allocate one or more threads in a given container according to at least a portion of the at least one table stored within the given container. | 05-28-2009 |
20090138887 | Virtual machine monitor and multiprocessor system - In order to provide an interface for acquiring physical position information of an I/O device on a virtual machine monitor having an exclusive allocation function for the I/O device, and to optimize allocation of resources to a virtual server by using the acquired physical position information, the virtual machine monitor includes an interface for allocating a resource in accordance with a given policy (a parameter determining which requests are given priority when distributing resources) for the I/O device, CPU number, and memory amount requested by a guest OS. Further, the virtual machine monitor includes an interface for suitably converting the physical position information of the resource allocated by the virtual machine monitor into a notification to the guest OS. | 05-28-2009 |
20090138888 | Generating Governing Metrics For Resource Provisioning - In a method of generating governing metrics, a high-level goal to be met in a provisioned system is identified. In addition, a low-level governing policy designed to facilitate achievement of the high-level goal is selected and properties relating to the selected low-level governing policy are identified. The identified properties are formulated to define governing metrics relevant to the selected low-level governing policy and the formulated governing metrics are outputted. The formulated governing metrics are configured to be used in at least one of evaluating and controlling resource provisioning in the provisioned system. | 05-28-2009 |
20090144741 | RESOURCE ALLOCATING METHOD, RESOURCE ALLOCATION PROGRAM, AND OPERATION MANAGING APPARATUS - An operation managing apparatus aggregates necessary-resource-amount information for each service so as to acquire necessary-resource-amount information for each BP, and compares the necessary resource amount for each BP against the resource amount available on each of the service executing apparatuses so as to retrieve service executing apparatuses capable of providing the resource amounts that each BP requires. When such service executing apparatuses are retrieved, the operation managing apparatus allocates a service to the retrieved service executing apparatuses; when they are not retrieved, the operation managing apparatus allocates the services to plural sets of the service executing apparatuses. | 06-04-2009 |
20090144742 | METHOD, SYSTEM AND COMPUTER PROGRAM TO OPTIMIZE DETERMINISTIC EVENT RECORD AND REPLAY - A method, system and computer-usable medium for managing task events during the scheduling period of a task executing on one of the CPUs of a multi-processor computer. Only events in specific portions of the scheduling period are logged, namely those in which a first shared resource access has been granted to the task; this portion of the scheduling period gathers all the non-deterministic events which cannot be replayed by simple task re-execution. Other independent non-deterministic events are still logged as usual when they occur outside the portion of the scheduling period for which a record has been created. This limits the number of logged events during a recording session of an application and the frequency of events to transmit from the production machine to the replay machine. | 06-04-2009 |
20090150893 | HARDWARE UTILIZATION-AWARE THREAD MANAGEMENT IN MULTITHREADED COMPUTER SYSTEMS - A device, system, and method are directed towards managing threads in a computer system with one or more processing units, each processing unit having a corresponding hardware resource. Threads are characterized based on their use or requirements for access to the hardware resource. The threads are distributed among the processing units in a configuration that leaves at least one processing unit with threads that have an aggregate zero or low usage of the hardware resource. Power may be reduced or turned off to the instances of the hardware resource that have zero or low usage. Distribution may be based on one or more of a number of specifications or factors, such as user power management specifications, power usage, performance, and other factors. | 06-11-2009 |
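The packing idea above, concentrating the users of a hardware unit so that other processing units carry zero usage and can power the unit down, can be sketched as follows. The FPU example, the tuple shapes, and the single-unit packing are assumptions made for illustration; the abstract covers arbitrary hardware resources and placement factors.

```python
def place_threads(threads, n_units):
    """threads: list of (name, uses_fpu) tuples.
    Returns a dict mapping unit index -> list of thread names, with all
    FPU-using threads packed onto unit 0 so units 1..n-1 can power down
    their FPU instance."""
    placement = {u: [] for u in range(n_units)}
    fpu_users = [name for name, uses in threads if uses]
    others = [name for name, uses in threads if not uses]
    for name in fpu_users:                     # concentrate FPU users
        placement[0].append(name)
    for i, name in enumerate(others):          # spread the rest round-robin
        placement[1 + i % (n_units - 1)].append(name)
    return placement
```

A real scheduler would also respect per-unit capacity and the other factors the abstract lists (affinity, load balancing, user power policies); this toy keeps only the aggregate-zero-usage invariant.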
20090150894 | Nonvolatile memory (NVM) based solid-state disk (SSD) system for scaling and quality of service (QoS) by parallelizing command execution - A method for scaling an SSD system which includes providing at least one storage interface and providing a flexible association between storage commands and a plurality of processing entities via the plurality of nonvolatile memory access channels. Each storage interface is associated with a plurality of nonvolatile memory access channels. | 06-11-2009 |
20090150895 | SYSTEMS, METHODS AND COMPUTER PROGRAM PRODUCTS FOR SUPPORTING TRANSFORMATION TO A SHARED ON-DEMAND INFRASTRUCTURE - Systems, methods and computer program products for supporting transformation to a shared on-demand infrastructure. Exemplary embodiments include a method including identifying a CPU resource type (or, in general, other sharable resource) to analyze, calculating a number of servers in scope, Ns, collecting current resource usage data for systems in the scope, wherein the current resource data is provided by systems and performance management tools, identifying a Period P, counting a number of peaks (Np) in the Period, excluding adjacent spikes to each of the number of peaks, calculating an average of CPU usage, Um, which is generally provided by the usage collection tools, defining an amplitude Am, defining a value for % Ks, in the range of 0.2-0.3 (value suggested) and applying transformation formulas to obtain a minimum size of a resource pool, a size of a target environment and a resource saving. | 06-11-2009 |
20090150896 | POWER CONTROL METHOD FOR VIRTUAL MACHINE AND VIRTUAL COMPUTER SYSTEM - Provided is a method of controlling a virtual computer system in which a physical computer includes a plurality of physical CPUs that is switchable between a sleep state and a normal state, and a virtualization control unit divides the physical computer into a plurality of logical partitions to run a guest OS in each of the logical partitions and controls allocation of resources of the physical computer to the logical partitions, causes the virtualization control unit to: receive an operation instruction for operating the logical partitions; and if the operation instruction is for deleting a virtual CPU from one of the logical partitions, delete this virtual CPU from a table for managing virtual CPU-physical CPU allocation and put, if the deleting leaves no virtual CPUs allocated to one of the physical CPUs that has been allocated the deleted virtual CPU, this one of the physical CPUs into the sleep state. | 06-11-2009 |
20090150897 | MANAGING OPERATION REQUESTS USING DIFFERENT RESOURCES - Provided are a system and program for managing operation requests using different resources. In one embodiment, a first queue is provided for operations which utilize a first resource of a first and second resource. A second queue is provided for operations which utilize the second resource. An operation is queued on the first queue until the first resource is acquired. The first resource is released if the second resource is not also acquired. The operation is queued on the second queue when the first resource is acquired but the second resource is not. In addition, the first resource is released until the operation acquires both the first resource and the second resource. | 06-11-2009 |
20090158289 | WORKFLOW EXECUTION PLANS THROUGH COMPLETION CONDITION CRITICAL PATH ANALYSIS - Optimizing workflow execution. A method includes identifying a completion condition. The completion condition is specified as part of the overall workflow. The method further includes identifying a number of activities that could be executed to satisfy the completion condition. One or more activities from the number of activities is ordered into an execution plan and assigned system resources based on an analysis of activities in the number of activities and the completion condition. | 06-18-2009 |
20090158290 | System and Method for Load-Balancing in Server Farms - A system and method for receiving a server request, determining whether one of a plurality of servers scheduled to receive the server request is available, wherein the availability of the one of the servers scheduled to receive the request is based on a first stored value and a second stored value, incrementing the second stored value by a predetermined amount when the one of the servers is unavailable and directing the server request to another one of the plurality of servers based on the first and second stored values. | 06-18-2009 |
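The abstract does not say what its two stored values represent, so the sketch below is entirely one possible reading: the first value as a capacity weight, the second as a penalty counter incremented on each failed attempt, with redirection based on the two together.

```python
def route(servers, request_id):
    """servers: list of mutable (capacity, penalty) tuples.
    Return the index of the server to receive this request."""
    idx = request_id % len(servers)        # scheduled target
    cap, penalty = servers[idx]
    if penalty >= cap:                     # scheduled server unavailable
        servers[idx] = (cap, penalty + 1)  # bump its penalty counter
        # redirect to the server in the best standing overall
        idx = min(range(len(servers)),
                  key=lambda i: servers[i][1] - servers[i][0])
    return idx
```

Under this interpretation a chronically failing server accumulates penalty and is skipped more often, which matches the increment-on-unavailability behavior the abstract describes.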
20090158291 | METHOD FOR ASSIGNING RESOURCE OF UNITED SYSTEM - A method of assigning a resource of a united system in which a plurality of single systems are complexly operated includes: determining a multi-user diversity order based on the quantity of users existing within the system; determining a cost function using the determined multi-user diversity order; and assigning a resource based on the determined cost function. Therefore, a state of each system and user requirements can be fully reflected and a resource can be efficiently managed within a united system in which several systems are complexly operated. | 06-18-2009 |
20090165010 | Method and system for optimizing utilization of resources - A method, application tool and computer program product for the optimal utilization of the resources in an organization. The organization has various processes. Each process includes an allocated number of resources. However, with the variation in the workload in a process, there may be under- or over-utilization of resources. Therefore, cross-utilization of resources across the different processes may result in the optimal utilization of resources in the organization. | 06-25-2009 |
20090165011 | RESOURCE MANAGEMENT METHOD, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING APPARATUS, AND PROGRAM - In an information processing system, a configuration information management apparatus stores an identifier of a resource of a management target apparatus and a resource address of the management target resource, in association with each other. The management target apparatus stores destination information and the identifier of the management target resource, in association with each other, the destination information used when the configuration information management apparatus receives event notification from the management target resource. The management target apparatus transmits the identifier of the resource and a current (the latest) resource address when transmitting the event notification of the management target resource to the configuration information managing apparatus. The configuration information managing apparatus receives the event notification, the identifier of the resource, and the current resource address and changes the stored resource address which is associated with the acquired identifier of the resource into the acquired current resource address. | 06-25-2009 |
20090172687 | MANAGEMENT OF COMPUTER EVENTS IN A COMPUTER ENVIRONMENT - The scope and impact of an event, such as a failure, are identified. A Containment Region is used to identify the resources affected by the event. It is also used to aggregate resource state for those resources. This information is then used to manage one or more aspects of a customer's environment. This management may include recovery from a failure. | 07-02-2009 |
20090172688 | MANAGING EXECUTION WITHIN A COMPUTING ENVIRONMENT - The projected effect of executing a proposed action on the computing environment is determined. Based on the projected effect, programmatic enforcement of whether the action is allowed to execute or not is provided. The action is selected based on the current status of the environment. | 07-02-2009 |
20090172689 | ADAPTIVE BUSINESS RESILIENCY COMPUTER SYSTEM FOR INFORMATION TECHNOLOGY ENVIRONMENTS - Programmatically adapting an Information Technology (IT) environment to changes associated with business applications of the IT environment. The programmatically adapting is performed in the context of the business application. The changes can reflect changes in the IT environment, changes to the business application, changes to the business environment and/or failures within the environment, as examples. | 07-02-2009 |
20090172690 | System and Method for supporting metered clients with manycore - In some embodiments, the invention involves partitioning resources of a manycore platform for simultaneous use by multiple clients, or adding/reducing capacity to a single client. Cores and resources are activated and assigned to a client environment by reprogramming the cores' route tables and source address decoders. Memory and I/O devices are partitioned and securely assigned to a core and/or a client environment. Instructions regarding allocation or reallocation of resources are received by an out-of-band processor having privileges to reprogram the chipsets and cores. Other embodiments are described and claimed. | 07-02-2009 |
20090172691 | STREAMING OPERATIONS FOR WORKFLOW PROCESS MODELS - A buffer may be configured to store a plurality of items, and to be accessed by one or more activities of an instance of a process model. A scheduler may be configured to schedule execution of each of a plurality of activities of the process model, and to determine an activation of an activity of the plurality of activities. The scheduler may include an activity manager configured to access an activity profile of the activity upon the determining of the activation, the activity profile including buffer access characteristics according to which the activity is designed to access the buffer. A process execution unit may be configured to execute the activity and may include a buffer access manager configured to access the buffer according to the buffer access characteristics of the activity profile, and to thereby facilitate an exchange of at least one item between the buffer and the activity. | 07-02-2009 |
20090172692 | Enterprise Resource Planning with Asynchronous Notifications of Background Processing Events - Methods, systems, and computer program products for operating an enterprise resource planning system. The method includes running a placeholder job in said enterprise resource planning system in response to a request from at least one client application for notification of at least one background processing event, wherein the placeholder job is executed in response to the at least one background processing event. | 07-02-2009 |
20090178046 | Methods and Apparatus for Resource Allocation in Partial Fault Tolerant Applications - Techniques are disclosed for allocation of resources in a distributed computing system. For example, a method for allocating a set of one or more components of an application to a set of one or more resource groups includes the following steps performed by a computer system. The set of one or more resource groups is ordered based on respective failure measures and resource capacities associated with the one or more resource groups. An importance value is assigned to each of the one or more components, wherein the importance value is associated with an effect of the component on an output of the application. The one or more components are assigned to the one or more resource groups based on the importance value of each component and the respective failure measures and resource capacities associated with the one or more resource groups, wherein components with higher importance values are assigned to resource groups with lower failure measures and higher resource capacities. The application may be a partial fault tolerant (PFT) application that comprises a set of one or more PFT application components. The set of one or more resource groups may comprise a heterogeneous set of resource groups (or clusters). | 07-09-2009 |
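The pairing rule above, the most important components go to the most reliable and best-provisioned groups, reduces to a sort-and-zip in the simplest case. The tuple shapes and one-component-per-group matching are simplifying assumptions.

```python
def assign(components, groups):
    """components: list of (name, importance).
    groups: list of (name, failure_measure, capacity).
    Map each component to a group, most important component to the group
    with the lowest failure measure and highest capacity."""
    comps = sorted(components, key=lambda c: -c[1])          # importance desc
    grps = sorted(groups, key=lambda g: (g[1], -g[2]))       # reliable first
    return {c[0]: g[0] for c, g in zip(comps, grps)}
```

For example, a critical `core` component lands on a low-failure `solid` cluster while a dispensable `ui` component takes the flakier one.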
20090178047 | DISTRIBUTED ONLINE OPTIMIZATION FOR LATENCY ASSIGNMENT AND SLICING - A system and method for latency assignment in a system having shared resources for performing jobs including computing a new resource price at each resource and sending the new resource price to a task controller in a task path that has at least one job running in the task path. A path price is computed for each task path of the task controller, if there is a critical time specified for the task. New deadlines are determined for the resources in a task path based on the resource price and the path price. The new deadlines are sent to the resources where the at least one job is running to improve system performance. | 07-09-2009 |
20090178048 | SYSTEM AND METHOD FOR COMPOSITION OF STREAM PROCESSING SERVICE ENVIRONMENTS - A system and method for composing a stream servicing environment which considers all stakeholders includes identifying service component requirements needed for processing a data stream, and determining available service elements for processing the stream. Feasible service environments are constructed based upon the available service elements and the service component requirements. Efficiency measures are computed for each feasible service environment considering all stakeholders. A best service environment is determined based upon the efficiency measures. | 07-09-2009 |
20090178049 | Multi-Element Processor Resource Sharing Among Logical Partitions - A method, apparatus, and program product to allocate processor resources to a plurality of logical partitions in a computing device including a plurality of processors, each processor having at least one general purpose processing element and a plurality of synergistic processing elements. General purpose processing element resources and synergistic processing element resources are separately allocated to each logical partition. The synergistic processing element resources to each logical partition are allocated such that each synergistic processing element is assigned to a logical partition exclusively. At least one virtual processor is allocated for each logical partition. The at least one virtual processor may be allocated virtual general purpose processing element resources and virtual synergistic processing element resources that correspond to the general purpose processing element resources and synergistic processing element resources allocated to the logical partition. | 07-09-2009 |
20090178050 | Control of Access to Services and/or Resources of a Data Processing System - In order to control access to resources of a data processing system, a priority code is determined for an access request to at least one resource. A comparison code for granting access to the at least one requested resource is determined concerning an alternative use of the resource. For a totality of resource requests to the data processing system, an extreme value for a sum is determined via products of a corresponding priority code and of a number of resource accesses which can be granted in each case, taking into account a maximum capability of a requested resource. For a resource request, it is checked whether the priority code and the comparison code show a predetermined mutual relation. Access is granted depending on the extreme value determined and on the result of the check. | 07-09-2009 |
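The extreme-value step above, maximizing the sum of priority codes times grantable accesses under a capacity cap, can be approximated by a greedy pass. The request tuples and the greedy strategy are assumptions; the abstract does not specify the optimization method.

```python
def grant(requests, capacity):
    """requests: list of (name, priority_code, wanted_accesses).
    Grant accesses highest-priority first until capacity runs out,
    maximizing the priority-weighted sum of granted accesses."""
    grants, score = {}, 0
    for name, prio, wanted in sorted(requests, key=lambda r: -r[1]):
        n = min(wanted, capacity)     # never exceed remaining capability
        grants[name] = n
        score += prio * n             # contribution to the weighted sum
        capacity -= n
    return score, grants
```

Because every request consumes one unit of capacity per access, granting in priority order is optimal for this simplified model; the patented check against a comparison code is omitted.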
20090178051 | METHOD FOR IMPLEMENTING DYNAMIC LIFETIME RELIABILITY EXTENSION FOR MICROPROCESSOR ARCHITECTURES - A method for implementing dynamic lifetime reliability extension for microprocessor architectures having a plurality of primary resources and a secondary resource pool of one or more secondary resources includes configuring a resource operational mode controller to selectively switch of the primary and secondary resources between an operational mode and a non-operational mode, wherein the non-operational mode corresponds to a lifetime extension process; configuring a resource mapper associated with the secondary resource pool and in communication with the resource operational mode controller to map a secondary resource placed into the operational mode to a corresponding primary resource placed into the non-operational mode; and configuring a transaction decoder to receive incoming transaction requests and direct the requests to one of a primary resource in the operational mode and a secondary resource in the operational mode, the secondary resource mapped to an associated primary resource placed in the non-operational mode. | 07-09-2009 |
20090187915 | SCHEDULING THREADS ON PROCESSORS - A device, system, and method are directed towards managing threads and components in computer system with one or more processing units. A processor group has an associated hierarchical structure containing nodes that may correspond to processing units, hardware components, or abstractions. The processor group hierarchy may be used to assign one or more threads to one or more processing units, by traversing the hierarchy based on various factors. The factor may include load balancing, affinity, sharing of components, loads, capacities, or other characteristics of components or threads. A processor group hierarchy may be used in conjunction with a designated processor set. | 07-23-2009 |
20090193425 | METHOD AND SYSTEM OF MANAGING RESOURCE LEASE DURATION FOR ON-DEMAND COMPUTING - A method and system of managing resource lease duration for on-demand computing is provided. The system can include one or more resources having a metric capturing tool; and a provisioning manager in communication with the one or more resources. The provisioning manager can receive a request for at least one resource from the requester. The provisioning manager can provision the at least one resource from the one or more resources. The metric capturing tool can communicate one or more metrics associated with performance of the at least one resource to the provisioning manager. The provisioning manager can determine a lease modifier based at least in part on the one or more metrics. The provisioning manager can adjust a lease duration for the at least one resource based at least in part on the lease modifier. | 07-30-2009 |
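A toy version of the lease-modifier step described above; the utilization metric, the cutoffs, and the scaling factors are all invented, since the abstract only says the modifier is derived from captured performance metrics.

```python
def adjust_lease(lease_minutes, avg_utilization, extend=1.5, shrink=0.5):
    """Lengthen the lease of a busy resource, shorten that of an idle one."""
    if avg_utilization >= 0.7:
        return lease_minutes * extend   # heavily used: keep it longer
    if avg_utilization <= 0.2:
        return lease_minutes * shrink   # mostly idle: reclaim it sooner
    return lease_minutes                # moderate use: leave the lease alone
```

The same shape works for any metric the capturing tool reports, as long as it maps to a scalar the provisioning manager can threshold.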
20090193426 | SYSTEM AND METHOD FOR DESCRIBING APPLICATIONS FOR MANAGEABILITY AND EFFICIENT SCALE-UP DEPLOYMENT - Systems, methods and computer storage media for operating a scalable computing platform are provided. A service description describing a requested service is received. Upon receiving the service description a determination of the required resources and the available resources is made. An instance description is produced. The resources required to sustain the deployment of the service are mapped to the available resources of the computing platform so the service may be deployed. The instance description is amended with each deployment of the service to allow for sustained deployment of the service. | 07-30-2009 |
20090193427 | MANAGING PARALLEL DATA PROCESSING JOBS IN GRID ENVIRONMENTS - Method, system, and computer program product for managing parallel data processing jobs in grid environments are provided. A request to deploy a parallel data processing job in a grid environment is received. A plurality of resource nodes in the grid environment are dynamically allocated to the parallel data processing job. A configuration file is automatically generated for the parallel data processing job based on the allocated resource nodes. The parallel data processing job is then executed in the grid environment using the generated configuration file. | 07-30-2009 |
20090199192 | Resource scheduling apparatus and method - Embodiments of the invention are concerned with allocating resources to tasks and have particular application to situations where the availability of resources and the tasks to be performed change dynamically and the resources are mobile. | 08-06-2009 |
20090199193 | SYSTEM AND METHOD FOR MANAGING A HYBRID COMPUTE ENVIRONMENT - Disclosed are systems, hybrid compute environments, methods and computer-readable media for dynamically provisioning nodes for a workload. In the hybrid compute environment, each node communicates with a first resource manager associated with the first operating system and a second resource manager associated with a second operating system. The method includes receiving an instruction to provision at least one node in the hybrid compute environment from the first operating system to the second operating system, after provisioning the second operating system, polling at least one signal from the resource manager associated with the at least one node, processing at least one signal from the second resource manager associated with the at least one node and consuming resources associated with the at least one node having the second operating system provisioned thereon. | 08-06-2009 |
20090199194 | Mechanism to Prevent Illegal Access to Task Address Space by Unauthorized Tasks - A method and data processing system for tracking global shared memory (GSM) operations to and from a local node configured with a host fabric interface (HFI) coupled to a network fabric. During task/job initialization, the system OS assigns HFI window(s) to handle the GSM packet generation and GSM packet receipt and processing for each local task. HFI processing logic automatically tags each GSM packet generated by the HFI window with a global job identifier (ID) of the job to which the local task is affiliated. The job ID is embedded within each GSM packet placed on the network fabric. On receipt of a GSM packet from the network fabric, the HFI logic retrieves the embedded job ID and compares the embedded job ID with the ID within the HFI window(s). GSM packets are forwarded to an HFI window only when the embedded job ID matches the HFI window's job ID. | 08-06-2009 |
20090199195 | Generating and Issuing Global Shared Memory Operations Via a Send FIFO - A method for issuing global shared memory (GSM) operations from an originating task on a first node coupled to a network fabric of a distributed network via a host fabric interface (HFI). The originating task generates a GSM command within an effective address (EA) space. The task then places the GSM command within a send FIFO. The send FIFO is a portion of real memory having real addresses (RA) that are memory mapped to EAs of a globally executing job. The originating task maintains a local EA-to-RA mapping of only a portion of the real address space of the globally executing job. The task enables the HFI to retrieve the GSM command from the send FIFO into an HFI window allocated to the originating task. The HFI window generates a corresponding GSM packet containing GSM operations and/or data, and the HFI window issues the GSM packet to the network fabric. | 08-06-2009 |
20090199196 | AUTOMATIC BASELINING OF RESOURCE CONSUMPTION FOR TRANSACTIONS - An application monitoring system determines the health of one or more resources used to process a transaction, business application, or other computer process. Performance data is generated in response to monitoring application execution and processed to determine an actual value and a baseline value for resource usage data. Resource usage baseline data may be determined from previous resource usage data associated with a resource and particular transaction (a resource-transaction pair). The baseline values are compared to actual values to determine a deviation for the actual value. Deviation information for the time series data can be reported through an interface or some other manner. | 08-06-2009 |
20090199197 | Wake-and-Go Mechanism with Dynamic Allocation in Hardware Private Array - A wake-and-go mechanism is provided for a data processing system. When a thread is waiting for an event, rather than performing a series of get-and-compare sequences, the thread updates a wake-and-go array with a target address associated with the event. The wake-and-go mechanism may save the state of the thread in a hardware private array. The hardware private array may comprise a plurality of memory cells embodied within the processor or pervasive logic associated with the bus, for example. Alternatively, the hardware private array may be embodied within logic associated with the wake-and-go storage array. | 08-06-2009 |
20090199198 | MULTINODE SERVER SYSTEM, LOAD DISTRIBUTION METHOD, RESOURCE MANAGEMENT SERVER, AND PROGRAM PRODUCT - A multinode server system including application execution means. The application execution means includes several servers mutually connected, each of which processes one mesh obtained by dividing a virtual space. The virtual space is displayed as the result of processing of each mesh by the several servers. Resource management means detects load states of the servers, and changes allocation of the servers to process the meshes in accordance with the load states. Network means allow several clients to share the virtual space via a network. The servers processing the meshes are changed while giving priority to an adjacent mesh beyond a server border in response to the load states. | 08-06-2009 |
20090204971 | AUTOMATED ACCESS POLICY TRANSLATION - The use of one resource access policy to populate a second resource access policy. One or more fields of the first resource access policy are each to be used to populate corresponding one or more fields of the second resource access policy. After identifying the field(s) of the first resource access policy, and identifying their corresponding field of the second resource access policy, the information from the source fields of the first resource access policy is then used to populate the destination fields of the second resource access policy. This may be done in an automated fashion thereby allowing for at least the possibility of the transition from one type of resource access security to another. | 08-13-2009 |
20090204972 | AUTHENTICATING A PROCESSING SYSTEM ACCESSING A RESOURCE - Provided are a method, system, and article of manufacture for authenticating a processing system accessing a resource. An association of processing system identifiers with resources, including first and second resources, is maintained. A request from a requesting processing system in a host is received for use of a first resource that provides access to a second resource, wherein the request is generated by processing system software and wherein the request further includes a submitted processing system identifier included in the request by host hardware in the host. A determination is made as to whether the submitted processing system identifier is one of the processing system identifiers associated with the first and second resources. The requesting processing system is provided access to the first resource that the processing system uses to access the second resource. | 08-13-2009 |
20090210880 | SYSTEMS AND METHODS FOR MANAGING SEMANTIC LOCKS - In one embodiment, a system for managing semantic locks and semantic lock requests for a resource is provided. Access to the resource is controlled such that compatible lock requests can access the resource and incompatible lock requests are queued. | 08-20-2009 |
20090217279 | Method and Device for Controlling a Computer System - A method and device for controlling a computer system having at least two execution units, wherein a switchover takes place between at least two operating modes, a first operating mode corresponding to a compare mode and a second operating mode corresponding to a performance mode, wherein at least one set of run-time objects is defined, and a control program is provided, in particular a scheduler, which assigns resources of the computer system to the run-time objects as a function of an item of information regarding the operating mode. | 08-27-2009 |
20090217280 | Shared-Resource Time Partitioning in a Multi-Core System - An improvement to computing systems is introduced that allows a hardware controller to be configured to time partition a shared system resource among multiple processing elements, according to one embodiment. For example, a memory controller may partition shared memory and may include processor-accessible registers for configuring and storing a rate of resource budget replenishment (e.g. size of a repeating arbitration window), a time budget allocated among each entity that shares the resource, and a selection of a hard or soft partitioning policy (i.e. whether to utilize slack bandwidth). An additional feature that may be incorporated in a main-memory-access time-partitioning application is an accounting policy to ensure that cache write-backs prompted by snoop transactions are charged to the data requester rather than to the responder. Additionally, an arbiter may prioritize requests from particular requesting entities. | 08-27-2009 |
20090217281 | Adaptable Redundant Bit Steering for DRAM Memory Failures - A method, computer program product and computer system for assigning computing resources in a computer system to solve multiple problems where tolerances to the problems are countable and have pre-set thresholds, and solutions to the problems share resources exclusively. The method, computer program product and system include counting the tolerances using at least one counter, assigning resources to solve a problem if the tolerance to the problem is higher than a first pre-set threshold, and reassigning resources to solve a second problem if the tolerance to the second problem is higher than a second pre-set threshold. The method, computer program product and system can also adopt an alternative solution that does not share resources exclusively with a current solution to solve the problems. | 08-27-2009 |
20090217282 | PREDICTING CPU AVAILABILITY FOR SHORT TO MEDIUM TIME FRAMES ON TIME SHARED SYSTEMS - A computer-implemented CPU utilization prediction technique is provided. CPU utilization prediction is described in continuous time as an auto-regressive process of the first order. The technique uses the inherent autocorrelation between successive CPU measurements. A specific auto-regression equation for predicting CPU utilization is provided. CPU utilization prediction is used in a computer cluster environment. In an implementation, CPU utilization percentage values are used by a scheduler service to manage workload or the distribution of requests over a vast number of CPUs. | 08-27-2009 |
20090217283 | SYSTEM UTILIZATION THROUGH DEDICATED UNCAPPED PARTITIONS - Improving system resource utilization in a data processing system is provided. A determination is made as to whether there is at least one ceded virtual processor in a plurality of virtual processors in a shared resource pool. Responsive to existence of the at least one ceded virtual processor, a determination is made as to whether there is at least one dedicated logical partition configured for a hybrid mode. Responsive to identifying at least one hybrid configured dedicated logical partition, a determination is made as to whether the at least one hybrid configured dedicated logical partition requires additional virtual processor cycles. If the at least one hybrid configured dedicated logical partition requires additional virtual processor cycles, the at least one ceded virtual processor is deallocated from the plurality of virtual processors and allocated to a surrogate resource pool for use by the at least one hybrid configured dedicated logical partition. | 08-27-2009 |
20090217284 | PASSING INITIATIVE IN A MULTITASKING MULTIPROCESSOR ENVIRONMENT - A computer program product for passing initiative in a multitasking multiprocessor environment includes a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method. The method includes writing a request to process a resource of the environment to an associated resource control block, setting a resource flag in a central bit vector, the resource flag indicating that a request for processing has been received for the resource, and setting a target processor initiative flag in the environment, the target processor initiative flag indicating a pass of initiative to a target processor responsible for the resource. | 08-27-2009 |
20090217285 | INFORMATION PROCESSING SYSTEM AND COMPUTER CONTROL METHOD - A first program obtains calculation resource information for determining computer calculation resource to be used by one of a plurality of second programs, one that is about to be executed by the computer, and releases, based on the calculation resource information obtained, a part of the computer calculation resource currently used by the first program. A second program is executed, using the released computer calculation resource. An information processing system comprises a parallel execution condition information obtaining unit for obtaining parallel execution condition information indicating a condition of one of the plurality of second programs, one to be executed in parallel to the first program, the condition being set according to the first program, and an execution restricting unit for restricting execution of a part or all of the plurality of second programs, based on the parallel execution condition information. | 08-27-2009 |
20090222832 | SYSTEM AND METHOD OF ENABLING RESOURCES WITHIN AN INFORMATION HANDLING SYSTEM - A system and method of enabling resources within an information handling system is disclosed. In one form, an information handling system can include an event detection module operable to detect user initiated events and non-user initiated events. The information handling system can also include a resource allocation module coupled to the event detection module. In one form, the resource allocation module can be operable to map a first detected event to a first operating state of a first processing system. The information handling system can also include a second processing system responsive to the resource allocation module and operable to access a shared resource of the first processing system. The resource allocation module can be configured to initiate an outputting of information intended to be output by the second processing system using a shared resource of the first processing system. | 09-03-2009 |
20090222833 | CODELESS PROVISIONING SYNC RULES - Managing resources. A computing environment may include a resource manager. The resource manager includes programmatic code for managing resources. Expected rule entries are added to an expected rules list. Each of the expected rule entries includes: an indicator used to identify a synchronization rule, a definition of flow type, a specification of an object type in the resource manager to which the synchronization rule applies, a specification of a downstream resource system, a specification of an object type in the downstream resource system to which the synchronization rule applies, a specification of relationship criteria including one or more conditions for linking objects in the resource manager and the downstream resource system, and a specification of attribute flow information. Objects in downstream resource systems can be synchronized with objects in the resource manager based on the expected rule entries in the expected rules list. | 09-03-2009 |
20090222834 | CODELESS PROVISIONING - Managing resources. A resource manager includes programmatic code for managing resources in the computing environment. Resources available from resource systems within the computing environment are managed. Methods may include receiving user input indicating one or more of that a new entity should be added to the resource manager, that an entity represented by an entity object of the resource manager should have permissions removed at the resource manager, or that an entity represented by an entity object of the resource manager should have permissions added at the resource manager. In response to receiving user input, events may be generated and objects created or removed from the resource manager or from downstream resource systems. The events may specify workflows that should be executed to perform synchronization between objects at the resource manager and objects at a downstream resource system by adding or changing rules in an expected rules list. | 09-03-2009 |
20090222835 | Operating System for a Chip Card Comprising a Multi-Tasking Kernel - The invention relates to a method for operating a chip card (C), a microprocessor for being inserted into the chip card (C) and a computer program product, as well as a method for manufacturing and/or for maintaining a chip card (C) which is operated with the help of a method described above. Here a central multi-tasking kernel (MTK) is provided, which controls the entire operation of the chip card (C), so that a plurality of application programs (A) can be active on the chip card (C) at the same time, an application program (A) also being able to realize security-related functions for the chip card (C). | 09-03-2009 |
20090222836 | System and Method for Implementing a Management Component that Exposes Attributes - Software for providing a management interface comprises a descriptor file comprising at least one type for at least one resource and further comprising at least one attribute for each type. A management component associated with one of the resources is described by at least one of the types. The management component is operable to provide a management interface exposing at least one of the attributes associated with each of the one or more types describing the resource. | 09-03-2009 |
20090235265 | METHOD AND SYSTEM FOR COST AVOIDANCE IN VIRTUALIZED COMPUTING ENVIRONMENTS - A method includes monitoring a utilization amount of resources within logical partitions (LPARs) of a plurality of servers and identifying a resource-strained server of the plurality of servers, wherein the resource-strained server includes a plurality of LPARs. Additionally, the method includes determining a migration of one or more LPARs of the plurality of LPARs of the resource-strained server and migrating the one or more LPARs of the resource-strained server to another server of the plurality of servers based on the determining to avoid an activation of capacity upgrade on demand (CUoD). | 09-17-2009 |
20090235266 | Operating System and Augmenting Operating System and Method for Same - A method for determining status of system resources in a computer system includes loading a first operating system into a first memory, wherein the first operating system discovers system resources and reserves a number of the system resources for use by an augmenting operating system, loading the augmenting operating system into a second memory reserved for the augmenting operating system by the first operating system, accessing the first memory from the augmenting operating system and obtaining data, running a process on the augmenting operating system to perform a computation using the data obtained from the first memory, and outputting the results of the computation using the system resources reserved for the augmenting operating system. | 09-17-2009 |
20090235267 | CONSOLIDATED DISPLAY OF RESOURCE PERFORMANCE TRENDS - A consolidated representation of performance trends for a plurality of resources in a data processing system is generated. Recent performance measurement data for the plurality of resources is retrieved along with historical performance measurement data for the plurality of resources. For each resource, an associated performance trend is determined based on an analysis of the recent performance measurement data and the historical performance measurement data. A single consolidated graphical representation of the plurality of resources is generated based on the associated performance trends. Each resource in the plurality of resources may have a separate representation within the single consolidated graphical representation positioned within the single consolidated graphical representation based on a recent performance trend and an associated historical performance trend. The single consolidated graphical representation may be output for use by a user to identify areas of the data processing system requiring the user's attention. | 09-17-2009 |
20090241121 | Device, Method and Computer Program Product for Monitoring Collaborative Tasks - A method for controlling collaborative tasks, the method includes: receiving a request to initiate a collaborative task that is associated with an assignment; and responding to the request in accordance with an assignment resource utilization policy. | 09-24-2009 |
20090241122 | SELECTING A NUMBER OF PROCESSING RESOURCES TO RUN AN APPLICATION EFFECTIVELY WHILE SAVING POWER - Selecting a number of processors to run an application in order to save power is performed. A number of code segments are selected from an application. Each of the code segments are executed using two or more of a plurality of processing resource combinations. Each of the code segments are scored with a performance value. The performance value indicates a performance of each code segment using each of the two or more processing resource combinations. A selection is made of one of the two or more processing resource combinations based on an associated performance value and a number of processing resources used to execute the code segment. The application is then executed using the selected processing resource combination. | 09-24-2009 |
20090241123 | METHOD, APPARATUS, AND COMPUTER PROGRAM PRODUCT FOR SCHEDULING WORK IN A STREAM-ORIENTED COMPUTER SYSTEM WITH CONFIGURABLE NETWORKS - A method, apparatus, and computer program product for scheduling stream-based applications in a distributed computer system with configurable networks are provided. The method includes choosing, at a highest temporal level, jobs that will run, an optimal template alternative for the jobs that will run, network topology, and candidate processing nodes for processing elements of the optimal template alternative for each running job to maximize importance of work performed by the system. The method further includes making, at a medium temporal level, fractional allocations and re-allocations of the candidate processing elements to the processing nodes in the system to react to changing importance of the work. The method also includes revising, at a lowest temporal level, the fractional allocations and re-allocations on a continual basis to react to burstiness of the work, and to differences between projected and real progress of the work. | 09-24-2009 |
20090249350 | Resource Allocation Through Negotiation - Improved resource allocation methods which use negotiation are described. In an embodiment, a request for access to a resource by a service user is received and an available access slot is allocated, where the slot may be a time or a position in a queue. This allocated slot may or may not meet the service user's requirements; if it does not, an access time which does meet the requirements but is already allocated to another service user is identified. A message is sent to the user device associated with the other service user requesting a change in allocated access time. If the change is accepted, the allocated times are swapped between the two service users. | 10-01-2009 |
20090249351 | Round-Robin Apparatus and Instruction Dispatch Scheduler Employing Same For Use In Multithreading Microprocessor - An apparatus for selecting one of N requestors of a shared resource in a round-robin fashion is disclosed. One or more of the N requestors may be disabled from being selected in a selection cycle. The apparatus includes a first input that receives a first value specifying which of the N requestors was last selected. A second input receives a second value specifying which of the N requestors is enabled to be selected. A barrel incrementer, coupled to receive the first and second inputs, 1-bit left-rotatively increments the second value by the first value to generate a sum. Combinational logic, coupled to the barrel incrementer, generates a third value specifying which of the N requestors is selected next. | 10-01-2009 |
20090254916 | ALLOCATING RESOURCES FOR PARALLEL EXECUTION OF QUERY PLANS - Computing resources can be assigned to sub-plans within a query plan to effect parallel execution of the query plan. For example, computing resources in a grid can be represented by nodes, and a shortest path technique can be applied to allocate machines to the sub-plans. Computing resources can be provisionally allocated as the query plan is divided into query plan segments containing one or more sub-plans. Based on provisional allocations to the segments, the computing resources can then be allocated to the sub-plans within respective segments. Multiprocessor computing resources can be supported. The techniques can account for data locality. Both pipelined and partitioned parallelism can be addressed. Described techniques can be particularly suited for efficient execution of bushy query plans in a grid environment. Parallel processing will reduce the overall response time of the query. | 10-08-2009 |
20090254917 | SYSTEM AND METHOD FOR IMPROVED I/O NODE CONTROL IN COMPUTER SYSTEM - A computer system is provided with a file system storing data; a plurality of I/O nodes which are adapted to access the file system; a compute node adapted to execute a job and to issue an I/O request when requiring an I/O operation; and a job server for job scheduling which dynamically allocates an I/O resource of the I/O nodes to a job without stopping execution of the job. The job server includes an I/O node scheduler adapted to, when it is not able to fully secure a desired amount of the I/O resource of the I/O nodes required by the job in starting the job, secure a part of the required amount of the I/O resource of the I/O nodes, and to allocate the secured part of the I/O resource to the job. | 10-08-2009 |
20090260014 | APPARATUS, AND ASSOCIATED METHOD, FOR ALLOCATING PROCESSING AMONGST DATA CENTERS - Apparatus, and an associated method, for facilitating optimization of data center performance. An optimization decision engine is provided with information regarding energy credentials of the power generative facilities that power the respective data centers. The energy credential information, or other energy indicia, is used in an optimization decision. Responsive to the optimization decision, processing allocation is made. | 10-15-2009 |
20090260015 | SOFTWARE PIPELINING - A software pipelining method for generating a schedule for executing a plurality of instructions on a processor, the plurality of instructions involving one or more variables, the processor having one or more physical registers, the method comprising the steps of scheduling each of the plurality of instructions, determining whether there is a variable for which there is less than a threshold number of physical registers to which that variable may be allocated, and unscheduling a currently scheduled instruction when there is a variable for which there is less than the threshold number of physical registers to which that variable may be allocated. | 10-15-2009 |
20090271797 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND MEDIUM HAVING INFORMATION PROCESSING PROGRAM STORED THEREON - An information processing apparatus including at least one first processing unit that manages a resource and at least one second processing unit that accesses the resource, wherein the second processing unit stores a table in which an identifier identifying the resource is associated with the resource, and when accessing the resource, refers to the table and requests the first processing unit to allocate the identifier associated with the resource to the resource. | 10-29-2009 |
20090276783 | Expansion and Contraction of Logical Partitions on Virtualized Hardware - A method, apparatus, and program product manage a plurality of resources of at least one logically partitioned computing system of the type that includes a plurality of logical partitions managed by a partition manager with an application level administrative console resident in a logical partition of the computing system. Each logical partition is allocated at least a portion of the plurality of resources. A user request to adjust the allocation of at least a portion of the resources using the administrative console is received. The resources of the logically partitioned computing system to adjust in order to satisfy the user request are determined using the application level administrative console. The application level administrative console accesses the partition manager through a resource allocation interface to adjust the determined resources of the logically partitioned computing system in order to satisfy the user request. | 11-05-2009 |
20090276784 | RESOURCE MANAGEMENT METHOD - There is provided a method of managing a resource within a computer system using a configuration wrapper, the method comprising: providing a configuration file comprising configuration data for the resource; generating metadata related to the configuration data; and automatically processing the metadata to produce a configuration wrapper for the resource. The configuration wrapper may be a Java object with management attributes and methods. | 11-05-2009 |
20090276785 | System and Method for Managing a Storage Array - Systems and methods for managing a storage array are disclosed. A method may include segmenting each of a plurality of physical storage resources into a first storage area and a second storage area. The method may also include activating a first logical unit including each first storage area of the plurality of physical storage resources. The method may additionally include placing at least one designated physical storage resource of the plurality of physical storage resources in a powersave mode. The method may further include activating a second logical unit including the second storage areas of some of the plurality of physical storage resources but not the at least one designated physical storage resource. Moreover, the method may include storing data associated with a write operation intended for the at least one designated physical storage resource to the second logical unit. | 11-05-2009 |
20090276786 | Resource Data Management - In an illustrative embodiment, a data processing system for resource data management is provided. The data processing system comprises a set of data structures defining resource relationships and locations for a set of resources to form defined resource relationships and defined locations for the set of resources, and a receiver capable of obtaining replaceable unit data and obtaining characterization data for a current resource in the set of resources to form obtained replaceable unit data and obtained characterization data for the current resource, wherein the obtained replaceable unit data is obtained from a secure device and the obtained characterization data is obtained from an unsecure device. The data processing system further comprises a writer capable of merging the obtained replaceable unit data for the current resource with the obtained characterization data for the current resource for each resource of the set of resources to form a set of data files, wherein each data file corresponds to a resource in the set of resources. | 11-05-2009 |
20090282415 | Method and Apparatus for Negotiation Management in Data Processing Systems - Techniques are disclosed for optimizing schedules used in implementing plans for performing tasks in data processing systems. For example, an automated method of negotiating for resources in a data processing system, wherein the data processing system comprises multiple sites, comprises a negotiation management component of a computer system at a given one of the sites performing the following steps. One or more tasks from at least one source of one or more plans are obtained. Each plan is annotated with one or more needed resources and one or more potential resource providers at one or more sites in the data processing system. An optimized resource negotiation schedule based on the one or more obtained tasks is computed. The schedule comprises an order in which resources are negotiated. In accordance with the optimized resource negotiation schedule, a request for each needed resource is sent to the one or more potential resource providers such that a negotiation process is performed between the negotiation management component and at least one of the potential resource providers. | 11-12-2009 |
20090282416 | VITAL PRODUCT DATA COLLECTION DURING PRE-STANDBY AND SYSTEM INITIAL PROGRAM LOAD - A system for selectively recollecting vital product data during an initial program load at data processing system power on. In response to receiving an input to power on a data processing system, a resource location code array table is accessed within a set of selected tables for the data processing system based on machine type. The selected set of tables is located in firmware within a service processor. An entry for a resource in the resource location code array table is read to determine whether the entry includes a no recollect tag. Then, in response to determining that the entry for the resource in the resource location code array table does include a no recollect tag, vital product data for the resource is not recollected during the initial program load. | 11-12-2009 |
20090282417 | WORKFLOW EXECUTING APPARATUS, WORKFLOW EXECUTING METHOD, AND STORAGE MEDIUM - A workflow executing method to execute a workflow of a plurality of steps according to a workflow definition. The method includes obtaining setting information of a user instructing execution of the workflow, which is setting information related to the execution of the workflow. The method also includes modifying the workflow definition corresponding to the workflow of which the user instructed execution, based on the obtained setting information. The method continues by dividing the modified workflow definition for each workflow executing apparatus that executes the workflow definition. The method also includes executing at least one of the divided workflow definitions and sending at least one divided workflow definition to another workflow executing apparatus that executes processing based on the workflow definition, whereby workflow definitions are modified to match user settings, and the modified workflow definitions are divided to match apparatuses executing the workflow definition. | 11-12-2009 |
20090288091 | Method and System Integrating Task Assignment and Resources Scheduling - A method and a system for integrating and solving simultaneously both task assignment and resources scheduling decision making problems, thereby providing an overall feasible and optimal solution. The method and the system may be used for integrated airline scheduling in which case the task assignment is fleet assignment, and the resources scheduling are aircraft routing with maintenance (maintenance routing) and crew scheduling (or crew pairing only). In a preferred embodiment, Benders decomposition is employed with Pareto-optimal cuts, where the Benders subproblem solution is sped-up without influencing Pareto-optimal cut generation. The cost savings achieved in comparison with traditional methods are estimated, so that the user can terminate the solution process when these cost savings are satisfactory. Important properties of the solution are stored, enabling the user to efficiently re-solve the problem even in cases where it differs from the initial one. | 11-19-2009 |
20090288092 | Systems and Methods for Improving the Reliability of a Multi-Core Processor - Systems and methods for improving the reliability of multiprocessors by reducing the aging of processor cores that have lower performance. One embodiment comprises a method implemented in a multiprocessor system having a plurality of processor cores. The method includes determining performance levels for each of the processor cores and determining an allocation of the tasks to the processor cores that substantially minimizes aging of a lowest-performing one of the operating processor cores. The allocation may be based on task priority, task weight, heat generated, or combinations of these factors. The method may also include identifying processor cores whose performance levels are below a threshold level and shutting down these processor cores. If the number of processor cores that are still active is less than a threshold number, the multiprocessor system may be shut down, or a warning may be provided to a user. | 11-19-2009 |
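As a rough illustration of aging-aware task placement of the kind the entry above describes, the sketch below uses a greedy heuristic with hypothetical names (task weights and per-core performance scores); it is an assumption for illustration, not the patented method:

```python
def allocate(tasks, core_perf):
    """Greedy sketch: assign heavier tasks to the cores with the most spare
    performance, so the weakest cores carry the least load and age slowest."""
    cores = list(core_perf)
    load = {c: 0.0 for c in cores}
    assignment = {}
    # Heaviest tasks first, each to the core with the most spare capacity.
    for task, weight in sorted(tasks.items(), key=lambda kv: kv[1], reverse=True):
        best = max(cores, key=lambda c: core_perf[c] - load[c])
        assignment[task] = best
        load[best] += weight
    return assignment

# Cores below a performance threshold would be shut down before allocation.
print(allocate({"render": 8.0, "log": 2.0}, {"fast": 10.0, "slow": 4.0}))
# → {'render': 'fast', 'log': 'slow'}
```

The abstract also weighs task priority and heat; this sketch shows only the load-versus-performance dimension.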
20090288093 | MECHANISM TO BUILD DYNAMIC LOCATIONS TO REDUCE BRITTLENESS IN A TEAM ENVIRONMENT - Mechanisms to build dynamic locations to reduce brittleness in a team environment are provided. A project includes resources; each resource is assigned a key. Each key is mapped to a current location for its corresponding resource. The keys and locations are maintained in an index. Locations for the resources can change as desired throughout the lifecycle of the project, and as changes occur the index is updated. When references are made within the project to the resources, the references are translated to the keys, if necessary. The keys are then used for accessing the index and dynamically acquiring the current locations for the resources at the time the references are made. | 11-19-2009 |
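The key-to-location indirection in the entry above can be sketched as follows (hypothetical names and paths; a minimal illustration, not the patented mechanism):

```python
class ResourceIndex:
    """Maps stable resource keys to current locations, so project
    references stay valid when a resource moves."""

    def __init__(self):
        self._locations = {}

    def register(self, key, location):
        self._locations[key] = location

    def move(self, key, new_location):
        # Only the index changes; every reference that holds the key
        # resolves to the new location on its next lookup.
        self._locations[key] = new_location

    def resolve(self, key):
        return self._locations[key]

index = ResourceIndex()
index.register("build-script", "/srv/projects/v1/build.sh")
index.move("build-script", "/srv/projects/v2/build.sh")
print(index.resolve("build-script"))  # /srv/projects/v2/build.sh
```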
20090288094 | Resource Management on a Computer System Utilizing Hardware and Environmental Factors - A method for resource management on a computer system utilizing hardware and environmental information. A caller interacts with an application program interface to handle information requests with a persistent data storage device to combine information involving hardware resource information, environmental data, and other system information, including historical, present, and predicted values. Application execution decisions may then be made regarding hardware for the calling entity. The method may be implemented as a computer process. | 11-19-2009 |
20090293062 | Method for Dynamically Freeing Computer Resources - A method dynamically frees computer resources in a multitasking and windowing environment by activating a GUI widget to initiate pausing of an application, pausing CPU processing of the application code, maintaining data of the application in main memory, storing state information for the application code and a process of the application in mass storage, removing the application code from main memory to mass storage, when another application requires additional memory, activating another GUI widget to resume running of the application, restoring the state information for the code and the process to main memory before the application resumes running, and resuming the CPU processing of the application. | 11-26-2009 |
20090293063 | MINIMIZATION OF READ RESPONSE TIME - A method, system and computer program product for minimizing read response time in a storage subsystem including a plurality of resources is provided. A middle logical block address (LBA) is calculated for a read request. A preferred resource of the plurality of resources is determined by calculating a minimum seek time based on a closest position to a last position of a head at each resource of the plurality of resources, estimated from the middle LBA. The read request is directed to at least one of the preferred resource or an alternative resource. | 11-26-2009 |
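In the simplest case, the middle-LBA heuristic above reduces to picking the resource whose last head position is nearest the midpoint of the request. The sketch below uses assumed names; real seek-time estimation involves more than linear LBA distance:

```python
def middle_lba(start_lba, block_count):
    """Middle logical block address of a read request."""
    return start_lba + block_count // 2

def choose_resource(start_lba, block_count, head_positions):
    """Pick the resource (e.g. a mirror copy) whose last head position is
    closest to the request's middle LBA, approximating minimum seek time."""
    mid = middle_lba(start_lba, block_count)
    return min(head_positions, key=lambda res: abs(head_positions[res] - mid))

heads = {"mirror_a": 1000, "mirror_b": 5200}   # last head position per resource
print(choose_resource(5000, 256, heads))       # mirror_b: |5200 - 5128| = 72
```

The abstract also allows falling back to an alternative resource; this sketch shows only the preferred-resource selection.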
20090293064 | SYNCHRONIZING SHARED RESOURCES IN AN ORDER PROCESSING ENVIRONMENT USING A SYNCHRONIZATION COMPONENT - An order processing system including an order processing container, a factory registry, a relationship registry, and a synchronization function component. The order processing system can handle orders, which are build plans including a set of tasks. The tasks can specify programmatic actions which may include creation, deletion, and modification of resources and resource topologies. The order processing container can be a central engine that programmatically drives order processing actions. The factory registry can support creation and deletion of resource instances in a resource topology defined by at least one order. The relationship registry can maintain relationships among resources. The synchronization function component can permit transparent usage of shared resources in accordance with shared usage resource topology parameters specified within processed orders. | 11-26-2009 |
20090300634 | Method and System for Register Management - A system and method of allocating registers in a register array to multiple workloads is disclosed. The method identifies an incoming workload as belonging to a first process group or a second process group, and allocates one or more target registers from the register array to the incoming workload. The register array is logically divided to a first ring and a second ring such that the first ring and the second ring have at least one register in common. The first process group is allocated registers in the first ring and the second process group is allocated registers in the second ring. Target registers in the first ring are allocated in order of sequentially decreasing register addresses and target registers in the second ring are allocated in order of sequentially increasing register addresses. Also disclosed are methods and systems for allocation of registers in an array of general purpose registers, methods and systems for allocation of registers to processes including shader processes in graphics processing units. | 12-03-2009 |
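The two-ring scheme above can be sketched like this: both rings cover parts of one array, one allocating downward and the other upward, so registers in the overlap are consumed last. The ring bounds and class names below are assumptions for illustration:

```python
class RingAllocator:
    """One register array, two overlapping logical rings.

    Ring 1 hands out sequentially decreasing addresses, ring 2 sequentially
    increasing ones; registers in the overlap can serve either process group."""

    def __init__(self, array_size, ring1_high, ring2_low):
        self.free = set(range(array_size))
        self.ring1 = list(range(ring1_high, -1, -1))      # decreasing order
        self.ring2 = list(range(ring2_low, array_size))   # increasing order

    def alloc(self, group):
        ring = self.ring1 if group == 1 else self.ring2
        for reg in ring:
            if reg in self.free:
                self.free.discard(reg)
                return reg
        raise MemoryError("no free register in this ring")

regs = RingAllocator(8, ring1_high=5, ring2_low=3)  # registers 3..5 shared
print(regs.alloc(1), regs.alloc(2), regs.alloc(1))  # 5 3 4
```

Because the groups grow toward each other, neither starves the other of its private registers until the shared span is exhausted.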
20090300635 | METHODS AND SYSTEMS FOR PROVIDING A MARKETPLACE FOR CLOUD-BASED NETWORKS - A cloud marketplace system can be configured to communicate with multiple cloud computing environments in order to ascertain the details for the resources and services provided by the cloud computing environments. The cloud marketplace system can be configured to receive a request for information pertaining to the resources or services provided by or available in the cloud computing environments. The cloud marketplace system can be configured to generate a marketplace report detailing the resource and service data matching the request. The cloud marketplace system can be configured to utilize the resource and service data to provide migration services for virtual machines initiated in the cloud computing environments. | 12-03-2009 |
20090300636 | REGAINING CONTROL OF A PROCESSING RESOURCE THAT EXECUTES AN EXTERNAL EXECUTION CONTEXT - A scheduler in a process of a computer system allows an external execution context to execute on a processing resource allocated to the scheduler. The scheduler provides control of the processing resource to the external execution context. The scheduler registers for a notification of an exit event associated with the external execution context. In response to receiving the notification that the exit event has occurred, the scheduler regains control of the processing resource and causes a task associated with an execution context controlled by the scheduler to be executed by the processing resource. | 12-03-2009 |
20090300637 | SCHEDULER INSTANCES IN A PROCESS - A runtime environment of a computer system is provided that creates first and second scheduler instances in a process. Each scheduler instance includes allocated processing resources and is assigned a set of tasks for execution. Each scheduler instance schedules tasks for execution using the allocated processing resources to perform the work of the process. | 12-03-2009 |
20090300638 | MEMORY ALLOCATORS CORRESPONDING TO PROCESSOR RESOURCES - A memory allocator is provided for each processor resource in a process of a computer system. Each memory allocator includes a set of pages, a locally freed list of objects, and a remotely freed list of objects. Each memory allocator requests the pages from an operating system and allocates objects to all execution contexts executing on a corresponding processing resource. Each memory allocator attempts to allocate an object from the locally freed list before allocating an object from the remotely freed list or an allocated page. | 12-03-2009 |
20090300639 | RESOURCE ACQUISITION AND MANIPULATION FROM WITHIN A VIRTUAL UNIVERSE - The present invention is directed to a system, method and program product that allows a user to access resources on a local computer during a session with a virtual universe. Disclosed is a system that obtains an inventory of resources from the client computer and generates renderings of the resources in the virtual universe. Also included is a resource interaction system for allowing an avatar to interact with the resources in the virtual universe, wherein the resource interaction system provides a transport facility for loading resources from the client computer to the virtual universe. | 12-03-2009 |
20090300640 | ALLOCATION IDENTIFICATION APPARATUS OF I/O PORTS, METHOD FOR IDENTIFYING ALLOCATION THEREOF AND INFORMATION PROCESSOR - An allocation identification apparatus of input/output ports of an information processor (PC) operated as two or more virtual information processors, includes input/output ports (I/O ports) allocated to the virtual information processors, an identification information generating part (a hypervisor) that identifies the virtual information processors to which the input/output ports of the information processor are assigned and that generates identification information thereof, and a display part that displays the identification information generated by the identification information generating part. | 12-03-2009 |
20090300641 | SYSTEM AND METHOD FOR SUPPORTING A VIRTUAL APPLIANCE - A system and method for supporting a virtual appliance is provided. In particular, a support engine may include an update server that can manage a workflow to update an appliance in response to detecting upstream updates to one or more software components that have been installed for the appliance. For example, the workflow may generally include managing a rebuild of the appliance to install the upstream updates and further managing an integration test to verify that the rebuilt appliance behaves correctly with the upstream updates installed. In addition, the support engine may further include a support analysis manager that can analyze the software components that have been installed for the appliance in view of various heuristic rules to generate a support statement indicating whether support is available for the appliance. | 12-03-2009 |
20090307701 | INFORMATION PROCESSING METHOD AND APPARATUS USING THE SAME - A processor processes task A and task B sequentially, wherein task A executes an application to generate data that is to be output to or input from an HDD, and task B controls data input and output requests to the HDD controller. | 12-10-2009 |
20090307702 | SYSTEM AND METHOD FOR DISCOVERING AND PROTECTING ALLOCATED RESOURCES IN A SHARED VIRTUALIZED I/O DEVICE - A system includes a virtualized I/O device coupled to one or more processing units. The virtualized I/O device includes a storage for storing a resource discovery table, and programmed I/O (PIO) configuration registers corresponding to hardware resources. A system processor may allocate the plurality of hardware resources to one or more functions, and to populate each entry of the resource discovery table for each function. The processing units may execute one or more processes. Given processing units may further execute OS instructions to allocate space for an I/O mapping of a PIO configuration space in a system memory, and to assign a function to a respective process. Processing units may execute a device driver instance associated with a given process to discover allocated resources by requesting access to the resource discovery table. The virtualized I/O device protects the resources by checking access requests against the resource discovery table. | 12-10-2009 |
20090307703 | SCHEDULING APPLICATIONS FOR EXECUTION ON A PLURALITY OF COMPUTE NODES OF A PARALLEL COMPUTER TO MANAGE TEMPERATURE OF THE NODES DURING EXECUTION - Methods, apparatus, and products are disclosed for scheduling applications for execution on a plurality of compute nodes of a parallel computer to manage temperature of the plurality of compute nodes during execution that include: identifying one or more applications for execution on the plurality of compute nodes; creating a plurality of physically discontiguous node partitions in dependence upon temperature characteristics for the compute nodes and a physical topology for the compute nodes, each discontiguous node partition specifying a collection of physically adjacent compute nodes; and assigning, for each application, that application to one or more of the discontiguous node partitions for execution on the compute nodes specified by the assigned discontiguous node partitions. | 12-10-2009 |
20090307704 | MULTI-DIMENSIONAL THREAD GROUPING FOR MULTIPLE PROCESSORS - A method and an apparatus that determine a total number of threads to concurrently execute executable codes compiled from a single source for target processing units in response to an API (Application Programming Interface) request from an application running in a host processing unit are described. The target processing units include GPUs (Graphics Processing Unit) and CPUs (Central Processing Unit). Thread group sizes for the target processing units are determined to partition the total number of threads according to a multi-dimensional global thread number included in the API request. The executable codes are loaded to be executed in thread groups with the determined thread group sizes concurrently in the target processing units. | 12-10-2009 |
20090307705 | SECURE MULTI-PURPOSE COMPUTING CLIENT - A method includes, in a computer that runs multiple operating environments using hardware resources, defining and managing an allocation policy of the hardware resources, which eliminates effects from operations performed in one of the operating environments on the operations performed in another of the operating environments. The hardware resources are assigned to the multiple operating environments in accordance with the allocation policy, so as to isolate the multiple operating environments from one another. | 12-10-2009 |
20090307706 | Dynamically Setting the Automation Behavior of Resources - Embodiments provide a method of dynamically setting the automation behavior of resources via switching between an active mode and a passive mode. One embodiment is a method that includes placing a first computing resource into a first desired state and an active behavioral mode and placing a second computing resource having a relationship to the first resource into the first desired state when a first request for the first resource that specifies the first desired state is received. The method also includes placing the first computing resource into a standby state and a passive behavioral mode and not placing the second computing resource into the first desired state. | 12-10-2009 |
20090313632 | GENERATING RESOURCE CONSUMPTION CONTROL LIMITS - A resource consumption control method and system. The method includes deploying, by a computing system, a portlet/servlet. The computing system receives monitor data associated with a first resource consumed by the portlet/servlet during the deploying. The monitor data comprises a maximum resource consumption rate value for the portlet/servlet and a mean resource consumption rate value for the portlet/servlet. The computing system generates a resource consumption rate limit value for the portlet/servlet based on the monitor data. The computing system generates action data comprising an action to be executed if the resource consumption rate limit value is exceeded by a consumption rate value for the portlet/servlet. The computing system transmits the resource consumption rate limit value and the action data to the portlet/servlet. The resource consumption rate limit value and the action data are stored with the portlet/servlet. | 12-17-2009 |
20090313633 | Method and System for Managing a Workload in a Cluster of Computing Systems with Multi-Type Operational Resources - Determining an equivalent capacity (ECP) of a computing system comprising multi-type operational resources. The multi-type operational resources comprise at least one general type of resources and at least one specialized type of resources. Parameters characteristic of the performance of the system are determined. Assignment of work units to the various resources subject to pre-defined constraints is simulated. Utilization of the general type of resources of the computing system when executing the work units is calculated. | 12-17-2009 |
20090320035 | System for supporting collaborative activity - A system includes a processor which has access to a representation of a model of activity, which includes workspaces. Each workspace includes domain hierarchies representing an organizational structure of the collaborating users using the system, and initiative hierarchies representing process structures for accomplishing goals. An interface permits users to view and modify the workspaces for which the user has access. Each user can have different access permissions in different workspaces. The domain and initiative hierarchies provide two views of the workspace objects without duplicating resources. A resource is a collection of shared elements defined by the users that give users associated with the workspace access to information sources. Users can define knowledge boards for creating reports based on information fields of the resources. The knowledge board is associated with a resource template from which the resource is created. | 12-24-2009 |
20090320036 | File System Object Node Management - Embodiments of the invention provide a method for assigning a home node to a file system object and using information associated with file system objects to improve locality of reference during thread execution. Doing so may improve application performance on a computer system configured using a non-uniform memory access (NUMA) architecture. Thus, embodiments of the invention allow a computer system to create a nodal affinity between a given file system object and a given processing node. | 12-24-2009 |
20090320037 | DATA STORAGE RESOURCE ALLOCATION BY EMPLOYING DYNAMIC METHODS AND BLACKLISTING RESOURCE REQUEST POOLS - A resource allocation system begins with an ordered plan for matching requests to resources that is sorted by priority. The resource allocation system optimizes the plan by determining those requests in the plan that will fail if performed. The resource allocation system removes or defers the determined requests. In addition, when a request that is performed fails, the resource allocation system may remove requests that require similar resources from the plan. Moreover, when resources are released by a request, the resource allocation system may place the resources in a temporary holding area until the resource allocation returns to the top of the ordered plan so that lower priority requests that are lower in the plan do not take resources that are needed by waiting higher priority requests higher in the plan. | 12-24-2009 |
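The first optimization step described above, dropping requests that are bound to fail, can be sketched as follows (hypothetical request format; the blacklisting of similar-resource pools and the temporary holding area are omitted):

```python
def optimize_plan(plan, available):
    """Walk an ordered plan in priority order; keep requests that current
    resources can satisfy, and drop (or defer) those that would fail."""
    remaining = dict(available)
    kept = []
    for request in sorted(plan, key=lambda r: r["priority"]):
        need = request["resources"]
        if all(remaining.get(res, 0) >= n for res, n in need.items()):
            for res, n in need.items():
                remaining[res] -= n
            kept.append(request)
        # else: the request would fail and is removed from the plan
    return kept

plan = [
    {"name": "backup-a", "priority": 1, "resources": {"tape_drive": 1}},
    {"name": "backup-b", "priority": 2, "resources": {"tape_drive": 2}},
]
print([r["name"] for r in optimize_plan(plan, {"tape_drive": 2})])  # ['backup-a']
```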
20090328050 | AUTOMATIC LOAD BALANCING, SUCH AS FOR HOSTED APPLICATIONS - A dynamic load balancing system is described that determines the load of resources in a hosted environment dynamically by monitoring the usage of resources by each customer and determines the number of customers hosted by a server based on the actual resources used. The system receives a performance threshold that indicates when a server is too heavily loaded and monitors the resource usage by each customer. When the load of an overloaded server in the hosted environment exceeds the received performance threshold, the system selects a source customer currently hosted by the overloaded server to move to another server. | 12-31-2009 |
20090328051 | RESOURCE ABSTRACTION VIA ENABLER AND METADATA - Embodiments of the invention provide systems and methods for managing an enabler and dependencies of the enabler. According to one embodiment, a method of managing an enabler can comprise requesting a management function via a management interface of the enabler. The management interface can provide an abstraction of one or more management functions for managing the enabler and/or dependencies of the enabler. In some cases, prior to requesting the management function, metadata associated with the management interface can be read and a determination can be made as to whether the management function is available or unavailable. Requesting the management function via the management interface of the enabler can be performed in response to determining the management function is available. In response to determining the management function is unavailable, one or more alternative functions can be identified based on the metadata and the one or more alternative functions can be requested. | 12-31-2009 |
20090328052 | RESOURCE LOCATOR VERIFICATION METHOD AND APPARATUS - A method to be implemented using a computer system, the method comprising the steps of providing a resource database that specifies locations of resources for use by consumers, receiving a location communication originated by a mobile consumer device associated with a consumer at a time temporally proximate a time when the consumer accesses a resource where the location communication indicates the location of the consumer device and using the location of the consumer device indicated in the communication to update the resource database. | 12-31-2009 |
20090328053 | ADAPTIVE SPIN-THEN-BLOCK MUTUAL EXCLUSION IN MULTI-THREADED PROCESSING - Adaptive modifications of spinning and blocking behavior in spin-then-block mutual exclusion include limiting spinning time to no more than the duration of a context switch. Also, the frequency of spinning versus blocking is limited to a desired amount based on the success rate of recent spin attempts. As an alternative, spinning is bypassed if spinning is unlikely to be successful because the owner is not progressing toward releasing the shared resource, as might occur if the owner is blocked or spinning itself. In another aspect, the duration of spinning is generally limited, but longer spinning is permitted if no other threads are ready to utilize the processor. In another aspect, if the owner of a shared resource is ready to be executed, a thread attempting to acquire ownership performs a “directed yield” of the remainder of its processing quantum to the other thread, and execution of the acquiring thread is suspended. | 12-31-2009 |
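The first adaptation above, capping spin time at roughly one context-switch duration, might look like this in outline. The cost constant is an assumption for illustration; real implementations measure it and operate below the OS scheduler rather than on top of `threading`:

```python
import threading
import time

CONTEXT_SWITCH_COST = 5e-6  # assumed context-switch duration in seconds

def spin_then_block(lock, spin_limit=CONTEXT_SWITCH_COST):
    """Try to acquire by spinning for at most spin_limit seconds; past that
    point blocking is cheaper, so fall back to a blocking wait.

    Returns True if the lock was won while spinning, False if it blocked."""
    deadline = time.perf_counter() + spin_limit
    while time.perf_counter() < deadline:
        if lock.acquire(blocking=False):
            return True
    lock.acquire()   # block: yield the CPU instead of burning cycles
    return False

lock = threading.Lock()
won_by_spin = spin_then_block(lock, spin_limit=0.01)  # uncontended: spin wins
lock.release()
```

A success-rate counter over recent calls could then lower the spinning frequency when blocking dominates, as the abstract describes.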
20100005471 | PRIORITIZED RESOURCE SCANNING - A method for prioritized scanning of resources within an Information Technology (IT) infrastructure includes prioritizing resources by likelihood of each resource being relevant to a target problem and scanning resources that have a higher likelihood of being relevant to the target problem before scanning resources that have a lower likelihood of being relevant to the target problem. A system for prioritized scanning of an IT infrastructure includes a resource list, the resource list identifying at least a portion of resources within the IT infrastructure; a plurality of tags, each of the plurality of tags being associated with a resource, the plurality of tags being configured to monitor the resources identified in the resource list and generate an output, the output being related to a likelihood that the resources contain information related to a problem within the IT infrastructure; and a scanning program configured to scan resources with a higher likelihood of containing information related to the problem before scanning resources with a lower likelihood of containing information related to the problem. | 01-07-2010 |
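The prioritization step above reduces to a sort on estimated relevance before scanning. A minimal sketch with hypothetical resource names; the tag outputs described in the abstract would supply the likelihood scores:

```python
def prioritized_scan(resources, likelihood, scan):
    """Scan resources in descending order of their likelihood of being
    relevant to the target problem."""
    for res in sorted(resources, key=likelihood, reverse=True):
        scan(res)

# Hypothetical tag-derived likelihood scores per resource.
scores = {"db-server": 0.9, "web-server": 0.6, "printer": 0.1}
visited = []
prioritized_scan(scores, scores.get, visited.append)
print(visited)  # ['db-server', 'web-server', 'printer']
```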
20100005472 | TASK DECOMPOSITION WITH THROTTLED MESSAGE PROCESSING IN A HETEROGENEOUS ENVIRONMENT - Tasks for a business process can be decomposed into subtasks represented by messages. Message processing can be throttled in a heterogeneous environment. For example, message processing at subtask nodes can be individually throttled at the node level by controlling the number of instances of subtask processors for the subtask node. An infrastructure built with framework components can be used for a variety of business process tasks, separating business logic from the framework logic. Thus, intelligent scalability across platform types can be provided for large scale business processes with reduced development time and resources. | 01-07-2010 |
20100005473 | System and method for controlling computing resource consumption - A method and a corresponding system, implemented as programming on a computer system, controls resource consumption in the computer system. The method includes the steps of monitoring current consumption of resources by workloads executing on the computer system; predicting future consumption of the resources by the workloads; adjusting assignment of resources to workloads based on the predicted future consumption, comprising: determining consumption policies for each workload, comparing the policies to the predicted future consumption, and increasing or decreasing resources for each workload based on the comparison; and providing a visual display of resource consumption and workload execution information, the visual display including iconic values indicating predicted consumption of instant capacity resources and authorization to consume instant capacity resources. | 01-07-2010 |
20100005474 | Distribution of tasks among asymmetric processing elements - A technique to promote determinism among multiple clocking domains within a computer system or integrated circuit. In one embodiment, one or more execution units are placed in a deterministic state with respect to multiple clocks within a processor system having a number of different clocking domains. | 01-07-2010 |
20100005475 | INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD - An information processing device is configured to store an image that is to be retained in a main memory so that a processor can execute an application program, and, after execution of the application program is terminated, to execute the application program from the state at the time when the image was stored by reading out the stored image to the main memory. | 01-07-2010 |
20100011364 | Data Storage in Distributed Systems - Systems, methods, and apparatus, including computer program products for receiving a content transfer request that includes a first set of provisioning attributes that characterizes one or more operational objectives of a first item of content; and processing the content transfer request to allocate resources of a storage environment to store the first item of content. | 01-14-2010 |
20100011365 | Resource Allocation and Modification - A computer-implemented method includes obtaining information characterizing a level of actual usage of a first item of content; based on the obtained information, determining whether a re-provisioning condition is satisfied and if so, generating a specification of a re-provisioning operation to be executed in association with the resources of a storage environment; and executing the re-provisioning operation. The first item of content is stored on a first set of elements of resources of the storage environment according to a first resource allocation arrangement. The re-provisioning operation includes identifying a second resource allocation arrangement for storing the first item of content; and allocating a second set of elements of the resources of the storage environment according to the second resource allocation arrangement. | 01-14-2010 |
20100011366 | Dynamic Resource Allocation - A computer-implemented method includes detecting an actual workload representative of a pattern of access of a plurality of items of content; comparing the actual workload against a prescriptive workload to determine an occurrence of a substantial deviation from the prescriptive workload; and upon determining the occurrence of the substantial deviation, revising the prescriptive workload based at least in part on the actual workload. The plurality of items is stored on resources of a storage environment according to one of a plurality of resource allocation arrangements. The prescriptive workload including a plurality of categories, each category being associated with a respective one of the plurality of resource allocation arrangements. | 01-14-2010 |
20100011367 | METHODS AND SYSTEMS FOR ALLOCATING A RESOURCE OF A VEHICLE AMONG A PLURALITY OF USES FOR THE RESOURCE - A method for implementing a request pertaining to a requested use of a plurality of uses of a resource of a vehicle includes the steps of determining whether the resource is configured for simultaneous use by two or more of the plurality of uses, determining whether the resource is being used by an existing use of the plurality of uses, and allowing the requested use of the resource and the existing use of the resource, if the resource is configured for simultaneous use by two or more of the plurality of uses and the resource is being used by the existing use. | 01-14-2010 |
20100011368 | Methods, systems and programs for partitioned storage resources and services in dynamically reorganized storage platforms - Exemplary embodiments establish durable partitions that are unified across storage systems and storage server computers. The partitions provide independent name spaces and are able to maintain specified services and conditions regardless of operations taking place in other partitions, and regardless of configuration changes in the information system. A management computer manages and assigns resources and functions provided by storage server computers and storage systems to each partition. By using the assigned resources, a partition is able to provide storage and other services to users and applications on host computers. When a configuration change occurs, such as addition or deletion of equipment, the management computer performs reassignment of resources, manages migration of services and/or data, and otherwise maintains the functionality of the partition for the user or application. Additionally, a partition can be migrated within the information system for various purposes, such as improved performance, load balancing, and the like. | 01-14-2010 |
20100011369 | DEVICE MANAGEMENT APPARATUS, JOB FLOW PROCESSING METHOD, AND TASK COOPERATIVE PROCESSING SYSTEM - In a task cooperative processing system that allows a plurality of task processing devices to execute a plurality of tasks performed on document data as a job flow, task processing devices that can execute a task included in the job flow are decided as candidate task processing devices (S | 01-14-2010 |
20100011370 | CONTROL UNIT, DISTRIBUTED PROCESSING SYSTEM, AND METHOD OF DISTRIBUTED PROCESSING - A control unit includes a determination section that determines information on a type and a function of processing elements connected thereto, a library loading section that loads, as needed, program information or reconfiguration information for hardware included in a connected library into the processing elements, and an execution transition information control section that creates, based on information on an arbitrary service comprising a combination of one or more tasks to be executed by the processing elements and information on the type and the function of the processing elements determined by the determination section, execution transition information specifying a combination of processing elements corresponding to the information on the service and transmits it to the processing elements. | 01-14-2010 |
20100023947 | SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR RESOURCE COLLABORATION OPTIMIZATION - A method including receiving a plurality of roles in a data processing system and adding a part-time resource to at least one role. The method also includes determining, in the data processing system, if a project duration has changed as a result of adding the part-time resource, and if the project duration has changed, repeating the process at the adding step. The method also includes storing results corresponding to the resources assigned to roles. There is also a similar data processing system and machine-usable medium. | 01-28-2010 |
20100023948 | ALLOCATING RESOURCES IN A MULTICORE ENVIRONMENT - In a multicore programming environment comprising a plurality of processors in a plurality of categories, and having predetermined communication resources of different types for interconnecting the processors, resources are allocated by: receiving a plurality of software processes, each process having a connection requirement; receiving an allocation scheme, in which each of the software processes is allocated to a respective processor of the plurality of processors; determining a plurality of communication requirements based on the connection requirements and the processors to which each process is allocated; and for each of the communication requirements: determining the respective processors to which the associated processes have been assigned; and allocating a communications resource of a type that is suitable based on the categories of said respective processors, such that the total allocated communications resource does not exceed the predetermined communication resources. | 01-28-2010 |
20100023949 | SYSTEM AND METHOD FOR PROVIDING ADVANCED RESERVATIONS IN A COMPUTE ENVIRONMENT - A system and method are disclosed for dynamically reserving resources within a cluster environment. The method embodiment of the invention comprises receiving a request for resources in the cluster environment, monitoring events after receiving the request for resources and based on the monitored events, dynamically modifying at least one of the request for resources and the cluster environment. | 01-28-2010 |
20100031265 | Method and System for Implementing Realtime Spinlocks - A system and method for receiving a request from a requester for access to a computing resource, instructing the requester to wait for access to the resource when the resource is unavailable and allowing the requester to perform other tasks while waiting, determining whether the requester is available when the resource subsequently becomes available, and granting access to the resource by the requester if the requester is available. | 02-04-2010 |
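The availability-checking lock scheme in 20100031265 above could be sketched as follows; the class name, the deque-based wait list, and the callback shape are illustrative assumptions, not the patent's implementation:

```python
import collections

class AvailabilityAwareLock:
    """Sketch: a requester that finds the resource busy is told to wait
    (and may perform other tasks); when the resource becomes available,
    the lock checks that a waiter is still available before granting."""

    def __init__(self):
        self.holder = None
        self.waiters = collections.deque()  # (requester, is_available callback)

    def request(self, requester, is_available):
        if self.holder is None:
            self.holder = requester         # resource free: grant immediately
            return "granted"
        self.waiters.append((requester, is_available))
        return "wait"                       # requester may do other work meanwhile

    def release(self):
        self.holder = None
        while self.waiters:
            requester, is_available = self.waiters.popleft()
            if is_available():              # grant only to an available requester
                self.holder = requester
                return requester
        return None                         # unavailable waiters are simply skipped here
```

A production version would requeue or re-poll skipped waiters rather than dropping them; the abstract leaves that policy open.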
20100037231 | METHOD FOR READING/WRITING DATA IN A MULTITHREAD SYSTEM - A method for reading/writing data in a multithread system is disclosed. The method includes providing an unprocessed command number of a read/write command waiting queue; providing an expectation read/write thread number according to the unprocessed command number; comparing the expectation read/write thread number with a present read/write thread number; and equalizing the expectation read/write thread number and the present read/write thread number by newly-generating or deleting a read/write thread. | 02-11-2010 |
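The queue-depth-to-thread-count equalization of 20100037231 reduces to a small sketch; the per-thread capacity and thread cap are invented parameters for illustration:

```python
import math

def target_thread_count(queue_len, per_thread_capacity=8, max_threads=16):
    """Map the unprocessed command number in the read/write waiting queue
    to an expectation read/write thread number (parameters are assumptions)."""
    return min(max_threads, max(1, math.ceil(queue_len / per_thread_capacity)))

def reconcile(current_threads, queue_len):
    """Equalize expectation and present thread numbers: a positive delta
    means newly generating threads, a negative delta means deleting them."""
    return target_thread_count(queue_len) - current_threads
```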
20100037232 | Virtualization apparatus and method for controlling the same - A virtualization apparatus and a method for controlling the same. In a method for controlling a virtualization apparatus including a plurality of domains, a sub domain transmits an input/output (IO) request for a hardware device to a main domain, and the main domain controls whether or not the IO request accesses the hardware device according to a resource needed to perform the IO request. | 02-11-2010 |
20100037233 | Processor core with per-thread resource usage accounting logic - Processor time accounting is enhanced by per-thread internal resource usage counter circuits that account for usage of processor core resources to the threads that use them. Relative resource use can be determined by detecting events such as instruction dispatches for multiple threads active within the processor, which may include idle threads that are still occupying processor resources. The values of the resource usage counters are used periodically to determine relative usage of the processor core by the multiple threads. If all of the events are for a single thread during a given period, the processor time is allocated to the single thread. If no events occur in the given period, then the processor time can be equally allocated among threads. If multiple threads are generating events, a fractional resource usage can be determined for each thread and the counters may be updated in accordance with their fractional usage. | 02-11-2010 |
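The three allocation cases described in 20100037233 (all events from one thread, no events in the period, fractional usage) can be sketched directly; the function shape and event representation are assumptions:

```python
def allocate_period(cpu_time, dispatch_counts):
    """Split one accounting period's processor time among threads in
    proportion to their dispatch events, following the abstract's cases."""
    total = sum(dispatch_counts.values())
    if total == 0:                          # no events: allocate equally
        share = cpu_time / len(dispatch_counts)
        return {t: share for t in dispatch_counts}
    active = [t for t, c in dispatch_counts.items() if c > 0]
    if len(active) == 1:                    # all events from a single thread
        return {t: (cpu_time if t in active else 0.0) for t in dispatch_counts}
    # multiple threads generating events: fractional usage per thread
    return {t: cpu_time * c / total for t, c in dispatch_counts.items()}
```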
20100043005 | SYSTEM RESOURCE MANAGEMENT MODERATOR PROTOCOL - A method, system, and computer program product for managing system resources within a data processing system. A resource management moderator (RMM) utility assigns a priority to each application within a group of management applications, facilitated by a RMM protocol. When a request for control of a particular resource is received, the RMM utility compares the priority of the requesting application with the priority of the controlling application. Control of the resource is ultimately given to the management application with the greater priority. If the resource is not under control of an application, control of the resource may be automatically granted to the requester. Additionally, the RMM utility provides support for legacy applications via a “manager of managers” application. The RMM utility registers the “manager of managers” application with the protocol and enables interactions (to reconfigure and enable legacy applications) between the “manager of managers” application and legacy applications. | 02-18-2010 |
20100043006 | SYSTEMS AND METHODS FOR A CONFIGURABLE DEPLOYMENT PLATFORM WITH VIRTUALIZATION OF PROCESSING RESOURCE SPECIFIC PERSISTENT SETTINGS - Methods and systems for deploying a processing resource in a configurable platform are described. A method includes providing a specification that describes a configuration of a processing area network, the specification including (i) a number of processors for the processing area network, (ii) a local area network topology defining interconnectivity and switching functionality among the specified processors of the processing area network, and (iii) a storage space for the processing area network. The specification further includes processing resource specific persistent settings. The method further includes allocating resources from the configurable platform to satisfy deployment of the specification, programming interconnectivity between the allocated resources and processing resources to satisfy the specification, and deploying the specification to a processing resource within the configurable deployment platform in response to software commands. The specification is used to generate the software commands to configure the platform and deploy processing resources corresponding to the specification. | 02-18-2010 |
20100043007 | MOBILE APPARATUS, A METHOD OF CONTROLLING A RATE OF OCCUPATION OF A RESOURCE OF A CPU - Provided is a mobile apparatus capable of stably executing an animating process even if an interrupting process occurs during execution of the animating process. The device includes a single CPU configured to execute the animating process at least including reproduction and recording of animated images in parallel with execution of a process other than the animating process and a resource control unit configured to control, in the case that an interruptive event occurs while the CPU is executing the animating process and the CPU executes the interrupting process simultaneously with occurrence of the interruptive event, the rate of occupation of a CPU resource allocated to execution of the interrupting process. | 02-18-2010 |
20100043008 | Scalable Work Load Management on Multi-Core Computer Systems - Embodiments of the presently claimed invention minimize the effect of Amdahl's Law with respect to multi-core processor technologies. This scheme is asynchronous across all of the cores of a processing system and is completely independent of other cores and other work units running on those cores. This scheme occurs on an as needed and just in time basis. As a result, the constraints of Amdahl's Law do not apply to a scheduling algorithm and the design is linearly scalable with the number of processing cores with no degradation due to the effects of serialization. | 02-18-2010 |
20100043009 | Resource Allocation in Multi-Core Environment - Embodiments of the presently claimed invention automatically and systematically schedule jobs in a computer system thereby optimizing job throughput while simultaneously minimizing the amount of time a job waits for access to a shareable resource in the system. Such embodiments may implement a methodology that continuously pre-conditions the profile of requests submitted to a job scheduler such that the resulting schedule for the dispatch of those jobs results in optimized use of available computer system resources. Through this methodology, the intersection of the envelope of available computer system shareable resources may be considered in the context of the envelope of requested resources associated with the jobs in the system input queue. By using heuristic policies, an arrangement of allocations of available resources against requested resources may be determined thereby maximizing resource consumption on the processing system. | 02-18-2010 |
20100050179 | LAYERED CAPACITY DRIVEN PROVISIONING IN DISTRIBUTED ENVIRONMENTS - Techniques are disclosed for providing mapping of application components to a set of resources in a distributed environment using capacity driven provisioning using a layered approach. By way of example, a method for allocating resources to an application comprises the following steps. A first data structure is obtained representing a post order traversal of a dependency graph for the application and associated containers with capacity requirements. A second data structure is obtained representing a set of resources, and associated with each resource is a tuple representing available capacity. A mapping of the dependency graph data structure to the resource set is generated based on the available capacity such that resources of the set of resources are allocated to the application. | 02-25-2010 |
20100050180 | METHOD AND SYSTEM FOR GREEN COMPUTING INTERCHANGE SWITCHING FUNCTION - Systems, methods, devices and program products are provided for enabling users of a computing system to measure and compare the green efficiency of a set of resources used in a computing task. With the use of this information, the user can select a desired set of resources to be employed in the computing task to minimize the environmental impact of computing tasks in relation to requirements. In some embodiments, the invention creates metrics for measuring the greenness of a computing task. The metrics are calculated through analysis of the resource computation, energy consumption, consequence of computation, and dimensional characteristics of a computing task. The metrics could be beneficial or other metrics that permit the user or a processing system to make scheduling and execution decisions. | 02-25-2010 |
20100050181 | Method and System of Group-to-Group Computing - A method and system of group-to-group (G2G) computing, a G2G computing service system based on a portal network site, and a G2G search service system based on the G2G computing. G2G computing is a kind of distributed computing based on the G2G network that carries out tasks by group. The network formed by the groups and the relations between them is referred to as a G2G network. A group is a collection of nodes with the same attributes. G2G computing defines four basic operations: Transfer, Exchange, Node-process and Transmutation. | 02-25-2010 |
20100058347 | DATA CENTER PROGRAMMING MODEL - An exemplary method includes hosting a service at a data center, the service relying on at least one software component developed according to a programming model and the data center comprising a corresponding programming model abstraction layer that abstracts resources of the data center; receiving a request for the service; and in response to the request, assigning at least some of the resources of the data center to the service to allow for fulfilling the request wherein the programming model abstraction layer performs the assigning based in part on reference to a resource class in the at least one software component, the resource class modifiable to account for changes in one or more resources of the data center. Various other devices, systems and methods are also described. | 03-04-2010 |
20100058348 | MEMORY MANAGEMENT FOR PREDICTION BY PARTIAL MATCHING CONTEXT MODELS - Techniques for resource management of a PPM context model are described herein. According to one embodiment, in response to a sequence of symbols to be coded, contexts are allocated, each having multiple entries and each entry representing a symbol that the current context is able to encode, including a counter value representing a frequency of each entry being used. For each symbol coded by a context, a local counter value and a global counter value are maintained. The global counter value represents a total number of symbols that have been coded by the context model and the local counter value represents a number of symbols that have been coded by the respective context. Thereafter, a resource management operation is performed for system resources associated with the plurality of contexts based on a global counter value and a local counter value associated with each of the plurality of contexts. | 03-04-2010 |
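A toy illustration of the local/global counter bookkeeping in 20100058348; the pruning threshold and the "free low-share contexts" rule are assumptions, since the abstract only says resource management is based on the two counter values:

```python
class ContextModel:
    """Track, per context, how many symbols it has coded (local counter)
    against the model-wide total (global counter)."""

    def __init__(self):
        self.global_count = 0
        self.local_counts = {}              # context id -> symbols coded by it

    def code_symbol(self, ctx):
        self.global_count += 1
        self.local_counts[ctx] = self.local_counts.get(ctx, 0) + 1

    def contexts_to_free(self, min_share=0.05):
        """Hypothetical management pass: contexts whose share of all coded
        symbols falls below the cutoff are candidates for reclamation."""
        return [c for c, n in self.local_counts.items()
                if n / self.global_count < min_share]
```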
20100058349 | System and Method for Efficient Machine Selection for Job Provisioning - A method for efficient machine selection for job provisioning includes receiving a job request to perform a job using an unspecified server machine and determining one or more job criteria needed to perform the job from the job request. The method further includes providing a list of one or more server machines potentially operable to perform the job. For each server machine on the list of one or more server machines, a utilization value, one or more job criteria satisfaction values, and an overall suitability value are determined. The overall suitability value for each server machine is determined from the one or more job criteria satisfaction values and the utilization value, and may include a numeric degree to which each server machine is suitable for performing the job. Furthermore, the overall suitability value for each server machine may be included on a list of one or more overall suitability values. | 03-04-2010 |
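One plausible way to combine the utilization value and the job criteria satisfaction values of 20100058349 into a single numeric overall suitability; the averaging and the weighting are assumptions, as the abstract only states that the score is derived from both inputs:

```python
def overall_suitability(utilization, criteria_satisfaction, w_util=0.5):
    """Blend spare capacity (1 - utilization) with the mean criteria
    satisfaction score; both inputs are assumed to lie in [0, 1]."""
    crit = sum(criteria_satisfaction) / len(criteria_satisfaction)
    return w_util * (1.0 - utilization) + (1.0 - w_util) * crit

def pick_machine(machines):
    """machines: {name: (utilization, [criteria satisfaction values])}.
    Returns the most suitable machine and the full list of scores."""
    scores = {m: overall_suitability(u, cs) for m, (u, cs) in machines.items()}
    return max(scores, key=scores.get), scores
```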
20100058350 | FRAMEWORK FOR DISTRIBUTION OF COMPUTER WORKLOADS BASED ON REAL-TIME ENERGY COSTS - Energy costs for conducting compute tasks at diverse data center sites are determined and are then used to route such tasks in a most efficient manner. A given compute task is first evaluated to predict potential energy consumption. The most favorable real-time energy costs for the task are determined at the various data center sites. The likely time period of the more favorable cost as well as the stability at the data center are additional factors. A workload dispatcher then forwards the selected compute task to the data center having the most favorable real-time energy costs. Among the criteria used to select the most favorable data center is a determination that the proposed center presently has the resources for the task. A stabilizer is utilized to balance the workload among the data centers. A computer implementation for performing the various steps of the cost determination and allocation is also described. | 03-04-2010 |
20100058351 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - A system resource leak is reliably detected and the leaked resource is released. The invention is an information processing apparatus which allocates/releases a system resource in response to a request from a process. The apparatus includes a unit configured to, when a request to allocate the system resource is sent, store an identifier which is assigned to a job including the process as a request source, and system resource information in a management table, a unit configured to, when a request to release the system resource is sent, delete the corresponding system resource information from the management table, a unit configured to, each time the job ends, refer to the management table to determine whether the management table stores an identifier assigned to the job, and a unit configured to, when it is determined that the management table stores the identifier, release the system resource specified by the corresponding system resource information. | 03-04-2010 |
20100064291 | System and Method for Reducing Execution Divergence in Parallel Processing Architectures - A method for reducing execution divergence among a plurality of threads executable within a parallel processing architecture includes an operation of determining, among a plurality of data sets that function as operands for a plurality of different execution commands, a preferred execution type for the collective plurality of data sets. A data set is assigned from a data set pool to a thread which is to be executed by the parallel processing architecture, the assigned data set being of the preferred execution type, whereby the parallel processing architecture is operable to concurrently execute a plurality of threads, the plurality of concurrently executable threads including the thread having the assigned data set. An execution command for which the assigned data functions as an operand is applied to each of the plurality of threads. | 03-11-2010 |
20100064292 | STORAGE DEVICE AND CONTROL METHOD THEREFOR - By putting a virtual storage device into a suspend mode, physical resources are turned OFF on a per-virtual-storage-device basis. Moreover, control information and volume data of the virtual storage device are stored in an external volume, for example, and the resources that have been used by the virtual storage device are deallocated. At the time of resumption of operation, using any resources not in use, the virtual storage device is restored based on the stored control information. When a change is made to a WWN on the host side, the storage device receives a WWN change notification from a management server and updates the WWN table accordingly, thereby making the device accessible from the host. | 03-11-2010 |
20100064293 | APPARATUS AND METHOD FOR MANAGING USER SCHEDULE - The present invention estimates a user's schedule by collecting and analyzing, on the basis of the corresponding user information, information on user-related work to be performed through access to a schedule management program when the user enters a region in which computing resources are available, and executes, through a virtual machine, a service application program that can perform the scheduled job by automatically creating a virtual machine with a computing environment capable of performing the estimated job. According to the present invention, a virtual machine is dynamically created to execute work identified, by analyzing the current schedule, as work the user must perform, and an application program for performing that work is automatically executed in the created virtual machine, such that user convenience is increased. | 03-11-2010 |
20100077400 | TASK-OPTIMIZING CALENDAR SYSTEM - A calendar system schedules tasks and meetings or other appointments for a user. The system retrieves a work capacity, which is information regarding the working hours for the user. The system further retrieves a plurality of enhanced tasks for the user. The system then optimizes a schedule for the user based on the work capacity and the enhanced tasks. | 03-25-2010 |
20100077401 | AUTOMATED IDENTIFICATION OF COMPUTING SYSTEM RESOURCES BASED ON COMPUTING RESOURCE DNA - Computing resource DNA associated with a computing resource of a computing system can be received. The computing resource DNA can include one or more computing resource DNA elements representing identifying characteristics of the computing resource. A set of one or more potential matches for the received computing resource DNA can be ascertained from a set of reference data. When one or more potential matches exist, a confidence factor can be calculated for each potential match. The set of potential matches can then be refined. An optimum match for the computing resource DNA can be determined from the set of refined potential matches. The computing resource DNA can then be identified as a representation of the computing resource associated with the optimum match. | 03-25-2010 |
20100077402 | Variable Scaling for Computing Elements - Various systems, methods, and computing units are provided for variable scaling of computing elements. In one representative embodiment, a method comprises: receiving a plurality of computing resource levels; and providing one of the plurality of computing resource levels to each of a plurality of computing elements, each computing element having an associated output, the provided voltage level based upon associated output significance. | 03-25-2010 |
20100077403 | Middleware for Fine-Grained Near Real-Time Applications - A centralized scheduling server for scheduling fine-grained near real-time applications includes network ports, a central managing application, functional library(ies) and service processes. One port communicates with processing nodes over a private computer network. Processing nodes report node status to the server and execute scheduled tasks. The other port communicates with user devices through a public network. The central managing application manages fine-grained near real-time applications. The functional library provides middleware core functionality. The service processes include: a resource manager; a submitter to place tasks on a task queue; and a dispatcher to dispatch tasks to processing nodes. A work flow process runs an optimized scheduling algorithm. | 03-25-2010 |
20100083268 | Method And System For Managing Access To A Resource By A Process Processing A Media Stream - Methods, systems and computer program products are described for managing access to a resource. In one aspect, a method includes detecting, during processing of a first media stream by a first process for presentation, an association between a concurrency policy and a shared resource shareable with a second process, and then listening for a message providing access to the shared resource based on an evaluation of the concurrency policy. In response to receiving a message providing access to the shared resource, the method includes accessing the shared resource. | 04-01-2010 |
20100083269 | ALGORITHM FOR FAST LIST ALLOCATION AND FREE - A computer implemented method, a data processing system, and a computer usable recordable-type medium having computer usable program code for serializing list insertion and removal. An atomic operation free atomic list primitive call from a kernel service is received for the insertion or removal of a list element from a linked list. The atomic operation free atomic list primitive is a restartable routine selected from the list consisting of cpuget_from_list, cpuput_onto_list, cpuget_all_from_list, and cpuput_chain_onto_list. A processor begins execution of the atomic operation free atomic list primitive. If an interrupt is received during execution of the atomic operation free atomic list primitive, the interrupt handler will recognize the address of the executing program at the time of the interrupt and will over-write that address in the machine state save area, so that when the interrupted program is resumed, the entire sequence will be run again from the beginning. If an interrupt is not received during execution of the atomic operation free atomic list primitive, the processor finishes execution of the primitive. | 04-01-2010 |
20100083270 | RESOURCE CLASS BINDING FOR INDUSTRIAL AUTOMATION - An industrial control system is provided. The system includes a processing component to bind to a subset of resources from a set of potential industrial control resources. An attribute component defines a resource priority for the set of potential industrial control resources. A resource class component implements at least one instance of the potential industrial control resources, where the instance automatically selects the subset of resources in view of the resource priority. | 04-01-2010 |
20100083271 | Resource Property Aggregation In A Multi-Provider System - The present invention provides for resource property aggregation. A set of new instances is received from one or more providers. For each new instance in the set of new instances, a determination is made as to whether the new instance represents a same resource as at least one other instance. Responsive to determining that the new instance represents the same resource as another instance, a set of properties associated with the new instance and with the at least one other instance are identified. Each property from the new instance is compared to an associated property in the at least one other instance using a set of precedence rules. At least one property value is identified from either the new instance or the at least one other instance. An aggregate instance is then generated that represents the resource using the identified property values. | 04-01-2010 |
20100083272 | MANAGING POOLS OF DYNAMIC RESOURCES - Computer systems attempt to manage resource pools of a dynamic number of similar resources and work tasks in order to optimize system performance. Work requests are received into the resource pool having a dynamic number of resources instances. An instance-throughput curve is determined that relates a number of resource instances in the resource pool to throughput of the work requests. A slope of a point on the instance-throughput curve is estimated with stochastic gradient approximation. The number of resource instances for the resource pool is selected when the estimated slope of the instance-throughput curve is zero. | 04-01-2010 |
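The zero-slope search in 20100083272 resembles Kiefer-Wolfowitz-style stochastic approximation; below is a toy version against a synthetic instance-throughput curve. The curve, gain schedule, and perturbation size are all illustrative assumptions:

```python
import random

def throughput(n):
    """Synthetic instance-throughput curve (unknown to the optimizer);
    it peaks at n = 10 instances."""
    return 10 * n - 0.5 * n * n

def find_optimal_instances(n=2.0, steps=200):
    """Estimate the curve's slope from noisy throughput samples via a
    central finite difference, and climb until the slope is (near) zero."""
    for k in range(1, steps + 1):
        a, c = 2.0 / k, 1.0                 # decreasing gain, fixed perturbation
        noisy = lambda m: throughput(m) + random.gauss(0, 0.1)
        slope = (noisy(n + c) - noisy(n - c)) / (2 * c)
        n = max(1.0, n + a * slope)         # move toward the zero-slope point
    return round(n)
```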
20100083273 | METHOD AND MEMORY MANAGER FOR MANAGING MEMORY - A memory managing method and memory manager for a multi processing environment are provided. The memory manager adjusts the number of processors assigned to a consumer process and/or an assignment unit size of data to be consumed by the consumer process based on a condition of a shared queue which is shared by a producer process producing data and the consumer process consuming the data. | 04-01-2010 |
20100088707 | Mechanism for Application Management During Server Power Changes - The present disclosure provides, in some embodiments, a method for managing applications and resources. According to some embodiments, a method performed by a power orchestrator may comprise (a) receiving information handling system resource status, (b) receiving one or more application registrations from one or more applications to be executed on the information handling system, (c) formulating a resource priority schedule using the received resource status and the one or more application registrations, (d) formulating a resource allocation schedule in accordance with the resource priority schedule, (e) communicating the resource allocation schedule to the one or more applications, and (f) allocating one or more resources to the one or more applications in accordance with the resource allocation schedule. A method may comprise, according to some embodiments, determining whether one or more of the one or more applications will submit a registration update and/or determining whether available resource(s) match demand and adjusting resource status to match demand. | 04-08-2010 |
20100095300 | Online Computation of Cache Occupancy and Performance - Methods, computer programs, and systems for managing thread performance in a computing environment based on cache occupancy are provided. In one embodiment, a computer implemented method assigns a thread performance counter to threads being created to measure the number of cache misses for the threads. The thread performance counter is deduced in one embodiment based on performance counters associated with each core in a processor. The method further calculates a self-thread value as the change in the thread performance counter of a given thread during a predetermined period, and an other-thread value as the sum of all the changes in the thread performance counters for all threads except for the given thread. Further, the method estimates a cache occupancy for the given thread based on a previous occupancy for the given thread, and the calculated self-thread and other-thread values. The estimated cache occupancy is used to assign computing environment resources to the given thread. In another embodiment, cache miss-rate curves are constructed for a thread to help analyze performance tradeoffs when changing cache allocations of the threads in the system. | 04-15-2010 |
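The occupancy update in 20100095300 could take a linear-interference form such as the sketch below; the exact formula is an assumption consistent with the abstract (own misses grow the thread's footprint in proportion to the cache it does not yet hold, others' misses shrink it in proportion to the cache it does hold):

```python
def update_occupancy(E, self_misses, other_misses, cache_size):
    """One period's update of the estimated occupancy E (in cache lines)
    for a thread, from its self-thread and other-thread miss values."""
    E += (1.0 - E / cache_size) * self_misses   # own misses fill unowned lines
    E -= (E / cache_size) * other_misses        # others' misses evict our lines
    return max(0.0, min(cache_size, E))         # clamp to a physical occupancy
```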
20100095301 | METHOD FOR PROVIDING SERVICE IN PERVASIVE COMPUTING ENVIRONMENT AND APPARATUS THEREOF - Provided is a method for providing a service in a pervasive computing environment that extracts a service type which can be provided by a resource found in the corresponding environment, and when a service type to be executed is selected in an application, the corresponding resource is allocated to the selected service to allow the corresponding application to execute the service by utilizing the allocated resource. Further, the allocated resource is locked and the corresponding resource is unlocked upon a request of another application. | 04-15-2010 |
20100095302 | DATA PROCESSING APPARATUS, DISTRIBUTED PROCESSING SYSTEM, DATA PROCESSING METHOD AND DATA PROCESSING PROGRAM - A terminal includes a task information acquiring unit which acquires information on a task of data processing, and a communication task generator which generates a send task to allow a source apparatus of data required by the task to transmit the data required by the task to an apparatus executing the task and which transmits the send task to the source apparatus, when the source apparatus is another apparatus, which is different from the apparatus executing the task and which is connected to the apparatus executing the task via a network. | 04-15-2010 |
20100100884 | LOAD BALANCING USING DISTRIBUTED PRINTING DEVICES - A system and method of distributing workflow in a document processing or other production environment determines a utilization percentage for each of a plurality of printing devices or other resources located in the production environment. For a first printing device, if the utilization percentage associated with the first printing device is below a threshold value, a request may be sent from the first printing device to a workflow distributor to obtain one or more unassigned jobs. If the request for the one or more unassigned jobs sent from the first printing device is received by the workflow distributor, the one or more unassigned jobs may be received at the first printing device. | 04-22-2010 |
20100100885 | TRANSACTION PROCESSING FOR SIDE-EFFECTING ACTIONS IN TRANSACTIONAL MEMORY - A processing system includes a transactional memory, first and second resource managers, and a transaction manager for a concurrent program having a thread including an atomic transaction having a side-effecting action. The first resource manager is configured to enlist in the atomic transaction and manage a resource related to the side-effecting action. The second resource manager is configured to enlist in the atomic transaction and manage the transactional memory. The transaction manager is coupled to the first and second resource managers and is configured to receive a vote from the first and second resource managers as to whether to commit the transaction. The side-effecting action is postponed until after the transaction commits or applied along with a compensating action to the side-effecting action. | 04-22-2010 |
20100100886 | TASK GROUP ALLOCATING METHOD, TASK GROUP ALLOCATING DEVICE, TASK GROUP ALLOCATING PROGRAM, PROCESSOR AND COMPUTER - Even if a multiprocessor includes a core of uneven performance, an inoperative core, or a core that does not achieve its designed performance, the multiprocessor can be shipped provided that task allocation can be contrived to satisfy the requirements of the application to be executed. In a task group allocation method for allocating, to a processor having a plurality of cores, task groups included in an application for the processor to execute, a calculation section measures performances and disposition patterns of the cores, generates a restricting condition associating the measured performances and disposition patterns of the cores with information indicating whether the application can be executed, and, with reference to the restricting condition, reallocates, to the cores, the task groups that have previously been allocated to the cores. | 04-22-2010 |
20100100887 | METHOD AND DEVICE FOR ENCAPSULATING APPLICATIONS IN A COMPUTER SYSTEM FOR AN AIRCRAFT - The object of the invention is in particular a device for execution of applications ( | 04-22-2010 |
20100100888 | Resource allocation - A technique for executing a segmented virtual machine (VM) is disclosed. A plurality of core VM's is implemented in a plurality of core spaces. Each core VM is associated with one of a plurality of shell VM's. Resources of the core spaces are allocated among the core VM's. | 04-22-2010 |
20100107171 | COMPUTING TASK CARBON OFFSETTING - Methods, systems, services and program products are provided for implementing carbon offset computing. During performance of a specified computing task, data concerning resource consumption regarding that specified computing task is gathered and stored. Upon completion of the specified computing task, the amount of carbon offset required to compensate for resource consumption associated with performance of the completed specified computing task is calculated based upon stored or known resource consumption data. The calculated amount of carbon offset information may be transmitted to a carbon offset function provider, and a carbon offset function provider implements the specified amount of carbon offset based upon the calculated amounts communicated for the completed specified computing task. | 04-29-2010 |
20100107172 | System providing methodology for policy-based resource allocation - A system providing methodology for policy-based resource allocation is described. In one embodiment, for example, a system for allocating computer resources amongst a plurality of applications based on a policy is described that comprises: a plurality of computers connected to one another through a network; a policy engine for specifying a policy for allocation of resources of the plurality of computers amongst a plurality of applications having access to the resources; a monitoring module at each computer for detecting demands for the resources and exchanging information regarding demands for the resources at the plurality of computers; and an enforcement module at each computer for allocating the resources amongst the plurality of applications based on the policy and information regarding demands for the resources. | 04-29-2010 |
20100107173 | Distributing resources in a market-based resource allocation system - Disclosed herein are representative embodiments of methods, apparatus, and systems for distributing a resource (such as electricity) using a resource allocation system. In one exemplary embodiment, a plurality of requests for electricity are received from a plurality of end-use consumers. The requests indicate a requested quantity of electricity and a consumer-requested index value indicative of a maximum price a respective end-use consumer will pay for the requested quantity of electricity. A plurality of offers for supplying electricity are received from a plurality of resource suppliers. The offers indicate an offered quantity of electricity and a supplier-requested index value indicative of a minimum price for which a respective supplier will produce the offered quantity of electricity. A dispatched index value is computed at which electricity is to be supplied based at least in part on the consumer-requested index values and the supplier-requested index values. | 04-29-2010 |
20100107174 | SCHEDULER, PROCESSOR SYSTEM, AND PROGRAM GENERATION METHOD - A scheduler for conducting scheduling for a processor system including a plurality of processor cores and a plurality of memories respectively corresponding to the plurality of processor cores includes: a scheduling section that allocates one of the plurality of processor cores to one of a plurality of process requests corresponding to a process group based on rule information; and a rule changing section that, when a first processor core is allocated to a first process of the process group, changes the rule information and allocates the first processor core to a subsequent process of the process group, and that restores the rule information when a second processor core is allocated to a final process of the process group. | 04-29-2010 |
20100115526 | METHOD AND APPARATUS FOR ALLOCATING RESOURCES IN A COMPUTE FARM - Some embodiments provide a system for allocating resources in a compute farm. During operation, the system can receive resource-requirement information for a project. Next, the system can receive a request to execute a new job in the compute farm. In response to determining that no job slots are available for executing the new job, and that the project associated with the new job has not used up its allocated job slots, the system may execute the new job by suspending or re-queuing a job that is currently executing, and allocating the freed-up job slot to the new job. If the system receives a resource-intensive job, the system may create dummy jobs, and schedule the dummy jobs on the same computer system as the resource-intensive job to prevent the queuing system from scheduling multiple resource-intensive jobs on the same computer system. | 05-06-2010 |
20100115527 | METHOD AND SYSTEM FOR PARALLELIZATION OF PIPELINED COMPUTATIONS - A method of parallelizing a pipeline whose stages operate on a sequence of work items is provided. The method includes allocating an amount of work for each work item, assigning at least one stage to each work item, partitioning the at least one stage into at least one team, partitioning the at least one team into at least one gang, and assigning the at least one team and the at least one gang to at least one processor. Processors, gangs, and teams are juxtaposed near one another to minimize communication losses. | 05-06-2010 |
20100115528 | Software Defined Radio - A method for providing a division of SDR RA into operational states is described. The method includes, in a device including a plurality of shared device resources and a plurality of RAs, receiving, from a first RA, a request to change a state of the first RA to a requested active state. The requested active state is one of a plurality of potential active states for the first RA and each potential active state has an associated set of device resource requirements. The method also includes determining whether sufficient device resources exist for the requested active state based at least in part on currently allocated device resources. In response to a determination that sufficient device resources exist, the change to the requested active state for the first RA is approved. Apparatus and computer readable media are also described. | 05-06-2010 |
20100122261 | APPLICATION LEVEL PLACEMENT SCHEDULER IN A MULTIPROCESSOR COMPUTING ENVIRONMENT - A multiprocessor computer system program scheduler comprises an application-level placement scheduler module that is operable to receive requests for resources in a multiprocessor computer system, operable to manage processing node resource availability data; operable to reserve processing node resources for specific applications based on the received requests for resources and the processing node resource availability data; and operable to reclaim processing node resources reserved for specific applications upon application termination. | 05-13-2010 |
20100122262 | Method and Apparatus for Dynamic Allocation of Processing Resources - A method and apparatus for dynamic allocation of processing resources and tasks, including multimedia tasks. Tasks are queued, available processing resources are identified, and the available processing resources are allocated among the tasks. The available processing resources are provided with functional programs corresponding to the tasks. The tasks are performed using the available processing resources to produce resulting data, and the resulting data is passed to an input/output device. | 05-13-2010 |
20100125851 | APPARATUS, METHOD, AND SYSTEM TO PROVIDE A MULTI-CORE PROCESSOR FOR AN ELECTRONIC GAMING MACHINE (EGM) - An electronic gaming machine (EGM) implements a multi-core processor. A first of the processor cores is adapted to perform or otherwise control a first set of operations. The first set of operations can include, for example, game manager operations and other operations of the EGM that are more time-sensitive. A second one of the processor cores is adapted to perform or otherwise control a second set of operations. The second set of operations can include, for example, operations related to multimedia presentation associated with the running/playing of a game and/or other operations of the EGM that are not time-sensitive or are otherwise less time-sensitive than the operations performed/controlled by the first processor core. Each of the processor cores may run an operating system that matches the needs of its respective processor core. | 05-20-2010 |
20100131956 | METHODS AND SYSTEMS FOR MANAGING PROGRAM-LEVEL PARALLELISM - Methods and systems for managing program-level parallelism in a multi-core processor environment are provided. The methods for managing parallel execution of processes associated with computer programs include providing an agent process in an application space, which is operatively coupled to an operating system having a kernel configured to determine processor configuration information. The application space may be a runtime environment or a user space of the operating system, and has a lower privilege level than the kernel. The agent process retrieves the processor configuration information from the kernel, and after receiving a request for the processor configuration information from application processes running in the application space, the agent process provides a response to the requesting application process. The agent process may also generate resource availability data based on the processor configuration information, and the application processes may initiate a thread based on the resource availability data. | 05-27-2010 |
20100131957 | VIRTUAL COMPUTER SYSTEM AND ITS OPTIMIZATION METHOD - Optimization of resource allocation in a virtual computer system is efficiently performed according to a method consistent with a virtualization design concept. The virtual computer system includes a plurality of virtual devices that share the physical resources of a computer and execute an application, a virtualization section that manages the plurality of virtual devices, and a management section that controls the virtualization section. The plurality of virtual devices set allocation of physical resources to the applications by a first optimization calculation using resource supply information from the management section and transmit resource request information corresponding to the resource allocation setting to the management section. The management section sets allocation of the physical resources to the virtual devices by a second optimization calculation using the resource request information from the plurality of virtual devices and transmits resource supply information corresponding to the resource allocation setting to the plurality of virtual devices. While the resource supply information and the resource request information are exchanged between the plurality of virtual devices and the management section, the first and second optimization calculations are performed, thereby dynamically allocating the physical resources. | 05-27-2010 |
20100131958 | Method, A Mechanism and a Computer Program Product for Executing Several Tasks in a Multithreaded Processor - The invention relates to a method for executing several tasks in a multithreaded (MT) processor, each task having, for every hardware shared resource from a predetermined set of hardware shared resources in the MT processor, one associated artificial time delay that is introduced when said task accesses said hardware shared resource, the method comprising step (a) of establishing, for every hardware shared resource and each task to be artificially delayed, the artificial delay to be applied to each access of said task to said hardware shared resource; step (b) of performing the following steps (b | 05-27-2010 |
20100138840 | SYSTEM AND METHOD FOR ACCELERATING INPUT/OUTPUT ACCESS OPERATION ON A VIRTUAL MACHINE - A system and method for accelerating input/output (IO) access operation on a virtual machine. The method comprises providing a smart IO device that includes an unrestricted command queue (CQ) and a plurality of restricted CQs and allowing a guest domain to directly configure and control IO resources through a respective restricted CQ, the IO resources allocated to the guest domain. In preferred embodiments, the allocation of IO resources to each guest domain is performed by a privileged virtual switching element. In some embodiments, the smart IO device is an HCA and the privileged virtual switching element is a Hypervisor. | 06-03-2010 |
20100146513 | Software-based Thread Remapping for Power Savings - On a multi-core processor that supports simultaneous multi-threading, the power state for each logical processor is tracked. Upon indication that a logical processor is ready to transition into a deep low power state, software remapping (e.g., thread-hopping) may be performed. Accordingly, if multiple logical processors, on different cores, are in a low-power state, they are re-mapped to the same core and the core is then placed into a low power state. Other embodiments are described and claimed. | 06-10-2010 |
20100146514 | TEST MANAGEMENT SYSTEM AND METHOD - An execution management method includes providing an execution plan, balancing an execution load across a plurality of servers, automatically interpreting the execution plan, and re-driving a failed test to another of the plurality of servers if the test case fails on an originally selected available server. The execution plan includes a plurality of test cases and criteria corresponding to the test cases. More than one of the plurality of test cases may be run on each of the plurality of servers at a same time in parallel. Each of the plurality of servers is run independently. | 06-10-2010 |
20100146515 | Support of Non-Trivial Scheduling Policies Along with Topological Properties - A system and method for scheduling jobs in a multiprocessor machine is disclosed. The status of resources, including CPUs on node boards and associated shared memory, in the multiprocessor machine is periodically determined. The status can indicate the resources available to execute jobs. This information is accumulated by the topology-monitoring unit and provided to the topology library. The topology library also receives a candidate host list from the scheduling unit which lists all of the resources available to execute the job being scheduled based on non-trivial scheduling. The topology library unit then uses this to generate a free map F indicative of the interconnection of the resources available to execute the job. The topology monitoring unit then matches the jobs to the resources available to execute the jobs, based on resource requirements including shape requirements indicative of interconnections of resources required to execute the job. The topology monitoring unit dispatches the job to the portion of the free map F which matches the shape requirements of the job. If the topology library unit determines that no resources are available to execute the job, the topology library unit will return the job to the scheduling unit, which will wait until the resources become available. The free map F may include resources which have been suspended or reserved in previous scheduling cycles, provided the job to be scheduled satisfies the predetermined criteria for execution on resources that have been suspended, have a lower priority, or are reserved. | 06-10-2010 |
20100153958 | SYSTEM, METHOD, AND COMPUTER-READABLE MEDIUM FOR APPLYING CONDITIONAL RESOURCE THROTTLES TO FACILITATE WORKLOAD MANAGEMENT IN A DATABASE SYSTEM - A system, method, and computer-readable medium that facilitate workload management in a computer system are provided. A workload's system resource consumption is adjusted against a target consumption level thereby facilitating maintenance of the consumption to the target consumption within an averaging interval by dynamically controlling workload concurrency levels. System resource consumption is compensated during periods of over or under-consumption by adjusting workload consumption to a larger averaging interval. Further, mechanisms for limiting, or banding, dynamic concurrency adjustments to disallow workload starvation or unconstrained usage at any time are provided. Disclosed mechanisms provide for category of work prioritization goals and subject-area resource division management goals, allow for unclaimed resources due to a lack of demand from one workload to be used by active workloads to yield full system utilization at all times, and provide for monitoring success in light of the potential relative effects of workload under-demand, and under/over-consumption management. | 06-17-2010 |
20100153959 | CONTROLLING AND DYNAMICALLY VARYING AUTOMATIC PARALLELIZATION - A system and method for automatically controlling run-time parallelization of a software application. A buffer is allocated during execution of program code of an application. When a point in program code near a parallelized region is reached, demand information is stored in the buffer in response to reaching a predetermined first checkpoint. Subsequently, the demand information is read from the buffer in response to reaching a predetermined second checkpoint. Allocation information corresponding to the read demand information is computed and stored in the buffer for the application to access later. The allocation information is read from the buffer in response to reaching a predetermined third checkpoint, and the parallelized region of code is executed in a manner corresponding to the allocation information. | 06-17-2010 |
20100153960 | METHOD AND APPARATUS FOR RESOURCE MANAGEMENT IN GRID COMPUTING SYSTEMS - A method for resource management in grid computing systems includes defining a user's demands on execution of a task as SLA (Service Level Agreements) information; monitoring states of resources in a grid to store the states as resource state information; calculating for each resource in the grid, based on the resource state information, an expected completion time of the task and an expected profit to be obtained by completing the task; creating an available resource cluster by using the expected completion time and the expected profit; and determining, if the SLA information is satisfied by the available resource cluster, a task processing policy for executing the task by using at least one resource in the available resource cluster. The available resource cluster is a set of resources having the expected completion time within a deadline of the task and the expected profit being positive. | 06-17-2010 |
20100153961 | STORAGE SYSTEM HAVING PROCESSOR AND INTERFACE ADAPTERS THAT CAN BE INCREASED OR DECREASED BASED ON REQUIRED PERFORMANCE - A storage system is comprised of an interface unit | 06-17-2010 |
20100162256 | OPTIMIZATION OF APPLICATION POWER CONSUMPTION AND PERFORMANCE IN AN INTEGRATED SYSTEM ON A CHIP - A method for determining an operating point of a shared resource. The method includes receiving indications of access demand to a shared resource from each of a plurality of functional units and determining a maximum access demand from among the plurality of functional units based on their respective indications. The method further includes determining a required operating point of the shared resource based on the maximum access demand, wherein the shared resource is shared by each of the plurality of functional units, comparing the required operating point to a present operating point of the shared resource, and changing to the required operating point from the present operating point if the required and present operating points are different. | 06-24-2010 |
20100162257 | METHOD AND APPARATUS FOR PROVIDING RESOURCE ALLOCATION POLICY - A method and apparatus for providing a resource allocation policy in a network are disclosed. For example, the method constructs a queuing model for each application. The method defines a utility function for each application and for each transaction type of each application, and defines an overall utility in a system. The method performs an optimization to identify an optimal configuration that maximizes the overall utility for a given workload, and determines one or more adaptation policies for configuring the system in accordance with the optimal configuration. | 06-24-2010 |
20100162258 | ELECTRONIC SYSTEM WITH CORE COMPENSATION AND METHOD OF OPERATION THEREOF - A method of operation of an electronic system is provided including operating an integrated circuit device having a first core and a second core; detecting a first latency value between the first core and the second core; storing the first latency value in the first core; and compensating for the first latency value in the first core for a first transfer between the first core and the second core. | 06-24-2010 |
20100162259 | VIRTUALIZATION-BASED RESOURCE MANAGEMENT APPARATUS AND METHOD AND COMPUTING SYSTEM FOR VIRTUALIZATION-BASED RESOURCE MANAGEMENT - A computing system for virtualization-based resource management includes a plurality of physical machines, a plurality of virtual machines and a management virtual machine. The virtual machines are configured by virtualizing each of the plurality of physical machines. The management virtual machine is located at any one of the plurality of physical machines. The management virtual machine monitors amounts of network resources utilized by the plurality of physical machines and time costs of the plurality of virtual machines, and performs a resource reallocation and a resource reclamation. | 06-24-2010 |
20100169891 | METHOD AND APPARATUS FOR LOCATING LOAD-BALANCED FACILITIES - A method and apparatus for providing a facility location plan for a network with a V-shaped facility cost are disclosed. For example, the method receives an event from a queue, wherein the event comprises an open event or a tight event. The method connects a plurality of adjacent clients to a facility, if the event comprises the open event, and adds a new client-facility edge to a graph comprising a plurality of client-facility edges, if the event comprises the tight event. | 07-01-2010 |
20100175068 | LIMITING THE AVAILABILITY OF COMPUTATIONAL RESOURCES TO A DEVICE TO STIMULATE A USER OF THE DEVICE TO APPLY NECESSARY UPDATES - Provided are a method, system, and article of manufacture for limiting the availability of computational resources to a device to stimulate a user of the device to apply necessary updates. Indication of an update to the device is received and a determination is made as to whether the update has been applied to the device. The availability of computational resources at the device to use to execute processes at the device is limited in response to determining that the update has not been applied to the device. Processes are executed at the device using the limited available computational resources after the limiting of the availability of the computational resources. A determination is made as to whether the update has been applied to the device after limiting the availability of the computational resources. The limiting of the availability of the computational resources at the device is reversed in response to determining that the update to the device was applied. | 07-08-2010 |
20100175069 | DATA PROCESSING DEVICE, SCHEDULER, AND SCHEDULING METHOD - The present invention comprises: a unit time calculating unit for calculating, as a unit time, the greatest common denominator of the individual operating cycles of a plurality of programs; an allocating unit for allocating the individual operating cycles of the plurality of programs into each of a plurality of continuous base periods that each have their respective unit times, in sequence beginning with the shortest operating cycle, and for allocating the operating cycles of remaining programs for which the operations have not been completed during one of the plurality of base periods into remaining base periods, in sequence beginning with the shortest operating cycles; and an operating unit for running the plurality of programs that are allocated to operating times. | 07-08-2010 |
20100180280 | SYSTEM AND METHOD FOR BATCH RESOURCE ALLOCATION - A system for configuring resources in an environment for use by at least one process. In one embodiment, the system includes: (1) a process sorter configured to rank the at least one process based on numbers of resources that steps in the at least one process can use, (2) an optimizer coupled to the process sorter and configured to employ an optimization heuristic to accumulate feasible allocations of resources to the steps based on the ranking of the at least one process, (3) a resource sorter coupled to the optimizer and configured to rank the resources in a non-decreasing order based on numbers of the steps in which the resources can be used, the optimizer further configured to remove one of the resources from consideration based on the ranking of the resources until infeasibility occurs and (4) an environment configuration interface configured to allow the environment to be configured in accordance with remaining ones of the resources. | 07-15-2010 |
20100186017 | System and method for medical image processing - An embodiment of the present invention provides a system and method for medical image processing. The proposed system includes a grid computing framework adapted for receiving patient data including one or more patient-scan images from an end-user application, and for scheduling image processing tasks to a plurality of nodes of a grid computing network. Each of the nodes includes a central processing unit and at least one of the nodes includes programmable graphics processing unit hardware. The proposed system further includes a second framework for image processing using graphics processing unit that is operative on each node of the network. The second framework operative on any node is adapted to execute the image processing task scheduled to that node based upon the availability of graphics processing unit hardware in that node. When graphics processing unit hardware is available in the node, the second framework is adapted to execute the task on the graphics processing unit of the node using stream computation. When graphics processing unit hardware is not available in the node, the second framework is adapted to execute the task on the central processing unit of the node. | 07-22-2010 |
20100186018 | OFF-LOADING OF PROCESSING FROM A PROCESSOR BLADE TO STORAGE BLADES - A processor blade determines whether a selected processing task is to be off-loaded to a storage blade for processing. The selected processing task is off-loaded to the storage blade via a planar bus communication path, in response to determining that the selected processing task is to be off-loaded to the storage blade. The off-loaded selected processing task is processed in the storage blade. The storage blade communicates the results of the processing of the off-loaded selected processing task to the processor blade. | 07-22-2010 |
20100186019 | DYNAMIC RESOURCE ADJUSTMENT FOR A DISTRIBUTED PROCESS ON A MULTI-NODE COMPUTER SYSTEM - A method dynamically adjusts the resources available to a processing unit of a distributed computer process executing on a multi-node computer system. The resources for the processing unit are adjusted based on the data other processing units handle or the execution path of code in an upstream or downstream processing unit in the distributed process or application. | 07-22-2010 |
20100192155 | SCHEDULING FOR PARALLEL PROCESSING OF REGIONALLY-CONSTRAINED PLACEMENT PROBLEM - Scheduling of parallel processing for regionally-constrained object placement selects between different balancing schemes. For a small number of movebounds, computations are assigned by balancing the placeable objects. For a small number of objects per movebound, computations are assigned by balancing the movebounds. If there are large numbers of movebounds and objects per movebound, both objects and movebounds are balanced amongst the processors. For object balancing, movebounds are assigned to a processor until an amortized number of objects for the processor exceeds a first limit above an ideal number, or the next movebound would raise the amortized number of objects above a second, greater limit. For object and movebound balancing, movebounds are sorted into descending order, then assigned in the descending order to host processors in successive rounds while reversing the processor order after each round. The invention provides a schedule in polynomial-time while retaining high quality of results. | 07-29-2010 |
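The round-based assignment this abstract describes — movebounds sorted into descending order, then dealt to processors in successive rounds while reversing the processor order after each round — resembles a serpentine (boustrophedon) balancing pass. A minimal sketch in Python; all names are hypothetical illustrations, not taken from the patent:

```python
def serpentine_assign(movebound_sizes, num_procs):
    """Assign movebounds (indexed by position in movebound_sizes,
    sorted descending by object count) to processors in successive
    rounds, reversing processor order after each round."""
    order = sorted(range(len(movebound_sizes)),
                   key=lambda i: movebound_sizes[i], reverse=True)
    buckets = [[] for _ in range(num_procs)]
    loads = [0] * num_procs
    forward = True
    for start in range(0, len(order), num_procs):
        round_ids = order[start:start + num_procs]
        procs = range(num_procs) if forward else range(num_procs - 1, -1, -1)
        for mb, p in zip(round_ids, procs):
            buckets[p].append(mb)
            loads[p] += movebound_sizes[mb]
        forward = not forward  # serpentine: reverse direction each round
    return buckets, loads
```

The direction reversal pairs a heavy pick in one round with a light pick in the next, which keeps per-processor object counts close without any search.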
20100192156 | TECHNIQUE FOR CONSERVING SOFTWARE APPLICATION RESOURCES - Systems and methods of adjusting allocated hardware resources to support a running software application are disclosed. A system includes adjustment logic to adjust an allocation of a first hardware resource to support a running software application. Measurement logic measures at least one hardware resource metric associated with the first hardware resource. Service level logic calculates an application service level based on the measured at least one hardware resource metric. When the first application service level satisfies a threshold application service level, the allocation of the first hardware resource is iteratively reduced to reach a reduced allocation level where the application service level does not satisfy the threshold application service level. In response thereto, the allocation of the first hardware resource is increased by an increment, such that the application service level again satisfies the threshold application service level. | 07-29-2010 |
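The reduce-until-failure, then restore-one-increment loop in this abstract can be sketched as a simple control loop. Here `service_level` is a hypothetical measurement callback standing in for the patent's measurement and service-level logic:

```python
def conserve_allocation(alloc, step, service_level, threshold):
    """Iteratively reduce the hardware allocation while the measured
    application service level still satisfies the threshold; once it
    no longer does, add back one increment so it satisfies it again."""
    while alloc > step and service_level(alloc) >= threshold:
        alloc -= step
    if service_level(alloc) < threshold:
        alloc += step  # restore one increment
    return alloc
```

With a toy model where the service level equals the allocation, starting at 100 with step 10 and threshold 50, the loop settles at 50 — the smallest allocation that still meets the threshold.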
20100192157 | On-Demand Compute Environment - An on-demand compute environment comprises a plurality of nodes within an on-demand compute environment available for provisioning and a slave management module operating on a dedicated node within the on-demand compute environment, wherein upon instructions from a master management module at a local compute environment, the slave management module modifies at least one node of the plurality of nodes. | 07-29-2010 |
20100199285 | VIRTUAL MACHINE UTILITY COMPUTING METHOD AND SYSTEM - An analytics engine receives real-time statistics from a set of virtual machines supporting a line of business (LOB) application. The statistics relate to computing resource utilization and are used by the analytics engine to generate a prediction of demand for the LOB application in order to dynamically control the provisioning of virtual machines to support the LOB application. | 08-05-2010 |
20100205608 | Mechanism for Managing Resource Locking in a Multi-Threaded Environment - A mechanism is disclosed for implementing resource locking in a massively multi-threaded environment. The mechanism receives from a stream a request to obtain a lock on a resource. In response, the mechanism determines whether the resource is currently locked. If so, the mechanism adds the stream to a wait list. At some point, based upon the wait list, the mechanism determines that it is the stream's turn to lock the resource; thus, the mechanism grants the stream a lock. In this manner, the mechanism enables the stream to reserve and to obtain a lock on the resource. By implementing locking in this way, a stream is able to submit only one lock request. When it is its turn to obtain a lock, the stream is granted that lock. This lock reservation methodology makes it possible to implement resource locking efficiently in a massively multi-threaded environment. | 08-12-2010 |
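The single-request lock reservation scheme this abstract describes — one lock request per stream, with waiting streams queued and granted the lock in turn — can be sketched as a FIFO wait list. A minimal single-threaded model, with names invented for illustration:

```python
from collections import deque

class WaitListLock:
    """Sketch of a wait-list lock: a stream submits one request;
    if the resource is locked, the stream is added to the wait list
    and is granted the lock when its turn comes."""
    def __init__(self):
        self.owner = None
        self.wait_list = deque()

    def request(self, stream):
        # Grant immediately if the resource is free,
        # otherwise reserve a place on the wait list.
        if self.owner is None:
            self.owner = stream
            return True
        self.wait_list.append(stream)
        return False

    def release(self):
        # Hand the lock to the next waiting stream, if any.
        self.owner = self.wait_list.popleft() if self.wait_list else None
        return self.owner
```

The point of the reservation is that a waiting stream never re-polls: its one request doubles as its place in line.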
20100211956 | METHOD AND SYSTEM FOR CONTINUOUS OPTIMIZATION OF DATA CENTERS BY COMBINING SERVER AND STORAGE VIRTUALIZATION - The invention provides a method and system for continuous optimization of a data center. The method includes monitoring loads of storage modules, server modules and switch modules in the data center, detecting an overload condition upon a load exceeding a load threshold, combining server and storage virtualization to address storage overloads by planning allocation migration between the storage modules, to address server overloads by planning allocation migration between the server modules, to address switch overloads by planning allocation migration mix between server modules and storage modules for overload reduction, and orchestrating the planned allocation migration to reduce the overload condition in the data center. | 08-19-2010 |
20100211957 | SCHEDULING AND ASSIGNING STANDARDIZED WORK REQUESTS TO PERFORMING CENTERS - Techniques for allocating work requests to performing centers include generating options for assigning the work requests to the performing centers. The options are based upon predetermined historical factors capturing work request characteristics and performing center characteristics. For each of the options, the work requests are scheduled to determine a corresponding duration of the work requests, and an overall cost is computed. One of the options is selected based on the overall cost and the corresponding duration. | 08-19-2010 |
20100218192 | SYSTEM AND METHOD TO ALLOCATE RESOURCES IN SERVICE ORGANIZATIONS WITH NON-LINEAR WORKFLOWS - A method can include determining a number of cases received (e.g., a case load), a number of cases processed (e.g., a case rate), and dividing the case load by the case rate to determine a resource demand. The resource demand can be compared to a resource allocation, and the resource allocation can be changed based upon the resource demand. An information handling system can include a processor and a memory. The memory can have code stored therein, wherein the code can include instructions, which, when executed by the processor, allow the information handling system to perform part or substantially all of the method. | 08-26-2010 |
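The demand calculation in this abstract is a simple ratio: cases received divided by cases processed per period. A sketch of that arithmetic plus one possible adjustment rule (the adjustment policy here is an assumption for illustration, not the patent's):

```python
def resource_demand(cases_received, cases_processed_per_period):
    """Demand estimate per the abstract: case load divided by case rate."""
    return cases_received / cases_processed_per_period

def adjust_allocation(current_alloc, demand):
    # Hypothetical rule: raise the allocation when demand exceeds it,
    # otherwise leave it unchanged.
    return demand if demand > current_alloc else current_alloc
```

For example, 120 cases received against a rate of 30 cases per period implies a demand of 4 resource units.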
20100218193 | RESOURCE ALLOCATION FAILURE RECOVERY MODULE OF A DISK DRIVER - A method of resource allocation failure recovery is disclosed. The method generally includes steps (A) to (E). Step (A) may generate a plurality of resource requests from a plurality of driver modules to a manager module executed by a processor. Step (B) may generate a plurality of first calls from the manager module to a plurality of allocation modules in response to the resource requests. Step (C) may allocate a plurality of resources to the driver modules using the allocation modules in response to the first calls. Step (D) may allocate a portion of a memory pool to a particular recovery packet using the manager module in response to the allocation modules signaling a failed allocation of a particular one of the resources. Step (E) may recover from the failed allocation using the particular recovery packet. | 08-26-2010 |
20100218194 | SYSTEM AND METHOD FOR THREAD SCHEDULING IN PROCESSORS - A method for controlling a data processing system, a data processing system executing a similar method, and a computer readable medium with instructions for a similar method. The method includes receiving, by an operating system executing on a data processing system, an execution request from an application, the execution request including at least one resource-defining attribute corresponding to an execution thread of the application. The method also includes allocating processor resources to the execution thread by the operating system according to the at least one resource-defining attribute, and allowing execution of the execution thread on the data processing system according to the allocated processor resources. | 08-26-2010 |
20100223619 | VISUALIZATION-CENTRIC PERFORMANCE-BASED VOLUME ALLOCATION - A method, system, and computer program product for visualization-centric performance-based volume allocation in a data storage system using a processor in communication with a memory device is provided. A unified resource graph representative of a global hierarchy of storage components in the data storage system, including each of a plurality of storage controllers, is generated. The unified resource graph includes a common root node and a plurality of subtree nodes corresponding to each of a plurality of nodes internal to the plurality of storage controllers. The common root node and the plurality of subtree nodes are ordered in a top-down orientation. Scalable volume provisioning of an existing or new workload amount by graphical manipulation of at least one of the storage components represented by the unified resource graph is performed based on an input. | 09-02-2010 |
20100223620 | SMART RECOVERY OF ASYNCHRONOUS PROCESSING - Systems, methods, and computer program products are described that are capable of recovering an asynchronous process after an error occurs with respect to the process. For example, the process may be re-initiated upon detection of the error. The re-initiated process is capable of not repeating tasks of the process that were completed prior to the occurrence of the error. | 09-02-2010 |
20100229175 | Moving Resources In a Computing Environment Having Multiple Logically-Partitioned Computer Systems - As needs of a computer system grow, further logically-partitioned computer systems may be added to allow for more partitions to be created. When new partitions are added, or when an entire computing environment analysis is commenced, it may be discovered that better system efficiency may be had if the resources or computational work in a first partition in a first computer is moved to a second partition in the first computer. It may also be determined that better system efficiency may be had if the resources or computational work in the first partition in the first computer is moved to a third partition in a second computer. | 09-09-2010 |
20100229176 | Distribute Accumulated Processor Utilization Charges Among Multiple Threads - A utilization analyzer acquires accumulator values from multiple accumulators. Each accumulator corresponds to a particular processor thread and also corresponds to a particular processor utilization resource register (PURR). The utilization analyzer identifies, from the multiple accumulators, a combination of equal accumulators that each includes a largest accumulator value. Next, the utilization analyzer selects a subset of processor utilization resource registers from a combination of processor utilization resource registers that correspond to the combination of equal accumulators. The subset of processor utilization resource registers omits at least one processor utilization resource register from the combination of utilization resource registers. In turn, the utilization analyzer increments each of the subset of utilization resource registers. | 09-09-2010 |
20100229177 | Reducing Remote Memory Accesses to Shared Data in a Multi-Nodal Computer System - Disclosed is an apparatus, method, and program product for identifying and grouping threads that have interdependent data access needs. The preferred embodiment of the present invention utilizes two different constructs to accomplish this grouping. A Memory Affinity Group (MAG) is disclosed. The MAG construct enables multiple threads to be associated with the same node without any foreknowledge of which threads will be involved in the association, and without any control over the particular node with which they are associated. A Logical Node construct is also disclosed. The Logical Node construct enables multiple threads to be associated with the same specified node without any foreknowledge of which threads will be involved in the association. While logical nodes do not explicitly identify the underlying physical nodes comprising the system, they provide a means of associating particular threads with the same node and other threads with other node(s). | 09-09-2010 |
20100229178 | STREAM DATA PROCESSING METHOD, STREAM DATA PROCESSING PROGRAM AND STREAM DATA PROCESSING APPARATUS - Once data stagnation occurs in a query group which groups queries, a scheduler of a server apparatus calculates an estimated load value of each query forming the query group based on at least one of input flow rate information and latency information of the query. The scheduler divides the queries of the query group into a plurality of query groups so that the sum of estimated load values of queries belonging to one query group becomes substantially equal to the sum of estimated load values of queries belonging to another query group. The divided query groups are reallocated to different processors respectively. Throughput in query processing of stream data in a stream data processing system can be improved. | 09-09-2010 |
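Dividing a query group so that each resulting group carries a roughly equal sum of estimated load values can be approximated with a greedy partition: assign each query, heaviest first, to the currently lightest group. This is one common heuristic for the stated goal, not necessarily the patent's own method:

```python
def split_query_group(est_loads, num_groups=2):
    """Greedily divide queries (indexed by position in est_loads)
    into num_groups groups with roughly equal summed estimated load."""
    groups = [[] for _ in range(num_groups)]
    totals = [0.0] * num_groups
    # Heaviest-first assignment to the lightest group so far.
    for q in sorted(range(len(est_loads)),
                    key=lambda i: est_loads[i], reverse=True):
        g = totals.index(min(totals))
        groups[g].append(q)
        totals[g] += est_loads[q]
    return groups, totals
```

Each group could then be reallocated to a different processor, as the abstract describes.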
20100229179 | SYSTEM AND METHOD FOR SCHEDULING THREAD EXECUTION - A method is described that comprises suspending a currently executing thread at a periodic time interval, calculating a next time slot during which the currently executing thread is to resume execution, appending the suspended thread to a queue of threads scheduled for execution at the calculated time slot, and updating an index value of a pointer index to a next sequential non-empty time slot, where the pointer index references time slots within an array of time slots, and where each of the plurality of time slots corresponds to a timeslice during which CPU resources are allocated to a particular thread. The method further comprises removing any contents of the indexed non-empty time slot and appending the removed contents to an array of threads requesting immediate CPU resource allocation and activating the thread at the top of the array of threads requesting immediate CPU resource allocation as a currently running thread. | 09-09-2010 |
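The slot-array structure this abstract describes — suspended threads appended to the queue of their computed time slot, with a pointer index advanced to the next non-empty slot whose contents are drained into the run set — is essentially a timing wheel. A minimal sketch, with all names hypothetical:

```python
class TimingWheel:
    """Array of time slots; each slot holds a queue of threads
    scheduled to resume during that timeslice."""
    def __init__(self, num_slots):
        self.slots = [[] for _ in range(num_slots)]
        self.index = 0  # pointer index into the slot array

    def schedule(self, thread, slots_ahead):
        # Append the suspended thread to its calculated future slot.
        slot = (self.index + slots_ahead) % len(self.slots)
        self.slots[slot].append(thread)

    def next_runnable(self):
        # Advance the pointer to the next non-empty slot, remove its
        # contents, and return them as threads requesting CPU time.
        for step in range(len(self.slots)):
            i = (self.index + step) % len(self.slots)
            if self.slots[i]:
                self.index = i
                ready, self.slots[i] = self.slots[i], []
                return ready
        return []
```

Skipping empty slots keeps the dispatch cost proportional to occupied timeslices rather than to the wheel size.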
20100235843 | IMPROVEMENTS RELATING TO DISTRIBUTED COMPUTING - There is provided a computer-implemented method of allocating a task to a set of distributed computing resources ( | 09-16-2010 |
20100235844 | DISCOVERING AND IDENTIFYING MANAGEABLE INFORMATION TECHNOLOGY RESOURCES - Allocating resource discovery and identification processes among a plurality of management tools and resources in a distributed and heterogeneous information technology (IT) management system by providing at least one authoritative manageable resource having minimal or no responsibility for reporting its identity, minimal or no responsibility for advertising any lifecycle-related creation event for the resource, and minimal or no responsibility for advertising any lifecycle-related destruction event for the resource. A services oriented architecture (SOA) defines one or more services needed to manage the resource within the management system. A component model defines one or more interfaces and one or more interactions to be implemented by the manageable resource within the management system. | 09-16-2010 |
20100242042 | Method and apparatus for scheduling work in a stream-oriented computer system - An apparatus and method for scheduling stream-based applications in a distributed computer system includes a scheduler configured to schedule work using three temporal levels. Each temporal level includes a method. A macro method is configured to schedule jobs that will run, in a highest temporal level, in accordance with a plurality of operation constraints to optimize importance of work. A micro method is configured to fractionally allocate, at a medium temporal level, processing elements to processing nodes in the system to react to changing importance of the work. A nano method is configured to revise, at a lowest temporal level, fractional allocations on a continual basis. | 09-23-2010 |
20100242043 | Computer-Implemented Systems For Resource Level Locking Without Resource Level Locks - Computer-implemented systems and methods regulate access to a plurality of resources in a pool of resources without requiring individual locks associated with each resource. Access to one of the plurality of resources is requested, where a resource queue for managing threads waiting to access a resource is associated with each of the plurality of resources. A resource queue lock associated with the resource is acquired, where a resource queue lock is associated with multiple resources. | 09-23-2010 |
20100242044 | ADAPTABLE SOFTWARE RESOURCE MANAGERS BASED ON INTENTIONS - User intentions can be derived from observations of user actions or they can be programmatically specified by an application or component that is performing an action. The intentions can then be utilized to adjust the operation of resource managers to better suit the actions being performed by the user or application, especially if such actions are not “typical”. Resource managers can inform a centralized intention manager of environmental constraints, including constraints on the resources they manage and constraints on their operation, such as various, pre-programmed independent modes of operation optimized for differing circumstances. The intention manager can then instruct the resource managers in accordance with these environmental constraints when the intention manager is made aware of the intentions. If no further optimization can be achieved, specified intentions may not result in directives from the intention manager to the resource managers. | 09-23-2010 |
20100242045 | METHOD AND SYSTEM FOR ALLOCATING A DISTRIBUTED RESOURCE - A method for migrating a virtual machine executing on a host. The method involves monitoring, by a monitoring agent connected to a device driver, hosts in a network, wherein the device driver is connected to a network interface card, determining a virtual machine to be migrated based on a virtual machine policy, sending, by the host, a request to migrate to at least one of a plurality of target hosts in the network, receiving an acceptance to the request to migrate from at least one of the plurality of target hosts, determining, by the monitoring agent, a chosen target host to receive the virtual machine based on a migration policy, wherein the chosen target host is one of the at least one target hosts that sent the acceptance, sending a confirmation and historical information to the chosen target host, and migrating the virtual machine to the chosen target host. | 09-23-2010 |
20100242046 | MULTICORE PROCESSOR SYSTEM, SCHEDULING METHOD, AND COMPUTER PROGRAM PRODUCT - A multicore processor system includes: a plurality of software units, each of which executes predetermined processing using one or more cores among a plurality of cores of a multicore processor; and a scheduler that performs adjustment of allocation of the cores of the multicore processor to each of the software units and core occupation time of each of the software units to cause the software units to operate in parallel. Each of the software units outputs execution result data of the predetermined processing to an output buffer and issues notification based on an accumulated amount of the execution result data, which is output to the output buffer by the software unit, to the scheduler. The scheduler adjusts, based on the received notification, any one of a number of cores allocated to each of the software units and core occupation time of each of the software units or both. | 09-23-2010 |
20100242047 | DISTRIBUTED PROCESSING SYSTEM, CONTROL UNIT, AND CLIENT - A distributed processing system includes a client that makes a request for execution of a service requested by a user, a processing element, and a control unit connected with the client and the processing element. The control unit has control functions for controlling the distributed processing system, and the client has at least one control function that is same as one of the control functions of the control unit. With respect to at least one control function that both the control unit and the client have, at least one of the control function of the control unit and the control function of the client is selected to execute a control. | 09-23-2010 |
20100242048 | RESOURCE ALLOCATION SYSTEM - The present invention provides a resource allocation system, including providing a workstation session manager in a workstation, coupling a resource schedule manager to the workstation session manager, coupling a disk drive storage system to the resource schedule manager, and provisioning a workflow process on the disk drive storage system utilizing the resource schedule manager. | 09-23-2010 |
20100251252 | POLICY MANAGEMENT FRAMEWORK IN MANAGED SYSTEMS ENVIRONMENT - A method, system, and computer program product for implementing policies in a managed systems environment is provided. A plurality of the heterogeneous entities is organized into a system resource group (SRG). Each of the plurality of heterogeneous entities is visible to an application operable on the managed systems environment. The system resource group is subject to at least one membership requirement, defines a relationship between at least two of the heterogeneous entities, contains at least one policy defining an operation as to be performed on the system resource group for a domain of the managed systems environment, and defines at least a portion of a policy framework between the system resource group and an additional system resource group organized from an additional plurality of the heterogeneous entities. The system resource group expands according to an action performed incorporating the relationship, policy, or policy framework. | 09-30-2010 |
20100251253 | PRIORITY-BASED MANAGEMENT OF SYSTEM LOAD LEVEL - Systems, methods, and computer program products are described herein for managing computer system resources. A plurality of modules (e.g., virtual machines or other applications) may be allocated across multiple computer system resources (e.g., processors, servers, etc.). Each module is assigned a priority level. Furthermore, a designated utilization level is assigned to each resource of the computer system. Each resource supports one or more of the modules, and prioritizes operation of the supported modules according to the corresponding assigned priority levels. Furthermore, each resource maintains operation of the supported modules at the designated utilization level. | 09-30-2010 |
20100251254 | INFORMATION PROCESSING APPARATUS, STORAGE MEDIUM, AND STATE OUTPUT METHOD - An apparatus for controlling divided operation environments includes a first acquiring unit that acquires a first processing amount indicating an amount of hardware resources allocated to each of the operation environments, a second acquiring unit that acquires a second processing amount which varies depending on an application program executed by the operation environment, a calculating unit that calculates a third processing amount of each of the operation environments on the basis of a difference between the first processing amount of each operation environment acquired by the first acquiring unit and the second processing amount of each operation environment acquired by the second acquiring unit; and an output unit that outputs a state of each of the operation environments on the basis of the third processing amount of each operation environment calculated by the calculating unit and the second processing amount of each operation environment acquired by the second acquiring unit. | 09-30-2010 |
20100251255 | SERVER DEVICE, COMPUTER SYSTEM, RECORDING MEDIUM AND VIRTUAL COMPUTER MOVING METHOD - A server device which operates a plurality of virtual computers so as to respectively correspond to a plurality of terminal devices to which physical devices are connected, the server device includes a judging unit that judges whether move of each of the plurality of virtual computers to each of the plurality of terminal devices is possible; a moving unit that moves one corresponding virtual computer to one terminal device move of the corresponding virtual computer to which has been judged to be possible using the judging unit; and an allocating unit that allocates one physical device connected to the terminal device concerned to the virtual computer which has been moved to the terminal device using the moving unit. | 09-30-2010 |
20100262969 | DATA PROCESSING SYSTEM AND METHOD FOR SCHEDULING THE USE OF AT LEAST ONE EXCLUSIVE RESOURCE - It is an object of the invention to improve the performance of a multitasking data processing system in which at least one exclusive resource is used for executing at least two task flows. The method according to the invention achieves this by using a so-called master schedule, which is used as a template to construct the schedules for individual task flows. The term master schedule refers to a set of reservations of the exclusive resources for task flows. | 10-14-2010 |
20100262970 | System and Method for Application Isolation - A system, method, and computer readable medium for providing application isolation to one or more applications and their associated resources. The system may include one or more isolated environments including application files and executables, and one or more interception layers intercepting access to system resources and interfaces. Further, the system may include an interception database maintaining mapping between the system resources inside the one or more isolated environments and outside, and a host operating system. The one or more applications may be isolated from other applications and the host operating system while running within the one or more isolated environments. | 10-14-2010 |
20100262971 | MULTI CORE SYSTEM, VEHICULAR ELECTRONIC CONTROL UNIT, AND TASK SWITCHING METHOD - A multi core system for allocating a task generated from a control system program to an appropriate CPU core and executing the task includes a trial-execution instructing part configured to cause a second CPU core to trial-execute a task which a first CPU core executes before the multi core system transfers the task from the first CPU core to the second CPU core and causes the second CPU core to execute the task, a determining part configured to determine whether an execution result by the first CPU core matches an execution result by the second CPU core, and an allocation fixing part configured to fix the second CPU core as the appropriate CPU core to which the task is allocated if the determining part determines that the execution result by the first CPU core matches the execution result by the second CPU core. | 10-14-2010 |
20100262972 | DEADLOCK AVOIDANCE - A transaction processing system is operated. A first resource is locked as a shared resource by a first task executing on a computing device. The first task attempts to lock a second resource as an exclusive resource. The occurrence of a deadlock is ascertained. A second task that wishes to use the locked first resource is identified. A current position of the first task with respect to the first resource is stored. The lock on the first resource is removed. The second task is prompted to use the first resource. The first task locks the first resource as the shared resource. The first task is repositioned with respect to first resource according to the stored position. The first task locks the second resource as the exclusive resource. The first task is performed. | 10-14-2010 |
20100262973 | Method For Operating a Multiprocessor Computer System - The invention relates to a method for operating a multiprocessor computer system which has at least two microprocessors ( | 10-14-2010 |
20100275212 | CONCURRENT DATA PROCESSING IN A DISTRIBUTED SYSTEM - Systems, methods, and computer media for scheduling vertices in a distributed data processing network and allocating computing resources on a processing node in a distributed data processing network are provided. Vertices, subparts of a data job including both data and computer code that runs on the data, are assigned by a job manager to a distributed cluster of process nodes for processing. The process nodes run the vertices and transmit computing resource usage information, including memory and processing core usage, back to the job manager. The job manager uses this information to estimate computing resource usage information for other vertices in the data job that are either still running or waiting to be run. Using the estimated computing resource usage information, each process node can run multiple vertices concurrently. | 10-28-2010 |
20100275213 | INFORMATION PROCESSING APPARATUS, PARALLEL PROCESS OPTIMIZATION METHOD - According to one embodiment, a parallel processing optimization method is provided for an apparatus that dynamically assigns to threads some of the basic modules into which a program is divided — the basic modules following an execution rule that defines their execution order and being executable asynchronously with respect to one another — and that executes the threads in parallel by execution modules. The method includes managing the assigned basic modules and the identifiers of the threads to which they are assigned, managing an executable set that includes the assignable basic modules, calculating transfer costs of the basic modules when data is transferred, and selecting the basic module with the minimum transfer cost. | 10-28-2010 |
20100275214 | DEVICE FOR SHARED MANAGEMENT OF A RESOURCE AMONG SEVERAL USERS - The device comprises a memory ( | 10-28-2010 |
20100281486 | Enhanced scheduling, priority handling and multiplexing method and system - System and method for enhancing scheduling/priority handling and multiplexing on transmitting data of different logical channels includes a receiver and a processor. The receiver receives a payload unit. The processor processes payload unit and enhances scheduling/priority handling and multiplex from different logical channels. The processor calculates data that can be transmitted with available resource for each logical channel, prioritizes the logical channels with decreasing priority order, performs first round resource allocation without partition, prioritizes logical channels with remaining data that is not performed with first round resource allocation with strict decreasing priority order, and performs second round resource allocation with partition. As such, scheduling/priority handling and the multiplexing in a multiple carrier system will be carried out so as to increase the efficiency of resource allocation. | 11-04-2010 |
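The two-round allocation this abstract describes — a first round serving logical channels in strict priority order without partitioning their data, and a second round partitioning the remaining resource among channels with leftover data — can be sketched as follows. Channel names and the tuple layout are illustrative assumptions:

```python
def allocate_resources(channels, capacity):
    """channels: list of (name, priority, pending_bytes),
    lower priority number = higher priority.
    Round 1: grant whole requests in priority order, no partition.
    Round 2: partition remaining capacity over unserved channels."""
    order = sorted(channels, key=lambda c: c[1])
    grants = {name: 0 for name, _, _ in channels}
    for name, _, pending in order:          # round 1: no partition
        if pending <= capacity:
            grants[name] = pending
            capacity -= pending
    for name, _, pending in order:          # round 2: with partition
        if grants[name] == 0 and capacity > 0:
            grants[name] = min(pending, capacity)
            capacity -= grants[name]
    return grants
```

Serving whole requests first avoids fragmenting high-priority data; partitioning only in the second round uses up the residual resource.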
20100281487 | SYSTEMS AND METHODS FOR MOBILITY SERVER ADMINISTRATION - An administration server of an administration service assigns attributes to objects by a plug-in of the administration service. The plug-in implements a method of a functionality set and the method is callable by the administration service to perform the assigning. Additionally or alternatively, the administration server triggers a reconciliation event by changing the assignment of an attribute of the users that comprise objects of plug-ins; determines a scope of the users and which objects are affected by changing the assignment; and reconciles conflicting assignments. Additionally or alternatively, the administration server adds tasks by the plug-ins to a job created by the plug-ins with the tasks performing the assigning; and removes tasks from the job to optimize it. | 11-04-2010 |
20100287560 | OPTIMIZING A DISTRIBUTION OF APPLICATIONS EXECUTING IN A MULTIPLE PLATFORM SYSTEM - Embodiments of the claimed subject matter are directed to methods and a system that allows the optimization of processes operating on a multi-platform system (such as a mainframe) by migrating certain processes operating on one platform to another platform in the system. In one embodiment, optimization is performed by evaluating the processes executing in a partition operating under a proprietary operating system, determining a collection of processes from the processes to be migrated, calculating a cost of migration for migrating the collection of processes, prioritizing the collection of processes in an order of migration and incrementally migrating the processes according to the order of migration to another partition in the mainframe executing a lower cost (e.g., open-source) operating system. | 11-11-2010 |
20100293549 | System to Improve Cluster Machine Processing and Associated Methods - A system to improve cluster machine processing that may include a plurality of interconnected computers that process data as one if necessary, and at least one other plurality of interconnected computers that process data as one if necessary. The system may also include a central manager to control what data processing is performed on a shared processing job performed by the plurality of interconnected computers and the at least one other plurality of interconnected computers. Each of the plurality of interconnected computers runs parallel jobs scheduled by a local backfill scheduler. In order to schedule a cluster spanning parallel job, the local schedulers cooperate on placement and timing of the cluster spanning job, using existing backfill rules in order not to disturb the local job streams. | 11-18-2010 |
20100293550 | SYSTEM AND METHOD PROVIDING FOR RESOURCE EXCLUSIVITY GUARANTEES IN A NETWORK OF MULTIFUNCTIONAL DEVICES WITH PREEMPTIVE SCHEDULING CAPABILITIES - A system and method for enabling automated task preemption, including a plurality of multifunctional devices having a plurality of functional capabilities; and a processing module configured to: (i) separate the tasks requiring the plurality of functional capabilities into the tasks requiring a first category of capabilities and the tasks requiring a second category of capabilities, where the tasks requiring the first category of capabilities has a higher processing priority than the tasks requiring the second category of capabilities; and (ii) selectively process the tasks requiring the first category of capabilities before the tasks requiring the second category of capabilities regardless of arrival times of the tasks requiring the plurality of capabilities; wherein the tasks requiring the second category of capabilities that are preempted by the tasks requiring the first category of capabilities are rescheduled to be completed within a predetermined time period of completion. | 11-18-2010 |
20100293551 | Job scheduling apparatus and job scheduling method - When allocating an unallocated queued job, by using a CDA having a mesh structure to which active jobs are allocated, a job scheduling apparatus scans an event list that includes information about allocation events and release events for jobs, determines the coordinates and the time at which submeshes corresponding to the queued jobs are reserved, and arranges the submeshes by overlapping them on the CDA. | 11-18-2010 |
20100299671 | VIRTUALIZED THREAD SCHEDULING FOR HARDWARE THREAD OPTIMIZATION - Embodiments are disclosed herein relating to scheduling of virtualized runtime threads to hardware threads that share hardware resources to improve processing performance. For example, one embodiment provides a computing system that includes a scheduler to schedule execution of virtualized source code. The virtualized source code may include virtualized runtime threads that may be scheduled by the scheduler onto hardware threads that share hardware resources. The scheduler may include a decoder to catalogue hardware resource parameters used by the virtualized source code. Furthermore, the scheduler may include a virtualization engine to schedule execution of the virtualized runtime threads onto the hardware threads based on the hardware resource parameters and a hardware-specific profile of the computing system. | 11-25-2010 |
20100299672 | MEMORY MANAGEMENT DEVICE, COMPUTER SYSTEM, AND MEMORY MANAGEMENT METHOD - A memory management device includes a memory area; an allocator generating unit that generates a plurality of allocators, which allocate a memory resource of the memory area to a task, for respective rules of allocation/deallocation of the memory resource; and a task correlating unit that selects one of the generated allocators based on an allocator specification that differs for each task and sets the task so that it is capable of using the selected allocator. | 11-25-2010 |
20100299673 | SHARED FILE SYSTEM CACHE IN A VIRTUAL MACHINE OR LPAR ENVIRONMENT - Computer system, method and program for defining first and second virtual machines and a memory shared by the first and second virtual machines. A filesystem cache resides in the shared memory. A lock structure resides in the shared memory to record which virtual machine, if any, currently has an exclusive lock for writing to the cache. The first virtual machine includes a first program function to acquire the exclusive lock when available by manipulation of the lock structure, and a second program function active after the first virtual machine acquires the exclusive lock, to write to the cache. The lock structure is directly accessible by the first program function. The cache is directly accessible by the second program function. The second virtual machine includes a third program function to acquire the exclusive lock when available by manipulation of the lock structure, and a fourth program function active after the second virtual machine acquires the exclusive lock, to write to the cache. The lock structure is directly accessible by the third program function. The cache is directly accessible by the fourth program function. Another computer system, method and program is embodied in logical partitions of a real computer, instead of virtual machines. | 11-25-2010 |
20100299674 | METHOD, SYSTEM, GATEWAY DEVICE AND AUTHENTICATION SERVER FOR ALLOCATING MULTI-SERVICE RESOURCES - In the field of network communications, a method, a system, a gateway device, and an authentication server for allocating multi-service resources when multiple services of the same user access a network are provided. The method includes the following steps. A service request message sent by a first service terminal is received. The service capability and user identification of the first service terminal, and a count of available resources that corresponds to the user identification, are obtained. Resources are allocated for the first service terminal based on the service capability and the user identification of the first service terminal and the count of available resources that corresponds to the user identification. Thus, the configuration of the gateway device is simplified, and scale deployment for different services is achieved. | 11-25-2010 |
20100306780 | JOB ASSIGNING APPARATUS, AND CONTROL PROGRAM AND CONTROL METHOD FOR JOB ASSIGNING APPARATUS - A job assigning apparatus which is connected to a plurality of job processors and assigns the job to any of the job processors includes: an accepting section that accepts the job; an assigning section that selects a job processor having the least number of processes and assigns the accepted job to the selected job processor; a managing section that manages each of the job processors and the number of processes of the job assigned to each of the job processors by the assigning section in association with each other; an adding section that adds the number of processes of the jobs assigned by the assigning section to the number of processes managed by the managing section; and a notifying section that notifies another job assigning apparatus for assigning a job to a job processor of the number of processes of the job assigned by the assigning section. | 12-02-2010 |
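A minimal sketch of the least-loaded assignment in the abstract above; the class and attribute names are illustrative assumptions, and the peer-notification step is only noted in a comment.

```python
class JobAssigner:
    """Assigns each accepted job to the processor with the least number of
    processes, then adds the job's process count to the managed total."""

    def __init__(self, processors):
        # Managing section: processor -> number of processes assigned to it.
        self.process_counts = {p: 0 for p in processors}

    def assign(self, job_processes):
        # Assigning section: select the processor with the fewest processes.
        target = min(self.process_counts, key=self.process_counts.get)
        # Adding section: account for the newly assigned job's processes.
        self.process_counts[target] += job_processes
        # A notifying section would inform peer job assigning apparatuses here.
        return target
```

Ties go to the first processor registered, which is one reasonable convention the abstract leaves unspecified.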
20100318997 | ANNOTATING VIRTUAL APPLICATION PROCESSES - A virtualization system is described herein that facilitates communication between a virtualized application and a host operating system to allow the application to correctly access resources referenced by the application. When the operating system creates a virtualized application process, the virtualization system annotates a data structure associated with the process with an identifier that identifies the virtualized application environment associated with the process. When operating system components make requests on behalf of the originating virtual process, a virtualization driver checks the data structure associated with the process to determine that the helper process is doing work on behalf of the virtualized application process. Upon discovering that the thread is doing virtual process work, the virtualization driver directs the helper process's thread to the virtual application's resources, allowing the helper process to accomplish the requested work with the correct data. | 12-16-2010 |
20100318998 | System and Method for Out-of-Order Resource Allocation and Deallocation in a Threaded Machine - A system and method for managing the dynamic sharing of processor resources between threads in a multi-threaded processor are disclosed. Out-of-order allocation and deallocation may be employed to efficiently use the various resources of the processor. Each element of an allocate vector may indicate whether a corresponding resource is available for allocation. A search of the allocate vector may be performed to identify resources available for allocation. Upon allocation of a resource, a thread identifier associated with the thread to which the resource is allocated may be associated with the allocate vector entry corresponding to the allocated resource. Multiple instances of a particular resource type may be allocated or deallocated in a single processor execution cycle. Each element of a deallocate vector may indicate whether a corresponding resource is ready for deallocation. Examples of resources that may be dynamically shared between threads are reorder buffers, load buffers and store buffers. | 12-16-2010 |
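The allocate-vector idea in the abstract above, where each entry records availability and, once allocated, the owning thread identifier, might be modeled as below. This is an illustrative software model under assumed names, not the patented hardware design.

```python
class ResourcePool:
    """Out-of-order allocation/deallocation over a vector of entries.
    Each entry holds FREE when available, or the owning thread's ID."""
    FREE = None

    def __init__(self, num_entries):
        self.owner = [self.FREE] * num_entries  # allocate vector + thread tag

    def allocate(self, thread_id, count):
        """Allocate up to `count` free entries in one cycle; tag each with
        the requesting thread's ID and return the allocated indices."""
        got = []
        for idx, tag in enumerate(self.owner):
            if tag is self.FREE:
                self.owner[idx] = thread_id
                got.append(idx)
                if len(got) == count:
                    break
        return got

    def deallocate(self, indices):
        """Deallocate multiple entries in one cycle, in any order."""
        for idx in indices:
            self.owner[idx] = self.FREE
```

Because entries are tagged individually, a thread's entries can be freed out of order and immediately reallocated to another thread, as in shared reorder, load, or store buffers.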
20100318999 | PROGRAM PARTITIONING ACROSS CLIENT AND CLOUD - Partitioning execution of a program between a client device and a cloud of network resources exploits the asymmetry between the computational and storage resources of the cloud and the resources and proximity of the client access device to a user. Programs may be decomposed into work units. Those work units may be profiled to determine execution characteristics, modeled based on current state information and the profile, and a model performance metric (MPM) generated. Based on the MPM, work units may be partitioned between the client and the cloud. | 12-16-2010 |
20100325636 | INTERFACE BETWEEN A RESOURCE MANAGER AND A SCHEDULER IN A PROCESS - An interface between a resource manager and schedulers in a process executing on a computer system allows the resource manager to manage the resources of the schedulers. The resource manager communicates with the schedulers using the interface to access statistical information from the schedulers. The statistical information describes the amount of use of the resources by the schedulers. The resource manager also communicates with the schedulers to dynamically allocate and reallocate resources among the schedulers in the same or different processes or computer systems in accordance with the statistical information. | 12-23-2010 |
20100325637 | ALLOCATION OF RESOURCES TO A SCHEDULER IN A PROCESS - A resource manager manages processing and other resources of schedulers of one or more processes executing on one or more computer systems. For each scheduler, the resource manager determines an initial allocation of resources based on the policy of the scheduler, the availability of resources, and the policies of other schedulers. The resource manager receives feedback from the schedulers and dynamically changes the allocation of resources of schedulers based on the feedback. The resource manager determines if changes improved the performance of schedulers and commits or rolls back the changes based on the determination. | 12-23-2010 |
20100325638 | INFORMATION PROCESSING APPARATUS, AND RESOURCE MANAGING METHOD AND PROGRAM - An information processing apparatus includes: a resource manager that allocates a resource in response to a codec processing request from an application, wherein the resource manager has first information indicating the relationship between codec processing functions and resources and second information indicating the availability of the resources, and the resource manager identifies resources having the codec processing function corresponding to the codec processing request from the application based on the first information, selects an idle resource from the identified resources based on the second information, and allocates the idle resource. | 12-23-2010 |
20100333103 | INFORMATION PROCESSOR AND INFORMATION PROCESSING METHOD - According to one embodiment, an information processor includes a management module that manages a plurality of register areas in a host controller for processing data protected by copyright. The register areas store confidential information for copyright protection. The management module includes a use state management module and a release module. The use state management module manages use state information on whether the register areas are used by existing process tasks. When all the register areas are occupied by the existing process tasks and a new process task requests the use of a register area to perform a process based on the confidential information, the release module releases a register area occupied by one of the existing process tasks according to the use state information to assign the register area to the new process task. | 12-30-2010 |
20110004884 | Performance degradation based at least on computing application priority and in a relative manner that is known and predictable beforehand - A model is constructed to determine performance of each computing application based on allocation of resources (including at least one hardware resource) to the computing applications. How the allocation of the resources to the computing applications affects the performance is unknown beforehand. The resources are allocated to the computing applications based at least on the model. Where the resources are overloaded as allocated to the computing applications, performance degradation of each computing application is performed based at least on priorities of the computing applications relative to one another and on the model. Performance degradation reduces usage of the resources by the computing applications so that the resources are no longer overloaded. How the priorities of the computing applications affect the performance degradation in a relative manner to one another is known and predictable beforehand. | 01-06-2011 |
20110004885 | FEEDFORWARD CONTROL METHOD, SERVICE PROVISION QUALITY CONTROL DEVICE, SYSTEM, PROGRAM, AND RECORDING MEDIUM THEREFOR - An object of the present invention is to provide a feed-forward control method, a service provision quality control device, a system, a program and a recording medium which resolve the problems of “lack of an evaluation function for a proper control plan”, “lack of a control-oriented evaluation function and a control-oriented execution function” and “lack of a correcting function and a verifying function for a control plan”. | 01-06-2011 |
20110016471 | Balancing Resource Allocations Based on Priority - Balancing resource allocations based on priority may be provided. First, a plurality of repositories may be divided into at least two categories. Next, a first portion of computing resources may be dedicated to a first one of the at least two categories. Then a second portion of the computing resources may be dedicated to a second one of the at least two categories. A crawl may then be performed on the plurality of repositories with the computing resources. | 01-20-2011 |
20110016472 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - A job generation unit generates, from a source program, a job to be executed by any of a plurality of processing resources. The job generation unit calculates job characteristic information that allows estimation of an index value capable of indicating the amount of heat generated in the processing resources due to execution of the job, and appends the job characteristic information to the job. This makes it possible to estimate a temperature rise in a processing resource to which the job is allocated, by using a method that facilitates implementation in a system in which a scheduler allocates a job to a plurality of processing resources. | 01-20-2011 |
20110023045 | Targeted communication to resource consumers - A method of communicating to a consumer is disclosed. The consumer's usage of a resource is compared to a relevant cohort's usage of the resource. Based at least in part on a result of the comparison, a message is selected to be provided to the consumer. | 01-27-2011 |
20110023046 | MITIGATING RESOURCE USAGE DURING VIRTUAL STORAGE REPLICATION - Systems and methods of mitigating resource usage during virtual storage replication are disclosed. An exemplary method comprises detecting quality of a link between virtual storage libraries used for replicating data. The method also comprises determining a number of concurrent jobs needed to saturate the link. The method also comprises dynamically adjusting the number of concurrent jobs to saturate the link and thereby mitigate resource usage during virtual storage replication. | 01-27-2011 |
20110023047 | CORE SELECTION FOR APPLICATIONS RUNNING ON MULTIPROCESSOR SYSTEMS BASED ON CORE AND APPLICATION CHARACTERISTICS - Techniques for scheduling an application program running on a multiprocessor computer system are disclosed. Example methods include but are not limited to analyzing first, second, third, and fourth core components for any within-die process variation, determining an operating state of the first, second, third and fourth core components, selecting optimum core components for each component type with the aid of Bloom filters, statically determining which core component types are used by the application program, and scheduling the application program to run on a core having an optimum core component for a core component type used by the application program. | 01-27-2011 |
20110029981 | SYSTEM AND METHOD TO UNIFORMLY MANAGE OPERATIONAL LIFE CYCLES AND SERVICE LEVELS - A system and a method to manage a data center, the method including, for example, retrieving a physical topology of a service; determining from the physical topology a concrete type of a resource for the service; and selecting an actual instance of the resource in the data center. The actual instance has the concrete type and is selected such that consumption of the actual instance does not violate at least one of a constraint and a policy. | 02-03-2011 |
20110035753 | MECHANISM FOR CONTINUOUSLY AND UNOBTRUSIVELY VARYING STRESS ON A COMPUTER APPLICATION WHILE PROCESSING REAL USER WORKLOADS - A mechanism for varying stress on a software application while processing real user workloads is disclosed. A method of embodiments of the invention includes configuring application resources for a recovery configuration whose service levels are satisfactory. The application resources are associated with the software application. The method further includes configuring the application resources for stress configurations to affect service levels, and transitioning the application resources from the recovery configuration to a stress configuration for a time duration, while the application resources of the stress configuration are transitioned back to the recovery configuration. The method further includes determining a next stress configuration and a time duration combination to vary stress such that user service levels are unobtrusively affected by limiting the time duration in inverse relation to an uncertainty in predicting the service level impact of the stress configuration. | 02-10-2011 |
20110047553 | APPARATUS AND METHOD FOR INPUT/OUTPUT PROCESSING OF MULTI-THREAD - Provided is an apparatus that sets a limit on the execution threads which can be simultaneously processed in an input/output system, compares the number of currently executing threads with the execution thread limit at the time an input/output event is requested from a thread, and manages the job of processing the input/output event in accordance with the comparison result. The apparatus for asynchronous input/output processing of a multi-thread according to the present invention restricts the number of threads processed in the asynchronous input/output system to the execution thread limit, preventing the performance deterioration caused by thread context-switching overhead and managing the threads efficiently. | 02-24-2011 |
20110055842 | VIRTUAL MULTIPLE INSTANCE EXTENDED FINITE STATE MACHINES WITH WAIT ROOMS AND/OR WAIT QUEUES - A method and apparatus for processing data by a pipeline of a virtual multiple instance extended finite state machine (VMI EFSM). An input token is selected to enter the pipeline. The input token includes a reference to an EFSM instance, an extended command, and an operation code. The EFSM instance requires the resource to be available to generate an output token from the input token. In response to receiving an indication that the resource is unavailable, the input token is sent to a wait room or an initiative token containing the reference and the operation code is sent to a wait queue, and the output token is not generated. Without stalling and restarting the pipeline, another input token is processed in the pipeline while the resource is unavailable and while the input token is in the wait room or the initiative token is in the wait queue. | 03-03-2011 |
20110055843 | Scheduling Jobs For Execution On A Computer System - A technique includes determining an order for projects to be performed on a computer system. Each project is associated with multiple job sets, such that any of the job sets may be executed on the computer system to perform the project. The technique includes selecting the projects in a sequence according to the determined order to progressively build a schedule of jobs for execution on the computer system. For each selected project, incorporating one of the associated job sets into the schedule based on a cost of each of the associated job sets. | 03-03-2011 |
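The project-by-project schedule building in the abstract above amounts to picking, for each selected project, the cheapest of its alternative job sets given what is already scheduled. A sketch under assumed names (`job_sets`, `cost`) follows; the real patent leaves the cost function and tie-breaking unspecified.

```python
def build_schedule(ordered_projects, job_sets, cost):
    """ordered_projects: projects in the determined order.
    job_sets[p]: alternative job sets, any of which performs project p.
    cost(schedule, js): prices a job set given the schedule built so far.
    Incorporates the cheapest alternative for each project in turn."""
    schedule = []
    for p in ordered_projects:
        best = min(job_sets[p], key=lambda js: cost(schedule, js))
        schedule.extend(best)
    return schedule
```

Because `cost` sees the partial schedule, a later project's choice can depend on what earlier projects already occupy, which is what makes the selection order matter.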
20110055844 | HIGH DENSITY MULTI NODE COMPUTER WITH INTEGRATED SHARED RESOURCES - A multi-node computer system, comprising: a plurality of nodes, a system control unit and a carrier board. Each node of the plurality of nodes comprises a processor and a memory. The system control unit is responsible for: power management, cooling, workload provisioning, native storage servicing, and I/O. The carrier board comprises a system fabric and a plurality of electrical connections. The electrical connections provide the plurality of nodes with power, management controls, system connectivity between the system control unit and the plurality of nodes, and an external network connection to a user infrastructure. The system control unit and the carrier board provide integrated, shared resources for the plurality of nodes. The multi-node computer system is provided in a single enclosure. | 03-03-2011 |
20110061057 | Resource Optimization for Parallel Data Integration - For optimizing resources for a parallel data integration job, a job request is received, which specifies a parallel data integration job to deploy in a grid. Grid resource utilizations are predicted for hypothetical runs of the specified job on respective hypothetical grid resource configurations. This includes automatically predicting grid resource utilizations by a resource optimizer module responsive to a model based on a plurality of actual runs of previous jobs. A grid resource configuration is selected for running the parallel data integration job, which includes the optimizer module automatically selecting a grid resource configuration responsive to the predicted grid resource utilizations and an optimization criterion. | 03-10-2011 |
20110061058 | TASK SCHEDULING METHOD AND MULTI-CORE SYSTEM - A task scheduling method and multi-core system according to an embodiment of the present invention comprises: in scheduling for selecting a task that is set in an execution state with a microprocessor allocated thereto out of tasks in an executable state, it is determined whether at least one of the tasks in a young generation, for which the number of times of refill performed until a point of scheduling after transitioning from the execution state to a standby state according to release of the microprocessor is smaller than a predetermined number of times, is present and, when at least one of the tasks in the young generation is present, microprocessor is allocated to the task selected from at least one of the tasks of the young generation. | 03-10-2011 |
20110072436 | RESOURCE OPTIMIZATION FOR REAL-TIME TASK ASSIGNMENT IN MULTI-PROCESS ENVIRONMENTS - A novel and useful system and method of decentralized decision-making for real-time scheduling in a multi-process environment. For each process step and/or resource capable of processing a particular step, a service index is calculated. The calculation takes into account several measures, such as business level measures, operational measures and employee level measure. The decision of which process step a resource should next work on or what step to assign to a resource is based on the service index calculation and, optionally, other production factors. In one embodiment, the resource is assigned the process step with the maximal service index. Alternatively, when a resource becomes available, all process steps the resource is capable of processing are presented in order of descending service index. The resource then selects which process step to work on next. | 03-24-2011 |
20110072437 | COMPUTER JOB SCHEDULER WITH EFFICIENT NODE SELECTION - The present invention provides a method, program product, and information processing system that efficiently dispatches jobs from a job queue. The jobs are dispatched to the computational nodes in the system. First, for each job, the number of nodes required to perform the job and the required computational resources for each of these nodes are determined. Then, for each node required, a node is selected to determine whether the job scheduler has a record indicating if this node meets the required computational resource requirement. If no record exists, the job scheduler analyzes whether the node meets the computational resource requirements given that other jobs may be currently executing on that node. The result of this determination is recorded. If the node does meet the computational resource requirement, the node is assigned to the job. If the node does not meet the resource requirement, the next available node is selected. The method continues until all required nodes are assigned and the job is dispatched to the assigned nodes. Alternatively, if the number of required nodes is not available, it is indicated that the job cannot be run at this time. | 03-24-2011 |
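The record-keeping described in the abstract above is essentially memoization of a potentially expensive suitability check. A sketch with hypothetical names (`records`, `meets_requirements`) follows; the patent's actual record structure is not specified here.

```python
def select_nodes(nodes, num_needed, meets_requirements, records):
    """Walk candidate nodes; consult the scheduler's record for each node,
    running (and recording) the resource-requirement analysis only on a
    cache miss. Returns the assigned nodes, or None when too few nodes
    qualify, signaling that the job cannot be run at this time."""
    assigned = []
    for node in nodes:
        if node not in records:                # no record: analyze the node
            records[node] = meets_requirements(node)
        if records[node]:
            assigned.append(node)
            if len(assigned) == num_needed:
                return assigned
    return None
```

A production scheduler would also invalidate records as jobs start and finish, since a node's spare capacity changes over time.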
20110072438 | FAST MAPPING TABLE REGISTER FILE ALLOCATION ALGORITHM FOR SIMT PROCESSORS - One embodiment of the present invention sets forth a technique for allocating register file entries included in a register file to a thread group. A request to allocate a number of register file entries to the thread group is received. A required number of mapping table entries included in a register file mapping table (RFMT) is determined based on the request, where each mapping table entry included in the RFMT is associated with a different plurality of register file entries included in the register file. The RFMT is parsed to locate an available mapping table entry in the RFMT for each of the required mapping table entries. For each available mapping table entry, a register file pointer is associated with an address that corresponds to a first register file entry in the plurality of register file entries associated with the available mapping table entry. | 03-24-2011 |
20110072439 | DECODING DEVICE, RECORDING MEDIUM, AND DECODING METHOD FOR CODED DATA - According to one embodiment, a decoding device includes a storage section, a control section, a decoding processing section. The storage section stores control information showing a progress state of process stages for a decoding process as to a plurality of processing data included in coded data. The control section allocates process stages corresponding to executable processing data which is executable in parallel, to a processor on the basis of the control information, a dependence relation between the processing data in the decoding process, and a dependence relation between the process stages. The decoding processing section parallelly executes allocated process stages corresponding to the executable processing data. | 03-24-2011 |
20110078695 | CHARGEBACK REDUCTION PLANNING FOR INFORMATION TECHNOLOGY MANAGEMENT - Reducing cost chargeback in an information technology (IT) computing environment including multiple resources, is provided. One implementation involves a process wherein resource usage and allocation statistics are stored for a multitude of resources and associated cost policies. Then, time-based usage patterns are determined for the resources from the statistics. A correlation of response time with resource usages and outstanding input/output instructions for the resources is determined. Based on usage patterns and the correlation, a multitude of potential cost reduction recommendations are determined. Further, a multitude of integrals are obtained based on the potential cost reduction recommendations, and a statistical integral is obtained based on the statistics. A difference between the statistical integral and each of the multiple integrals is obtained and compared with a threshold to determine potential final cost reduction recommendations. A final cost reduction recommendation is then selected from the potential cost reduction recommendations. | 03-31-2011 |
20110078696 | WORK QUEUE SELECTION ON A LOCAL PROCESSOR WITHIN A MULTIPLE PROCESSOR ARCHITECTURE - A method and system is disclosed for selecting a work queue associated with a processor within a multiple processor architecture to assign a new task. A local and a remote queue availability flag is maintained to indicate a relative size of work queues, in relationship to a mean queue size, for each processor in a multiple processor architecture. In determining to which processor to assign a task, the processor evaluates its own queue size by examining its local queue availability flag and evaluates other processors' queue sizes by examining their remote queue availability flags. The local queue availability flags are maintained asynchronously from task assignment. Remote flags are maintained at time of task assignment. The presented algorithm provides improved local processor queue size determinations in systems where task distribution processes execute with lower priorities than other tasks. | 03-31-2011 |
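The selection logic in the abstract above, preferring the local queue when it sits at or below the mean and falling back to peers' advertised flags otherwise, might look like the following. The function name, the "at or below the mean" threshold, and the first-available fallback are illustrative assumptions.

```python
def pick_processor(local_id, queue_sizes, remote_flags):
    """queue_sizes: current work-queue sizes per processor (local view).
    remote_flags: possibly stale availability flags maintained for peers
    at task-assignment time. Returns the processor to receive the task."""
    mean = sum(queue_sizes) / len(queue_sizes)
    # Local availability: is our own queue at or below the mean size?
    if queue_sizes[local_id] <= mean:
        return local_id
    # Otherwise consult the remote flags of the peers.
    for pid, available in enumerate(remote_flags):
        if pid != local_id and available:
            return pid
    return local_id  # no peer advertises room; keep the task locally
```

The asymmetry the abstract highlights is visible here: the local decision uses fresh queue sizes, while peer decisions rely on flags that may lag reality.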
20110078697 | OPTIMAL DEALLOCATION OF INSTRUCTIONS FROM A UNIFIED PICK QUEUE - Systems and methods for efficient out-of-order dynamic deallocation of entries within a shared storage resource in a processor. A processor comprises a unified pick queue that includes an array configured to dynamically allocate any entry of a plurality of entries for a decoded and renamed instruction. This instruction may correspond to any available active threads supported by the processor. The processor includes circuitry configured to determine whether an instruction corresponding to an allocated entry of the plurality of entries is dependent on a speculative instruction and whether the instruction has a fixed instruction execution latency. In response to determining the instruction is not dependent on a speculative instruction, the instruction has a fixed instruction execution latency, and said latency has transpired, the circuitry may deallocate the instruction from the allocated entry. | 03-31-2011 |
20110078698 | METHOD FOR RECONCILING MAPPINGS IN DYNAMIC/EVOLVING WEB-ONTOLOGIES USING CHANGE HISTORY ONTOLOGY - The present invention is directed to reconciliation/reengineering of mappings in dynamic/evolving ontologies. Mappings are established among different ontologies for resolving the terminological and conceptual incompatibilities and support information exchange. As ontology evolves from one consistent state to another consistent state; this consequently makes the existing mappings of the domain ontology with other ontologies unreliable and staled, so mapping evolution is required. The present invention uses Change History Log of ontology changes to drastically reduce the time required for (re)establishing mappings among ontologies, achieving higher accuracy, and eliminating staleness in mappings. It is valid for more than two ontologies with local, centralized, and distributed Change History Log. | 03-31-2011 |
20110078699 | COMPUTER SYSTEM WITH DUAL OPERATING MODES - A system switches between non-secure and secure modes by making processes, applications, and data for the non-secure mode unavailable to the secure mode and vice versa. The process thread run queue is modified to include a state flag for each process that indicates whether the process is a secure or non-secure process. A process scheduler traverses the queue and only allocates time to processes that have a state flag that matches the current mode. Running processes are marked to be idled and are flagged as unrunnable, depending on the security mode, when the process reaches an intercept point. The scheduler is switched to allow only threads that have a flag corresponding to the active security mode to be run. | 03-31-2011 |
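The mode-gated scheduling pass described above can be sketched as follows. This is an illustrative toy, not the patent's scheduler; the thread names and the `runnable` field are assumptions.

```python
# Sketch: each thread carries a security-mode flag, and a scheduling pass
# dispatches only threads whose flag matches the active mode.

SECURE, NONSECURE = "secure", "nonsecure"

class Thread:
    def __init__(self, name, mode):
        self.name = name
        self.mode = mode          # state flag in the run queue
        self.runnable = True      # cleared when idled at an intercept point

def schedule_pass(run_queue, active_mode):
    """Return names of threads eligible for CPU time in this pass."""
    return [t.name for t in run_queue
            if t.runnable and t.mode == active_mode]

queue = [Thread("bank-ui", SECURE), Thread("browser", NONSECURE)]
secure_pass = schedule_pass(queue, SECURE)
nonsecure_pass = schedule_pass(queue, NONSECURE)
```

Switching `active_mode` is all it takes to make the other mode's processes invisible to the scheduler.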
20110083134 | APPARATUS AND METHOD FOR MANAGING VIRTUAL PROCESSING UNIT - A method and apparatus for managing virtual processors, including resources for operating applications through real central processing units, which includes determining a utilization of a plurality of real CPUs among which a plurality of virtual processors are partitioned and allocated; and repartitioning the virtual processors and reallocating the repartitioned virtual processors to at least part of the real CPUs when the utilization of any one of the real CPUs is at or below a threshold. | 04-07-2011 |
20110088038 | Multicore Runtime Management Using Process Affinity Graphs - Technologies are generally described for runtime management of processes on multicore processing systems using process affinity graphs. Two or more processes may be determined to be related when the processes share interprocess messaging traffic. These related processes may be allocated to neighboring or nearby processor cores within a multicore processor using graph theory techniques as well as communication analysis techniques to evaluate interprocess communication needs. Process affinity graphs may be established to aid in determining grouping of processors and evaluating interprocess message traffic between groups of processes. The process affinity graphs may be based upon process affinity scores determined by monitoring and analyzing interprocess messaging traffic. Process affinity graphs may further inform splitting process affinity groups from one core onto two or more cores. | 04-14-2011 |
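The affinity-graph idea above — score edges by interprocess message traffic, then group heavily communicating processes — can be sketched as below. The greedy threshold grouping is an assumption for illustration, not the patent's exact graph-theoretic technique, and all names are invented.

```python
# Sketch: build a process affinity graph from observed message pairs, then
# greedily place strongly connected processes into the same core group.
from collections import defaultdict

def build_affinity_graph(message_log):
    """message_log: iterable of (sender, receiver) pairs.
    Returns {process: {neighbor: affinity_score}}."""
    graph = defaultdict(lambda: defaultdict(int))
    for a, b in message_log:
        graph[a][b] += 1
        graph[b][a] += 1
    return graph

def group_related(graph, threshold=2):
    """Processes exchanging at least `threshold` messages share a group
    (a candidate assignment to the same or neighboring cores)."""
    group_of = {}
    next_group = 0
    for proc in graph:
        if proc not in group_of:
            group_of[proc] = next_group
            next_group += 1
        for neighbor, score in graph[proc].items():
            if score >= threshold and neighbor not in group_of:
                group_of[neighbor] = group_of[proc]
    return group_of

log = [("A", "B"), ("A", "B"), ("A", "B"), ("C", "D"), ("C", "D")]
groups = group_related(build_affinity_graph(log))
```

Chatty pairs (A,B) and (C,D) end up co-grouped, while the two pairs stay on separate groups.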
20110088039 | Power Monitoring and Control in Cloud Based Computer - According to one general aspect, a method for displaying the system resource usage of a computer may include identifying the number of open tabs in one or more tab-based browsers running on the computer. The method may include determining the system resource usage of each tab. The method may further include displaying the system resource usage of each tab in a system resource meter. | 04-14-2011 |
20110088040 | Namespace Merger - In a virtualization environment, there is often a need for an application to access different resources (e.g., files, configuration settings, etc.) on a computer by name. The needed resources can potentially come from any one of a plurality of discrete namespaces or containers of resources on the computer. A resource name can identify one resource in one namespace and another resource in another namespace, and the namespaces may have different precedence relative to one another. The resources needed by the application can be accessed by enumerating names in a logical merger of the namespaces such that as new names in the logical merger are needed they are dynamically chosen from among the namespaces. When two resources in different namespaces have a same name, the resource in the higher precedence namespace can be chosen. | 04-14-2011 |
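The precedence rule in the namespace-merger abstract — on a name collision, the higher-precedence namespace wins — can be sketched minimally as follows. The two-layer user/system example is an assumption for illustration.

```python
# Sketch: enumerate a logical merger of namespaces ordered from highest to
# lowest precedence; the first namespace defining a name supplies it.

def merge_namespaces(namespaces):
    """namespaces: list of dicts, highest precedence first.
    Yields (name, resource), choosing each name from the
    highest-precedence namespace that defines it."""
    seen = set()
    for ns in namespaces:                 # highest precedence first
        for name, resource in ns.items():
            if name not in seen:          # first definition wins
                seen.add(name)
                yield name, resource

user_layer = {"app.cfg": "user copy"}
system_layer = {"app.cfg": "system copy", "lib.dll": "system lib"}
merged = dict(merge_namespaces([user_layer, system_layer]))
```

The colliding `app.cfg` resolves to the user layer, while names unique to the system layer still appear in the merger.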
20110093860 | METHOD FOR MULTICLASS TASK ALLOCATION - Embodiments of the invention include a method of selecting a server in a system including at least one dispatcher and several servers, in which system, when a new task of a given class arrives, the dispatcher assigns the task to one of these servers, characterized in that the selection of the servers by the dispatcher is based on MIPN (Multiclass Idle Period Notification) information, which is sent by the servers to the dispatcher. | 04-21-2011 |
20110093861 | Assigning A Portion Of Physical Computing Resources To A Logical Partition - A data processing system includes physical computing resources that include a plurality of processors. The plurality of processors include a first processor having a first processor type and a second processor having a second processor type that is different than the first processor type. The data processing system also includes a resource manager to assign portions of the physical computing resources to be used when executing logical partitions. The resource manager is configured to assign a first portion of the physical computing resources to a logical partition, to determine characteristics of the logical partition, the characteristics including a memory footprint characteristic, to assign a second portion of the physical computing resources based on the characteristics of the logical partition, and to dispatch the logical partition to execute using the second portion of the physical computing resources. | 04-21-2011 |
20110107343 | SYSTEM AND METHOD OF PROVIDING A FIXED TIME OFFSET BASED DEDICATED CO-ALLOCATION OF A COMMON RESOURCE SET - Disclosed are a system, method and computer-readable medium relating to managing resources within a compute environment having a group of nodes or computing devices. The method comprises, for each node in the compute environment: traversing a list of jobs having a fixed time relationship, wherein for each job in the list, the following steps occur: obtaining a range list of available timeframes for each job, converting each availability timeframe to a start range, shifting the resulting start range in time by a job offset, for a first job, copying the resulting start range into a node range, and for all subsequent jobs, logically AND'ing the start range with the node range. Next, the method comprises logically OR'ing the node range with a global range, generating a list of acceptable resources on which to start and the timeframe at which to start and creating reservations according to the list of acceptable resources for the resources in the group of computing devices and associated job offsets. | 05-05-2011 |
20110113433 | RESOURCE ALLOCATION METHOD, IDENTIFICATION METHOD, BASE STATION, MOBILE STATION, AND PROGRAM - Provided is a technique capable of reporting resource block allocation information without waste when an allocated resource block is reported, because in the current LTE downlink the amount of resource allocation information is wasteful in some cases, owing to a restriction that 37-bit fixed scheduling information be transmitted. A resource block group consisting of at least one or more resource blocks contiguous on the frequency axis is allocated to a terminal, and the number of control signals for reporting allocation information indicating the allocated resource blocks is determined. | 05-12-2011 |
20110113434 | METHOD, SYSTEM, AND STORAGE MEDIUM FOR MANAGING COMPUTER PROCESSING FUNCTIONS - Exemplary embodiments include a system and storage medium for managing computer processing functions in a multi-processor computer environment. The system includes a physical processor, a standard logical processor, an assist logical processor sharing a same logical partition as the standard logical processor, and a single operating system instance associated with the logical partition, the single operating system instance including a switch-to service and a switch-from service. The system also includes a dispatch component managed by the single operating system instance. Upon invoking the switch-to service by standard code, the switch-to service checks to see if an assist logical processor is online and, if so, it updates an integrated assist field of a work element block associated with the task for indicating the task is eligible to be executed on the assist logical processor. The switch-to service also assigns a work queue to the work element block. | 05-12-2011 |
20110119675 | Concurrent Data Processing and Electronic Bookkeeping - Concurrent processing of business transaction data uses a time slice-centered scheme to cope with the situation where multiple requests demand a same resource at the same time. The method divides the processing time into multiple time slices, allocates each request to a corresponding time slice, and iteratively processes requests according to their corresponding time slices. The method does not require the requests to be processed one by one, and therefore does not cause a situation where other requests have to wait until the current request has been completely processed. Moreover, if a certain time slice has been allocated multiple requests of a same type, the requests are collectively processed as if they were a single request to reduce the frequency of resource locking and unlocking, as well as the waiting time in a queue for resource access. | 05-19-2011 |
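The time-slice batching described above can be sketched as follows. The slice width, the debit/credit request types, and the lock-count bookkeeping are assumptions chosen to make the batching effect visible, not the patent's implementation.

```python
# Sketch: bucket requests into time slices, then process each slice's
# same-type requests as one batch so the resource is locked once per batch.
from collections import defaultdict

SLICE_MS = 100  # assumed slice width

def allocate_to_slices(requests):
    """requests: list of (arrival_ms, req_type, amount).
    Returns {slice_index: {req_type: [amounts]}}."""
    slices = defaultdict(lambda: defaultdict(list))
    for arrival_ms, req_type, amount in requests:
        slices[arrival_ms // SLICE_MS][req_type].append(amount)
    return slices

def process_slices(slices, balance):
    """Iterate slices in order; same-type requests in a slice collapse
    into a single locked update, cutting lock/unlock churn."""
    lock_acquisitions = 0
    for idx in sorted(slices):
        for req_type, amounts in slices[idx].items():
            lock_acquisitions += 1           # one lock per batch
            if req_type == "debit":
                balance -= sum(amounts)
            else:
                balance += sum(amounts)
    return balance, lock_acquisitions

reqs = [(10, "debit", 5), (40, "debit", 3), (120, "credit", 10)]
final, locks = process_slices(allocate_to_slices(reqs), balance=100)
```

Three requests cost only two lock acquisitions here, because the two debits share a slice and are applied as one batch.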
20110119676 | Resource File Localization - A system and method for localizing an application resource file. An application localizer may receive an application resource file containing text strings to be localized. The application localizer extracts each text string and sends it to a remote automated translation service, receiving a corresponding localized text string. The localizer writes each of the localized text strings to generate a localized application resource file. Configuration specifications may specify target locales, a format of the application resource file, or a format of application resource file names. | 05-19-2011 |
20110119677 | MULTIPROCESSOR SYSTEM, MULTIPROCESSOR CONTROL METHOD, AND MULTIPROCESSOR INTEGRATED CIRCUIT - In a multiprocessor system, in general, a processor assigned a larger number of tasks is apt to perform a larger amount of communication with other processors assigned tasks than a processor assigned a smaller number of tasks. | 05-19-2011 |
20110126207 | SYSTEM AND METHOD FOR PROVIDING ANNOTATED SERVICE BLUEPRINTS IN AN INTELLIGENT WORKLOAD MANAGEMENT SYSTEM - The system and method described herein for providing annotated service blueprints in an intelligent workload management system may include a computing environment having a model-driven, service-oriented architecture for creating collaborative threads to manage workloads. In particular, the management threads may converge information for creating annotated service blueprints to provision and manage tessellated services distributed within an information technology infrastructure. For example, in response to a request to provision a service, a service blueprint describing one or more virtual machines may be created. The service blueprint may then be annotated to apply various parameters to the virtual machines, and the annotated service blueprint may then be instantiated to orchestrate the virtual machines with the one or more parameters and deploy the orchestrated virtual machines on information technology resources allocated to host the requested service, thereby provisioning the requested service. | 05-26-2011 |
20110126208 | Processing Architecture Having Passive Threads and Active Semaphores - Multiple parallel passive threads of instructions coordinate access to shared resources using “active” semaphores. The semaphores are referred to as active because the semaphores send messages to execution and/or control circuitry to cause the state of a thread to change. A thread can be placed in an inactive state by a thread scheduler in response to an unresolved dependency, which can be indicated by a semaphore. A thread state variable corresponding to the dependency is used to indicate that the thread is in inactive mode. When the dependency is resolved a message is passed to control circuitry causing the dependency variable to be cleared. In response to the cleared dependency variable the thread is placed in an active state. Execution can proceed on the threads in the active state. | 05-26-2011 |
20110131582 | RESOURCE MANAGEMENT FINITE STATE MACHINE FOR HANDLING RESOURCE MANAGEMENT TASKS SEPARATE FROM A PROTOCOL FINITE STATE MACHINE - A method and logic circuit for a resource management finite state machine (RM FSM) managing resource(s) required by a protocol FSM. After receiving a resource request vector, the RM FSM determines not all of the required resource(s) are available. The protocol FSM transitions to a new state, generates an output vector, and loads the output vector into an output register. The RM FSM transitions to a state indicating that not all the resources are available and freezes an input register. In a subsequent cycle, the RM FSM freezes the output register and a current state register, and forces the output vector to be seen by the FSM environment as a null token. After determining that the required resource(s) are available, the RM FSM transitions to another state indicating that the resources are available, enables the output vector to be seen by the FSM environment, and unfreezes the protocol FSM. | 06-02-2011 |
20110131583 | MULTICORE PROCESSOR SYSTEM - A multicore processor system includes one or more clients carrying out parallel processing of tasks by means of processor cores and a server assisting the client to carry out the parallel processing via a communication network. Task information containing the minimum number of required cores indicating the number of processor cores required to carry out processes of the tasks and core information containing operation setup information indicating operation setup content of the processor cores are stored in the server. The server determines whether the task is allocated to the plurality of processor cores or not in accordance with the task information and the core information. The server updates the core information in accordance with a determination result to transmit the updated core information to the client. The client carries out the parallel processing by means of the processor cores in accordance with the received core information. | 06-02-2011 |
20110131584 | THE METHOD AND APPARATUS FOR THE RESOURCE SHARING BETWEEN USER DEVICES IN COMPUTER NETWORK - To solve the problems in prior art, the present invention has provided a new scheme for resource sharing between user devices, which shall be easily used, implemented and extended. Also, the present invention aims to decrease the user input for resource sharing. In particular, the present invention is based on the IM protocol: the sharing initiator and its cooperators exchange the key information of the to-be-consigned tasks via IM messages. Then, preferably, the initiator chooses one or more cooperators for each of the to-be-consigned tasks after further communication with the cooperators. For each task, the chosen cooperators will be referred to as its nominated cooperators. Finally, each of the cooperators will handle the task(s) consigned to it, if any, and send the result back to the initiator. | 06-02-2011 |
20110138393 | Thread Allocation and Clock Cycle Adjustment in an Interleaved Multi-Threaded Processor - Methods, apparatuses, and computer-readable storage media are disclosed for reducing power by reducing hardware-thread toggling in a multi-threaded processor. In a particular embodiment, a method allocates software threads to hardware threads. A number of software threads to be allocated is identified. It is determined when the number of software threads is less than a number of hardware threads. When the number of software threads is less than the number of hardware threads, at least two of the software threads are allocated to non-sequential hardware threads. A clock signal to be applied to the hardware threads is adjusted responsive to the non-sequential hardware threads allocated. | 06-09-2011 |
20110138394 | Service Oriented Collaboration - When a service is requested at a platform in a collaborative services environment, a service orchestration engine accesses a service definition from a repository and schedules a number of tasks at a number of end points in accordance with a number of end point profiles and a number of policies associated with the end points. | 06-09-2011 |
20110145829 | PERFORMANCE COUNTER INHERITANCE - A system for providing performance counter inheritance includes an operating system that receives a request of a first application to monitor performance of a second application, the request identifying an event to monitor during the execution of a task associated with the second application. The operating system causes a task counter corresponding to the event to be activated, and automatically activates a child task counter for each child task upon receiving a notification that execution of a corresponding child task is starting. Further, the operating system adds a value of each child task counter to a value of the task counter to determine a total counter value for the task, and provides the total counter value of the task to the first application. | 06-16-2011 |
20110145830 | JOB ASSIGNMENT APPARATUS, JOB ASSIGNMENT PROGRAM, AND JOB ASSIGNMENT METHOD - A job assignment apparatus includes: a correlation calculation unit to calculate a correlation between an execution time used for processing a program that depends on a computer resource operating at the start of an execution request job and an execution time used for processing a predetermined amount of data in the execution request job which operates immediately after completion of an operation of the program; a resource identification unit to identify the computer resource on which the execution request job depends on the basis of the correlation calculated by the correlation calculation unit; and a job assignment unit to assign the execution request job to one of execution servers connected to the job assignment apparatus so as to exclude simultaneous execution of a job that depends on the same computer resource as the computer resource identified by the resource identification unit and the execution request job. | 06-16-2011 |
20110145831 | MULTI-PROCESSOR SYSTEM, MANAGEMENT APPARATUS FOR MULTI-PROCESSOR SYSTEM AND COMPUTER-READABLE RECORDING MEDIUM IN OR ON WHICH MULTI-PROCESSOR SYSTEM MANAGEMENT PROGRAM IS RECORDED - The invention achieves optimization of partition division by implementing resource distribution taking a characteristic of the system into consideration so that the processing performance of the entire system is enhanced. To this end, a system management section in the invention calculates an optimum distribution of a plurality of resources to partitions based on distance information regarding the distance between the plurality of resources and data movement frequencies between the plural resources. The plural resources are distributed to the plural partitions through a plurality of partition management sections so that the optimum distribution state may be established. | 06-16-2011 |
20110145832 | TECHNIQUES FOR ALLOCATING COMPUTING RESOURCES TO APPLICATIONS IN AN EMBEDDED SYSTEM - Techniques for allocating computing resources to tasks include receiving first data and second data. The first data indicates a limit for unblocked execution by a processor of a set of at least one task that includes instructions for the processor. The second data indicates a maximum use of the processor by the set. It is determined whether a particular set of at least one task has exceeded the limit for unblocked execution based on the first data. If it is determined that the particular set has exceeded the limit, then execution of the particular set by the processor is blocked for a yield time interval based on the second data. These techniques can guarantee that no time-critical tasks of an embedded system on a specific-purpose device are starved for processor time by tasks of foreign applications also executed by the processor. | 06-16-2011 |
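The limit-and-yield mechanism above can be sketched with one formula: if a task set runs unblocked past its limit, block it long enough that its long-run share of the processor stays within its maximum use. This derivation and the function name are assumptions for illustration, not the patent's method.

```python
# Sketch: compute the yield interval for a task set that exceeded its
# unblocked-execution limit, given its maximum allowed processor share.

def yield_interval(run_time, limit, max_share):
    """run_time: seconds of unblocked execution so far.
    limit: allowed unblocked execution (first data in the abstract).
    max_share: maximum processor fraction in (0, 1] (second data).
    Returns how long the set must be blocked, or 0 if under the limit."""
    if run_time <= limit:
        return 0.0
    # Solve run_time / (run_time + yield) == max_share for yield:
    return run_time * (1.0 - max_share) / max_share

# A set that ran 30 s unblocked against a 20 s limit, capped at a 50% share:
pause = yield_interval(run_time=30.0, limit=20.0, max_share=0.5)
```

At a 50% cap, 30 s of unblocked execution earns a 30 s yield, which is how time-critical tasks are guaranteed processor time alongside foreign applications.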
20110154348 | METHOD OF EXPLOITING SPARE PROCESSORS TO REDUCE ENERGY CONSUMPTION - A method, system, and computer program product for reducing power and energy consumption in a server system with multiple processor cores is disclosed. The system may include an operating system for scheduling user workloads among a processor pool. The processor pool may include active licensed processor cores and inactive unlicensed processor cores. The method and computer program product may reduce power and energy consumption by including steps and sets of instructions for activating spare cores and adjusting the operating frequency of processor cores, including the newly activated spare cores, to provide computing resources equivalent to the original licensed cores operating at a specified clock frequency. | 06-23-2011 |
20110154349 | RESOURCE FAULT MANAGEMENT FOR PARTITIONS - In accordance with at least some embodiments, a system includes a plurality of partitions, each partition having its own operating system (OS) and workload. The system also includes a plurality of resources assignable to the plurality of partitions. The system also includes management logic coupled to the plurality of partitions and the plurality of resources. The management logic is configured to set priority rules for each of the plurality of partitions based on user input. The management logic performs automated resource fault management for the resources assigned to the plurality of partitions based on the priority rules. | 06-23-2011 |
20110154350 | AUTOMATED CLOUD WORKLOAD MANAGEMENT IN A MAP-REDUCE ENVIRONMENT - A computing device associated with a cloud computing environment identifies a first worker cloud computing device from a group of worker cloud computing devices with available resources sufficient to meet required resources for a highest-priority task associated with a computing job including a group of prioritized tasks. A determination is made as to whether an ownership conflict would result from an assignment of the highest-priority task to the first worker cloud computing device based upon ownership information associated with the computing job and ownership information associated with at least one other task assigned to the first worker cloud computing device. The highest-priority task is assigned to the first worker cloud computing device in response to determining that the ownership conflict would not result from the assignment of the highest-priority task to the first worker cloud computing device. | 06-23-2011 |
20110154351 | Tunable Error Resilience Computing - An attribute of a descriptor associated with a task informs a runtime environment of which instructions a processor is to run to schedule a plurality of resources for completion of the task in accordance with a level of quality of service in a service level agreement. | 06-23-2011 |
20110154352 | MEMORY MANAGEMENT SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT - According to one aspect of the present disclosure a method and technique for managing memory access is disclosed. The method includes setting a memory databus utilization threshold for each of a plurality of processors of a data processing system to maintain memory databus utilization of the data processing system at or below a system threshold. The method also includes monitoring memory databus utilization for the plurality of processors and, in response to determining that memory databus utilization for at least one of the processors is below its threshold, reallocating at least a portion of unused databus utilization from the at least one processor to at least one of the other processors. | 06-23-2011 |
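The reallocation step described above — shifting unused databus headroom from under-threshold processors to the others — can be sketched as follows. The equal split of spare headroom among over-threshold processors is an assumption; the patent does not specify the redistribution policy.

```python
# Sketch: processors below their databus-utilization threshold donate their
# unused headroom, which is split among processors at or above threshold.

def reallocate(thresholds, usage):
    """thresholds/usage: {cpu: utilization percent}.
    Returns new per-CPU limits with spare headroom redistributed."""
    spare = 0.0
    new_limits = dict(thresholds)
    for cpu, used in usage.items():
        if used < thresholds[cpu]:
            spare += thresholds[cpu] - used
            new_limits[cpu] = used            # donor shrinks to current use
    hungry = [c for c, u in usage.items() if u >= thresholds[c]]
    for cpu in hungry:
        new_limits[cpu] += spare / len(hungry)
    return new_limits

limits = reallocate({"cpu0": 40.0, "cpu1": 40.0},
                    {"cpu0": 10.0, "cpu1": 40.0})
```

Here cpu0's 30 points of unused headroom move to cpu1, while the system-wide total stays at 80.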
20110154353 | Demand-Driven Workload Scheduling Optimization on Shared Computing Resources - Systems and methods implementing a demand-driven workload scheduling optimization of shared resources used to execute tasks submitted to a computer system are disclosed. Some embodiments include a method for demand-driven computer system resource optimization that includes receiving a request to execute a task (said request including the task's required execution time and resource requirements), selecting a prospective execution schedule meeting the required execution time and a computer system resource meeting the resource requirement, determining (in response to the request) a task execution price for using the computer system resource according to the prospective execution schedule, and scheduling the task to execute using the computer system resource according to the prospective execution schedule if the price is accepted. The price varies as a function of availability of the computer system resource at times corresponding to the prospective execution schedule, said availability being measured at the time the price is determined. | 06-23-2011 |
20110154354 | METHOD AND PROGRAM FOR RECORDING OBJECT ALLOCATION SITE - A method, system, and program for recording an object allocation site. In the structure of an object, a pointer to a class of an object is replaced by a pointer to an allocation site descriptor which is unique to each object allocation site, a common allocation site descriptor is used for objects created at the same allocation site, and the class of the object is accessed through the allocation site descriptor. | 06-23-2011 |
20110154355 | METHOD AND SYSTEM FOR RESOURCE ALLOCATION FOR THE ELECTRONIC PREPROCESSING OF DIGITAL MEDICAL IMAGE DATA - A method and a system, for resource allocation provided for implementation of the method, are specified for the electronic preprocessing of digital medical image data. In at least one embodiment, provision is subsequently made to classify a plurality of preprocessing jobs, in particular by way of a classifier module, to determine whether they were generated interactively by a user request or automatically. Each preprocessing job is placed in a queue in accordance with the classification, in particular by way of an execution coordination module of the system. Data processing resources for job execution are assigned to each preprocessing job taking account of the classification, in particular by way of a resource allocation module of the system, with interactive preprocessing jobs being handled with higher priority than automatic preprocessing orders. | 06-23-2011 |
20110161972 | GOAL ORIENTED PERFORMANCE MANAGEMENT OF WORKLOAD UTILIZING ACCELERATORS - A method, information processing system, and computer readable storage medium are provided for dynamically managing accelerator resources. A first set of hardware accelerator resources is initially assigned to a first information processing system, and a second set of hardware accelerator resources is initially assigned to a second information processing system. Jobs running on the first and second information processing systems are monitored. When one of the jobs fails to satisfy a goal, at least one hardware accelerator resource in the second set of hardware accelerator resources from the second information processing system are dynamically reassigned to the first information processing system. | 06-30-2011 |
20110161973 | ADAPTIVE RESOURCE MANAGEMENT - Allocation of resources across multiple consumers allows efficient utilization of shared resources. Observed usages of resources by consumers over time intervals are used to determine a total throughput of resources by the consumers. The total throughput of resources is used to determine allocation of resources for a subsequent time interval. The consumers are associated with priorities used to determine their allocations. Minimum and maximum resource guarantees may be associated with consumers. The resource allocation aims to allocate resources based on the priorities of the consumers while aiming to avoid starvation by any consumer. The resource allocation allows efficient usage of network resources in a database storage system storing multiple virtual databases. | 06-30-2011 |
20110161974 | Methods and Apparatus for Parallelizing Heterogeneous Network Communication in Smart Devices - The present disclosure relates to devices, implementations and techniques for task scheduling. Specifically, task scheduling in an electronic device that has a multi-processing environment and supports network interface devices. | 06-30-2011 |
20110161975 | REDUCING CROSS QUEUE SYNCHRONIZATION ON SYSTEMS WITH LOW MEMORY LATENCY ACROSS DISTRIBUTED PROCESSING NODES - A method for efficient dispatch/completion of a work element within a multi-node data processing system. The method comprises: selecting specific processing units from among the processing nodes to complete execution of a work element that has multiple individual work items that may be independently executed by different ones of the processing units; generating an allocated processor unit (APU) bit mask that identifies at least one of the processing units that has been selected; placing the work element in a first entry of a global command queue (GCQ); associating the APU mask with the work element in the GCQ; and responsive to receipt at the GCQ of work requests from each of the multiple processing nodes or the processing units, enabling only the selected specific ones of the processing nodes or the processing units to be able to retrieve work from the work element in the GCQ. | 06-30-2011 |
20110161976 | METHOD TO REDUCE QUEUE SYNCHRONIZATION OF MULTIPLE WORK ITEMS IN A SYSTEM WITH HIGH MEMORY LATENCY BETWEEN PROCESSING NODES - A method efficiently dispatches/completes a work element within a multi-node, data processing system that has a global command queue (GCQ) and at least one high latency node. The method comprises: at the high latency processor node, work scheduling logic establishing a local command/work queue (LCQ) in which multiple work items for execution by local processing units can be staged prior to execution; a first local processing unit retrieving via a work request a larger chunk size of work than can be completed in a normal work completion/execution cycle by the local processing unit; storing the larger chunk size of work retrieved in a local command/work queue (LCQ); enabling the first local processing unit to locally schedule and complete portions of the work stored within the LCQ; and transmitting a next work request to the GCQ only when all the work within the LCQ has been dispatched by the local processing units. | 06-30-2011 |
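The local command/work queue (LCQ) idea above — fetch a large chunk from the global command queue (GCQ) in one high-latency round trip, drain it locally, and only then request more — can be sketched as follows. The chunk size and class names are illustrative assumptions.

```python
# Sketch: a high-latency node amortizes GCQ synchronization by staging a
# chunk of work items in a local queue and dispatching from there.

class GlobalCommandQueue:
    def __init__(self, work_items):
        self.items = list(work_items)

    def request_chunk(self, chunk_size):
        """One round trip retrieves up to chunk_size work items."""
        chunk, self.items = self.items[:chunk_size], self.items[chunk_size:]
        return chunk

def run_high_latency_node(gcq, chunk_size):
    completed, round_trips = [], 0
    while True:
        lcq = gcq.request_chunk(chunk_size)   # work staged locally
        round_trips += 1
        if not lcq:
            break                             # GCQ drained
        completed.extend(lcq)                 # local units drain the LCQ
    return completed, round_trips

gcq = GlobalCommandQueue(range(10))
done, trips = run_high_latency_node(gcq, chunk_size=4)
```

Ten work items cost only four round trips (including the final empty fetch); with per-item requests it would have been eleven.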
20110161977 | METHOD AND DEVICE FOR DATA PROCESSING - The design of a coupling between a traditional processor, in particular a sequential processor, and a reconfigurable field of data processing units, in particular a runtime-reconfigurable field of data processing units, is described. | 06-30-2011 |
20110161978 | JOB ALLOCATION METHOD AND APPARATUS FOR A MULTI-CORE SYSTEM - A method and apparatus for efficiently allocating jobs to processing cores included in a computing system, are provided. The multi-core system includes a plurality of cores that may collect performance information of each respective core while the cores are executing a requested task in parallel. The multi-core system allocates additional jobs of the requested task to the cores based on the performance information and the amount of jobs remaining. | 06-30-2011 |
20110173628 | SYSTEM AND METHOD OF CONTROLLING POWER IN AN ELECTRONIC DEVICE - A method of utilizing a node power architecture (NPA) system, the method includes receiving a request to create a client, determining whether a resource is compatible with the request, and returning a client handle when the resource is compatible with the request. | 07-14-2011 |
20110179422 | Shared Resource Management - Systems, methods, apparatus, and computer program products are provided for monitoring and allocating shared resources. For example, in one embodiment, the status of resource dependent entities is continuously monitored to determine the current use of a shared resource. When a resource dependent entity requires use of the shared resource, (a) a request for use of the shared resource can be generated and (b) a determination can be made as to whether any of the current allocations of the shared resource can be released for use by the resource dependent entity. | 07-21-2011 |
20110179423 | MANAGING LATENCIES IN A MULTIPROCESSOR INTERCONNECT - In a computing system having a plurality of transaction source nodes issuing transactions into a switching fabric, an underserviced node notifies source nodes in the system that it needs additional system bandwidth to timely complete an ongoing transaction. The notified nodes continue to process already started transactions to completion, but stop the introduction of new traffic into the fabric until such time as the underserviced node indicates that it has progressed to a preselected point. | 07-21-2011 |
20110185364 | EFFICIENT UTILIZATION OF IDLE RESOURCES IN A RESOURCE MANAGER - Embodiments are directed to dynamically allocating processing resources among a plurality of resource schedulers. A resource manager dynamically allocates resources to a first resource scheduler. The resource manager is configured to dynamically allocate resources among a plurality of resource schedulers, and each scheduler is configured to manage various processing resources. The resource manager determines that at least one of the processing resources dynamically allocated to the first resource scheduler is idle. The resource manager determines that at least one other resource scheduler needs additional processing resources and, based on the determination, loans the determined idle processing resource of the first resource scheduler to a second resource scheduler. | 07-28-2011 |
20110185365 | DATA PROCESSING SYSTEM, METHOD FOR PROCESSING DATA AND COMPUTER PROGRAM PRODUCT - A computer-implemented data processing system, computer-implemented method and computer program product for processing data. The system includes: a scheduler; a processor system; and at least one producer for executing a task. The scheduler is operable to allocate to the producer with respect to a scheme, a processing time of a processing resource of the processor system. The producer is operable to execute the task using said processor system during the allocated processing time. | 07-28-2011 |
20110191781 | RESOURCES MANAGEMENT IN DISTRIBUTED COMPUTING ENVIRONMENT - A method, system and computer program product for determining resource allocation in a distributed computing environment. An embodiment may include identifying resources in a distributed computing environment, computing provisioning parameters, computing configuration parameters and quantifying service parameters in response to a set of service level agreements (SLA). The embodiment may further include iteratively computing a completion time required for completion of the assigned task and a cost. Embodiments may further include computing an optimal resources configuration and computing at least one of an optimal completion time and an optimal cost corresponding to the optimal resources configuration. Embodiments may further include dynamically modifying the optimal resources configuration in response to at least one change in at least one of provisioning parameters, computing parameters and quantifying service parameters. | 08-04-2011 |
20110191782 | APPARATUS AND METHOD FOR PROCESSING DATA - A data processing apparatus and method for allocating data to processors, allowing the processors to process the data efficiently. The data processing apparatus may predict a result of processing data, based on a workload for the data, according to a number of processors, and may determine the number of processors to be allocated with the data, using the predicted processing result. | 08-04-2011 |
20110197196 | DYNAMIC JOB RELOCATION IN A HIGH PERFORMANCE COMPUTING SYSTEM - A method and apparatus are described for dynamic relocation of a job executing on multiple nodes of a high performance computing (HPC) system. The job is dynamically relocated when the messaging network is in a quiescent state. The messaging network is quiesced by signaling the job to suspend execution at a global collective operation of the job where the messaging of the job is known to be in a quiescent state. When all the nodes have reached the global collective operation and paused, the job is relocated and execution is resumed at the new location. | 08-11-2011 |
20110197197 | WIDGET FRAMEWORK, REAL-TIME SERVICE ORCHESTRATION, AND REAL-TIME RESOURCE AGGREGATION - A method to optimize calls to a service by components of an application running on an application server is provided. The method includes receiving a first call and a second call, the first call made to a service by a first one of a plurality of components included in the application, and the second call made to the service by a second one of the plurality of components; selecting one of a plurality of optimizations, the plurality of optimizations including orchestrating the first call and the second call into a third call to the service; and, in response to the selecting of the orchestrating of the first call and the second call into the third call as the one of the plurality of optimizations, orchestrating the first call and the second call into the third call. | 08-11-2011 |
20110202925 | OPTIMIZED CAPACITY PLANNING - A computer implemented method, system and/or program product determine capacity planning of resources allocation for an application scheduled to execute on a virtual machine from a set of multiple applications by computing a mean associated with a pool of pre-defined resources utilization over a time interval; computing a variance associated with the pool of pre-defined resources utilization over the same time interval; identifying a set of resource to execute the scheduled application from the pool of pre-defined resources, wherein the pool of pre-defined resources is created from a pre-defined Service Level Agreement (SLA); and allocating a set of fixed resources from the pool of pre-defined resources to execute the application based on the mean resource utilization. | 08-18-2011 |
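The mean-and-variance sizing step described in this abstract can be sketched briefly. The safety-margin policy (mean plus k standard deviations, with `headroom_stddevs` as a knob) is an assumption for illustration — the abstract only says the mean and variance of pool utilization are computed and used:

```python
import statistics

def plan_capacity(utilization_samples, headroom_stddevs=2.0):
    """Size a fixed resource allocation from observed pool utilization:
    mean plus a margin of k standard deviations (k is an assumed
    policy knob, not specified in the abstract)."""
    mean = statistics.mean(utilization_samples)
    variance = statistics.pvariance(utilization_samples)
    return mean + headroom_stddevs * variance ** 0.5

samples = [40, 50, 60, 50]          # e.g. CPU% over a time interval
print(plan_capacity(samples))
```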
20110202926 | Computer System Performance by Applying Rate Limits to Control Block Tenancy - Embodiments of the invention are provided to enable fair and balanced allocation of control blocks to support processing of requests received from a client machine. The server is configured with tools to manage an account balance of control block availability for each service class. The account balance is periodically adjusted based upon usage, tenancy, deficits, and passage of time. Processing of one or more tasks in a service class is supported when the credit value in the service class account is equal to or greater than the entry cost estimated for the request. | 08-18-2011 |
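The per-service-class credit account described above can be sketched as follows. The starting balances and entry costs are illustrative numbers only, and `ServiceClassAccount` is a hypothetical name:

```python
class ServiceClassAccount:
    """Credit account gating control-block admission for one service class."""
    def __init__(self, credit):
        self.credit = credit

    def try_admit(self, entry_cost):
        # A request is admitted only if the balance covers its entry cost.
        if self.credit >= entry_cost:
            self.credit -= entry_cost
            return True
        return False

    def replenish(self, amount):
        # Periodic adjustment, e.g. with the passage of time.
        self.credit += amount

acct = ServiceClassAccount(credit=10)
print(acct.try_admit(entry_cost=7))   # True, balance now 3
print(acct.try_admit(entry_cost=5))   # False, insufficient credit
acct.replenish(4)
print(acct.try_admit(entry_cost=5))   # True, balance now 2
```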
20110202927 | Apparatus, Method and System for Aggregating Computing Resources - A system for executing applications designed to run on a single SMP computer on an easily scalable network of computers, while providing each application with computing resources, including processing power, memory and others that exceed the resources available on any single computer. A server agent program, a grid switch apparatus and a grid controller apparatus are included. Methods for creating processes and resources, and for accessing resources transparently across multiple servers are also provided. | 08-18-2011 |
20110202928 | RESOURCE MANAGEMENT METHOD AND EMBEDDED DEVICE - Provided is a resource management method in a system which individually limits a resource amount used by a software module. | 08-18-2011 |
20110209156 | METHODS AND APPARATUS RELATED TO MIGRATION OF CUSTOMER RESOURCES TO VIRTUAL RESOURCES WITHIN A DATA CENTER ENVIRONMENT - In one embodiment, a processor-readable medium can be configured to store code representing instructions to be executed by a processor. The code can include code to receive an indicator that a set of virtual resources has been identified for quarantine at a portion of a data center. The code can also include code to execute, during at least a portion of a quarantine time period, at least a portion of a virtual resource from the set of virtual resources at a quarantined portion of hardware of the data center that is dedicated to execute the set of virtual resources in response to the indicator and not execute virtual resources associated with non-quarantine operation. | 08-25-2011 |
20110209157 | RESOURCE ALLOCATION METHOD, PROGRAM, AND RESOURCE ALLOCATION APPARATUS - A resource allocation apparatus according to the present invention includes a system information acquisition unit configured to acquire program congestion pattern information indicating a group of programs executed concurrently on a system; and a resource allocation pattern determination unit configured to generate a plurality of resource allocation patterns for allocating the resource to a plurality of programs included in the group of programs indicated in the program congestion pattern information, to calculate the total amount of processing needed to execute the programs when the resource is allocated to the programs included in the group of programs by the generated resource allocation patterns, and then to determine an optimal resource allocation pattern among the generated resource allocation patterns as a resource allocation pattern for the programs included in the group of programs based on the calculated total amount of processing. | 08-25-2011 |
20110214129 | MANAGEMENT OF MULTIPLE RESOURCE PROVIDERS - A device receives a request for an amount of a resource. It determines for each resource provider in a set of resource providers a current load, a requested load corresponding to the requested amount of the resource, and an additional load corresponding to an expected state of an application. It determines for each of the resource providers an expected total load on the basis of the current load, the requested load, and the additional load. It subsequently selects from the set of resource providers a preferred resource provider on the basis of the expected total loads. The resource may be one of the following: memory, processing time, data throughput, power, and usage of a device. | 09-01-2011 |
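The provider-selection rule in this abstract — minimize current load plus requested load plus additional anticipated load — is simple enough to sketch directly. The field names and numbers are illustrative:

```python
def select_provider(providers, requested_load):
    """Pick the resource provider with the lowest expected total load:
    current load + requested load + additional anticipated load."""
    def expected_total(p):
        return p["current"] + requested_load + p["additional"]
    return min(providers, key=expected_total)

providers = [
    {"name": "A", "current": 60, "additional": 20},
    {"name": "B", "current": 50, "additional": 40},
    {"name": "C", "current": 70, "additional": 5},
]
print(select_provider(providers, requested_load=10)["name"])  # "C": 70+10+5=85 beats 90 and 100
```

Note that the provider with the lowest *current* load (B) is not chosen, because the expected state of the application adds more load there than elsewhere.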
20110219381 | MULTIPROCESSOR SYSTEM WITH MULTIPLE CONCURRENT MODES OF EXECUTION - A multiprocessor system supports multiple concurrent modes of speculative execution. Speculation identification numbers (IDs) are allocated to speculative threads from a pool of available numbers. The pool is divided into domains, with each domain being assigned to a mode of speculation. Modes of speculation include TM, TLS, and rollback. Allocation of the IDs is carried out with respect to a central state table and using hardware pointers. The IDs are used for writing different versions of speculative results in different ways of a set in a cache memory. | 09-08-2011 |
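Allocating speculation IDs from per-mode domains, as described above, can be sketched as a pool of free sets keyed by mode. The domain sizes and the class name `SpeculationIDPool` are arbitrary illustrations, not the patented layout:

```python
class SpeculationIDPool:
    """ID pool partitioned into per-mode domains (TM, TLS, rollback)."""
    def __init__(self, domains):
        # domains: mode -> iterable of IDs reserved for that mode
        self.free = {mode: set(ids) for mode, ids in domains.items()}
        self.in_use = {}

    def allocate(self, mode):
        if not self.free[mode]:
            return None                  # this mode's domain is exhausted
        sid = self.free[mode].pop()
        self.in_use[sid] = mode
        return sid

    def release(self, sid):
        # Return the ID to the domain it was drawn from.
        self.free[self.in_use.pop(sid)].add(sid)

pool = SpeculationIDPool({"TM": range(0, 4), "TLS": range(4, 8), "rollback": range(8, 10)})
sid = pool.allocate("TM")
print(sid in range(0, 4))   # True: TM IDs come only from the TM domain
```

Because each mode draws only from its own domain, exhausting the TM domain cannot starve TLS or rollback speculation of IDs.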
20110219382 | METHOD, SYSTEM, AND APPARATUS FOR TASK ALLOCATION OF MULTI-CORE PROCESSOR - A system for task allocation of a multi-core processor is provided. The system includes a task allocator and a plurality of sub-processing systems. Each of the sub-processing systems comprises a state register, a processor core, and a buffer; the state register is configured to recognize the state of the sub-processing system and transmit state information of the sub-processing system to the task allocator. The state information comprises: a first state bit configured to indicate whether the sub-processing system is in an idle state; and a second state bit configured to indicate a specific state of the sub-processing system. The task allocator is configured to allocate tasks to the sub-processing systems according to a priority determined by the state information sent by the state registers of the sub-processing systems. | 09-08-2011 |
20110225592 | Contention Analysis in Multi-Threaded Software - A contention log contains data for contentions that occur during execution of a multi-threaded application, such as a timestamp of the contention, contention length, contending thread identity, contending thread call stack, and contended-for resource identity. After execution of the application ends, contention analysis data generated from the contention log shows developers information such as total number of contentions for particular resource(s), total number of contentions encountered by thread(s), a list of resources that were most contended for, a list of threads that were most contending, a plot of the number of contentions per time interval during execution of the application, and so on. A developer may pivot between details about threads and details about resources to explore relationships between thread(s) and resource(s) involved in contention(s). Other information may also be displayed, such as call stacks, program source code, and process thread ownership, for example. | 09-15-2011 |
20110225593 | INTERFACE-BASED ENVIRONMENTALLY SUSTAINABLE COMPUTING - Implementation of interface-based environmentally sustainable computing is provided. A method includes retrieving usage characteristics of a process scheduled to execute on a computer system and determining an environmental impact of the process on the computer system by mapping the usage characteristics of the process to corresponding environmental costs of the usage characteristics. The method also includes implementing an action on the computer system in response to the environmental impact. The actions are pre-configured for administration based upon a threshold level of environmental impact associated with the process and/or user selection. | 09-15-2011 |
20110231857 | CACHE PERFORMANCE PREDICTION AND SCHEDULING ON COMMODITY PROCESSORS WITH SHARED CACHES - A method is described for scheduling in an intelligent manner a plurality of threads on a processor having a plurality of cores and a shared last level cache (LLC). In the method, a first and second scenario having a corresponding first and second combination of threads are identified. The cache occupancies of each of the threads for each of the scenarios are predicted. The predicted cache occupancies being a representation of an amount of the LLC that each of the threads would occupy when running with the other threads on the processor according to the particular scenario. One of the scenarios is identified that results in the least objectionable impacts on all threads, the least objectionable impacts taking into account the impact resulting from the predicted cache occupancies. Finally, a scheduling decision is made according to the one of the scenarios that results in the least objectionable impacts. | 09-22-2011 |
20110231858 | BURST ACCESS PROTOCOL - Methods and systems provide a burst access protocol that enables efficient transfer of data between a first and a second processor via a data interface whose access set up time could present a communication bottleneck. Data, indices, and/or instructions are transmitted in a static table from the first processor and stored in memory accessible to the second processor. Later, the first processor transmit to the second processor a dynamic table which specifies particular data, indices and/or instructions within the static table that are to be implemented by the second processor. The second processor uses the dynamic table to implement the identified particular subset of data, indices and/or instructions. By transmitting the bulk of data, indices and/or instructions to the second processor in a large static table, the burst access protocol enables efficient use of data interfaces which can transmit large amounts of information, but require relatively long access setup times. | 09-22-2011 |
20110231859 | PROCESS ASSIGNING DEVICE, PROCESS ASSIGNING METHOD, AND COMPUTER PROGRAM - A process assigning device executes an operation including receiving an assignment request including device identification information, content identification information and process identification information; determining whether identification information of another device exists on the basis of the content identification information indicated by the received assignment request; and, when determining that the identification information of the other device does not exist, storing the device identification information included in the assignment request in association with the content identification information and the process identification information, and the process identification information in association with the device identification information. When the processor determines that the identification information of the other device exists, the processor causes the device identification information included in the assignment request to be stored together with assigned-part information indicating the part of the content data that varies by device identification information. | 09-22-2011 |
20110239222 | SYSTEM AND METHOD OF DETERMINING APPLICABLE INSTALLATION INFORMATION OF APPARATUS - A computer and method obtain user input from an input device to determine applicable installation information of an apparatus according to the resource consumption of the apparatus. The computer and method are capable of obtaining resource consumption information of the apparatus according to the installation material from an input device and are operable to perform transformation processing to obtain installation data according to the resource consumption information. Differences between the installation data and standard specifications are calculated, and the standard specification corresponding to the smallest difference is found. The specified standard specification is outputted. | 09-29-2011 |
20110239223 | COMPUTATION RESOURCE CONTROL APPARATUS, COMPUTATION RESOURCE CONTROL METHOD, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM - A computation resource control apparatus includes an activation unit, a first queue managing unit, an allocating unit and a second queue managing unit. The activation unit activates a computation resource being in a stop state in accordance with a computation request. The first queue managing unit adds the computation resource which is being activated to a first queue. The allocating unit allocates the computation resource, which is output from the first queue, to the computation request to execute a computation process corresponding to the computation request. The second queue managing unit adds the computation resource which has completed the computation process to a second queue and places the computation resource, which is output from the second queue, in the stop state. | 09-29-2011 |
20110239224 | CALCULATION PROCESSING APPARATUS AND CONTROL METHOD THEREOF - A calculation processing apparatus, which executes calculation processing based on a network composed by hierarchically connecting a plurality of processing nodes, assigns a partial area of a memory to each of the plurality of processing nodes, stores a calculation result of a processing node in a storable area of the partial area assigned to that processing node, and sets, as storable areas, areas that store the calculation results whose reference by all processing nodes connected to the subsequent stage of that processing node is complete. The apparatus determines, based on the storage states of calculation results in partial areas of the memory assigned to the processing node designated to execute the calculation processing of the processing nodes, and to processing nodes connected to the previous stage of the designated processing node, whether or not to execute a calculation of the designated processing node. | 09-29-2011 |
20110247000 | Mechanism for Tracking Memory Accesses in a Non-Uniform Memory Access (NUMA) System to Optimize Processor Task Placement - A mechanism for tracking memory accesses in a non-uniform memory access (NUMA) system to optimize processor task placement is disclosed. A method of embodiments of the invention includes creating a page table (PT) hierarchy associated with a thread to be run on a processor of a computing device, collecting access bit information from the PT hierarchy associated with the thread, wherein the access bit information includes any access bits in the PT hierarchy that are set by a memory management unit (MMU) of the processor to identify a page of memory accessed by the thread, determining memory access statistics for the thread, and utilizing the memory access statistics for the thread in a determination of whether to migrate the thread to another processor. | 10-06-2011 |
20110247001 | Resource Management In Computing Scenarios - This patent application pertains to urgency-based resource management in computing scenarios. One implementation can identify processes competing for resources on a system. The implementation can evaluate an urgency of individual competing processes. The implementation can also objectively allocate the resources among the competing processes in a manner that reduces a total of the urgencies of the competing processes. | 10-06-2011 |
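One way to reduce the total urgency across competing processes, in the spirit of this abstract, is a greedy marginal-benefit allocation. The per-process urgency curves and the greedy policy itself are assumptions for illustration; the abstract does not specify the allocation algorithm:

```python
def allocate_by_urgency(processes, units):
    """Greedy sketch: repeatedly give one resource unit to the process
    whose urgency it reduces the most. Each value in `processes` is an
    assumed urgency model: units held -> urgency."""
    alloc = {p: 0 for p in processes}
    for _ in range(units):
        # Marginal drop in urgency from one more unit, per process.
        best = max(processes,
                   key=lambda p: processes[p](alloc[p]) - processes[p](alloc[p] + 1))
        alloc[best] += 1
    return alloc

# Urgency falls as a process gets more units; "db" is far more urgent.
processes = {
    "db":  lambda n: max(0, 10 - 4 * n),
    "log": lambda n: max(0, 3 - 1 * n),
}
print(allocate_by_urgency(processes, units=4))  # {'db': 3, 'log': 1}
```

The urgent `db` process receives units until its urgency bottoms out, after which the next unit goes where it still buys a reduction.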
20110247002 | Dynamic System Scheduling - Resources of a partitionable computer system are partitioned into: (i) a first partition for first jobs, the first jobs being at least one of small and short running; and (ii) a second partition for second jobs, the second jobs being at least one of large and long running. The computer system is run as partitioned in the partitioning step and the partitioning is periodically re-evaluated against at least one threshold for at least one of the partitions. If the periodic re-evaluation suggests that one of the first and second partitions is underutilized, the resources of the partitionable computer system are dynamically re-partitioned to reassign at least some of the resources of the partitionable computer system from the underutilized one of the first and second partitions to another one of the first and second partitions. | 10-06-2011 |
20110247003 | Predictive Dynamic System Scheduling - Resources of a partitionable computer system are partitioned into at least first and second partitions, in accordance with a first or second mode of operation of the partitionable computer system. The system is run in the first or second mode, partitioned in accordance with the partitioning step. Periodically, it is determined whether the computer system should be switched from one mode to the other mode. If so, the computer system is run in the other mode, partitioned in accordance with the other mode. The first and second modes of operation are defined in accordance with historical observations of the partitionable computer system. The periodic determination is carried out based on predictions in accordance with the historical observations. | 10-06-2011 |
20110247004 | Information Processing Apparatus - According to one embodiment, an information processing apparatus is provided. The information processing apparatus which performs a signaling process with an external apparatus through a network and a multimedia process of data, includes: first and second CPU cores each including one or more CPU cores; a first controller configured to allocate one of the signaling process and the multimedia process to the first CPU core, and the other of the signaling process and the multimedia process to the second CPU core; and a second controller configured to allocate a process which is different from the multimedia process and the signaling process to one of the first and second CPU cores, according to process states of the first and second CPU cores. | 10-06-2011 |
20110258633 | Information processing system and use right collective management method - Disclosed is an information processing system including plural information processing apparatuses that have respective hardware resources including hardware resources to be licensed, each information processing apparatus performing information processing using the licensed hardware resources in which use rights are allocated; and a management apparatus that is connected to the plural information processing apparatuses and manages the hardware resources of the plural information processing apparatuses. The management apparatus includes a use right information holding unit that holds use right information corresponding to the use rights of the hardware resources, and a use right allocation unit that allocates the use rights to the hardware resources on a hardware resource basis in accordance with the held use right information. | 10-20-2011 |
20110265092 | PARALLEL COMPUTER SYSTEM, JOB SERVER, JOB SCHEDULING METHOD AND JOB SCHEDULING PROGRAM - A parallel computer system comprising a node group having numbers of nodes connected by a network, in which a job scheduler of a job server that schedules jobs to be executed by a node of the node group comprises a temperature calculating unit which with a node being used of the node group as an imaginary heat source and with the assumption that a quantity of heat is conducted from the heat source to a surrounding node, calculates a temperature of a surrounding free node based on a distance from the heat source, a free region extracting unit which selects, from a plurality of temperature groups obtained by grouping free nodes on a certain temperature range basis, a temperature group meeting the number of free nodes required by a job according to a temperature and takes out a lowest temperature free node from the selected temperature group as a center node, and a node selecting unit which sequentially selects the necessary number of free nodes starting with a shortest distance free node centered around the center node. | 10-27-2011 |
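The heat-source model in this abstract — busy nodes heat their neighbors, and jobs are placed on the coldest free nodes — can be sketched with a simple distance-based decay. The `1/(1+d)` decay function and the 2-D coordinate layout are assumptions; the patent only says temperature is computed from distance to the heat source:

```python
def node_temperatures(busy_nodes, free_nodes):
    """Treat each busy node as a heat source; a free node's temperature
    falls off with Manhattan distance (1/(1+d) is an assumed decay model)."""
    temps = {}
    for fx, fy in free_nodes:
        temps[(fx, fy)] = sum(
            1.0 / (1 + abs(fx - bx) + abs(fy - by)) for bx, by in busy_nodes
        )
    return temps

busy = [(0, 0)]
free = [(0, 1), (3, 3)]
temps = node_temperatures(busy, free)
coldest = min(temps, key=temps.get)
print(coldest)   # (3, 3): the free node farthest from the heat source
```

Picking the lowest-temperature free node as the center of a new job keeps hot (busy) regions of the machine from accumulating further thermal load.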
20110265093 | Computer System and Program Product - A computer system includes a plurality of processors, a shared resource being used by the processors, and a storage unit in which management information corresponding to the shared resource is stored. The management information includes a semaphore for each OS managing a task which runs on the processors, a queue in which information for specifying a processor which has requested acquisition of the shared resource is stored in series, and a resource counter indicating a remaining number of the shared resources which can be acquired. Each of the processors includes a counter obtaining section that obtains a value of the resource counter, an acquisition decision-making section that makes a decision as to whether or not the shared resource can be acquired, and a resource acquiring section that stores information for specifying the processor in the queue if it is decided that the shared resource cannot be acquired. | 10-27-2011 |
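The resource counter plus waiter queue described above behaves like a counting semaphore with an explicit FIFO of requesting processors. A simplified single-OS sketch (class name and the direct hand-off on release are illustrative assumptions):

```python
from collections import deque

class SharedResourceSemaphore:
    """Counting semaphore with a FIFO of waiting processor IDs, mirroring
    the resource counter + queue in the management information."""
    def __init__(self, count):
        self.counter = count       # remaining acquirable resources
        self.queue = deque()       # processors waiting for a resource

    def acquire(self, proc_id):
        if self.counter > 0:
            self.counter -= 1
            return True
        self.queue.append(proc_id)   # record the requester for later wake-up
        return False

    def release(self):
        if self.queue:
            self.queue.popleft()     # hand the freed resource to the next waiter
        else:
            self.counter += 1

sem = SharedResourceSemaphore(count=1)
print(sem.acquire("cpu0"))   # True
print(sem.acquire("cpu1"))   # False: cpu1 is queued instead
sem.release()                # resource passes directly to cpu1
print(len(sem.queue))        # 0
```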
20110265094 | LOGIC FOR SYNCHRONIZING MULTIPLE TASKS AT MULTIPLE LOCATIONS IN AN INSTRUCTION STREAM - Logic (also called “synchronizing logic”) in a co-processor (that provides an interface to memory) receives a signal (called a “declaration”) from each of a number of tasks, based on an initial determination of one or more paths (also called “code paths”) in an instruction stream (e.g. originating from a high-level software program or from low-level microcode) that a task is likely to follow. Once a task (also called “disabled” task) declares its lack of a future need to access a shared data, the synchronizing logic allows that shared data to be accessed by other tasks (also called “needy” tasks) that have indicated their need to access the same. Moreover, the synchronizing logic also allows the shared data to be accessed by the other needy tasks on completion of access of the shared data by a current task (assuming the current task was also a needy task). | 10-27-2011 |
20110271285 | MANAGING EXCLUSIVE ACCESS TO SYSTEM RESOURCES - Presented is a method of managing exclusive access to a resource. The method includes determining the anticipated wait time for a task to obtain exclusive access to a resource, and processing the task depending on that anticipated wait time. | 11-03-2011 |
20110276977 | DISTRIBUTED WORKFLOW EXECUTION - A workflow is designated for execution across a plurality of autonomous computational entities automatically. Among other things, the cost of computation is balanced with the cost of communication among computational entities to reduce total execution time of a workflow. In other words, a balance is struck between grouping tasks for execution on a single computational entity and segmenting tasks for execution across multiple computational entities. | 11-10-2011 |
20110276978 | System and Method for Dynamic CPU Reservation - A computer readable storage medium storing a set of instructions executable by a processor. The set of instructions is operable to receive an instruction to reserve a processor of a system including a plurality of processors, receive an instruction to perform a task, determine whether the task has affinity for the reserved processor, execute the task using the reserved processor if the task has affinity for the reserved processor, and execute the task using one of the processors other than the reserved processor if the task does not have affinity for the reserved processor. | 11-10-2011 |
20110276979 | Non-Real Time Thread Scheduling - A hard real time (HRT) thread scheduler and a non-real time (NRT) thread scheduler for allocating processor resources among HRT threads and NRT threads are disclosed. The HRT thread scheduler communicates with an HRT thread table including a plurality of entries specifying a temporal order in which execution cycles are allocated to one or more HRT threads. If an HRT thread identified by the HRT thread table is unable to be scheduled during the current execution cycle, the NRT thread scheduler accesses an NRT thread table which includes a plurality of entries specifying a temporal order for allocating execution cycles to one or more NRT threads. In an execution cycle where an HRT thread is not scheduled, the NRT thread scheduler identifies an NRT thread from the NRT thread table and an instruction from the identified NRT thread is executed during the execution cycle. | 11-10-2011 |
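The two-table fallback scheme in this abstract can be sketched per execution cycle. The table contents, the `None` convention for cycles deliberately left to NRT threads, and the modulo indexing are all illustrative assumptions:

```python
def schedule_cycle(cycle, hrt_table, nrt_table, runnable):
    """Pick the thread for one execution cycle: the HRT table entry wins
    if that thread is runnable; otherwise fall back to the NRT table."""
    hrt = hrt_table[cycle % len(hrt_table)]
    if hrt is not None and hrt in runnable:
        return hrt
    return nrt_table[cycle % len(nrt_table)]

hrt_table = ["H0", "H1", None, "H0"]    # None = cycle left free for NRT work
nrt_table = ["N0", "N1"]
runnable = {"H0", "N0", "N1"}           # H1 is blocked this cycle

print([schedule_cycle(c, hrt_table, nrt_table, runnable) for c in range(4)])
# ['H0', 'N1', 'N0', 'H0']
```

Cycle 1 shows the key behavior: the HRT entry (H1) cannot be scheduled, so the cycle is not wasted but given to the next NRT thread in temporal order.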
20110276980 | COMPUTING RESOURCE ALLOCATION DEVICE, COMPUTING RESOURCE ALLOCATION SYSTEM, COMPUTING RESOURCE ALLOCATION METHOD THEREOF AND PROGRAM - Provided is a computing resource allocation device capable of allocating computing resources to accommodate changing activity patterns. The device is equipped with an external environment recognition means that analyzes input values from sensors to specify the current environment, a memory means that stores a table in which the sensors required to specify the environment are correlated, a transition frequency computation means that computes the transition frequency at which a transition is made from an environment to another environment, and a computing resource allocation means that computes the amount of allocation of the computing resources to be used for the analysis based on the current environment by referencing the table and the transition frequency, and that allocates the computing resources for the analysis. | 11-10-2011 |
20110276981 | RUNTIME-RESOURCE MANAGEMENT - A runtime-resource management method, system, and product for managing resources available to application components in a portable device. The method, system, and product provide for loading one or more new application components into a portable device only if maximum runtime resources required by the one or more new application components are available in the portable device assuming loaded application components within the device are using the maximum runtime resources reserved by the loaded application components, reserving maximum runtime resources required by application components when application components are loaded into the portable device, and running loaded application components using only the runtime resources reserved for the loaded application components. | 11-10-2011 |
20110283289 | SYSTEM AND METHOD FOR MANAGING RESOURCES IN A PARTITIONED COMPUTING SYSTEM BASED ON RESOURCE USAGE VOLATILITY - A system and method for managing resources in a partitioned computing system using determined risk of resource saturation is disclosed. In one example embodiment, the partitioned computing system includes one or more partitions. A volatility of resource usage for each partition is computed based on computed resource usage gains/losses associated with each partition. A current resource usage of each partition is then determined. Further, a risk of resource saturation is determined by comparing the computed volatility of resource usage with the determined current resource usage of each partition. The resources in the partitioned computing system are then managed using the determined risk of resource saturation associated with each partition. | 11-17-2011 |
20110283290 | ALLOCATING STORAGE SERVICES - A system and method are provided for allocating storage resources. An exemplary method comprises providing a storage service catalog that lists storage services available for use. The exemplary method also comprises allowing a user to select a subset of the storage services from among the storage services via a self-service software tool. | 11-17-2011 |
20110283291 | MOBILE DEVICE AND APPLICATION SWITCHING METHOD - An object is to switch executions of applications appropriately from one to another when a plurality of applications use a limited resource. A mobile device ( | 11-17-2011 |
20110283292 | ALLOCATION OF PROCESSING TASKS - Methods and systems for allocating processing tasks between a plurality of processing resources ( | 11-17-2011 |
20110283293 | Method and Apparatus for Dynamic Allocation of Processing Resources - A method and apparatus for dynamic allocation of processing resources and tasks, including multimedia tasks. Tasks are queued, available processing resources are identified, and the available processing resources are allocated among the tasks. The available processing resources are provided with functional programs corresponding to the tasks. The tasks are performed using available processing resources to produce resulting data, and the resulting data is passed to an input/output device. | 11-17-2011 |
20110289506 | MANAGEMENT OF COMPUTING RESOURCES FOR APPLICATIONS - The subject matter of this disclosure can be implemented in, among other things, a method. In these examples, the method includes receiving a resource request message to obtain access to a computing resource, and storing the resource request message in a data repository that stores a collection of resource request messages received from a group of applications executing on the computing device. The method may also include, responsive to determining that the resource request message received from a first application has a highest priority of the collection of resource request messages, determining whether a second application currently has access to the computing resource, issuing a resource lost message to the second application to indicate that the second application has lost access to the computing resource, and issuing a resource request granted message to the first application, such that the first application obtains access to the computing resource. | 11-24-2011 |
20110289507 | RUNSPACE METHOD, SYSTEM AND APPARATUS - The present invention, known as runspace, relates to the field of computing system management, data processing and data communications, and specifically to synergistic methods and systems which provide resource-efficient computation, especially for decomposable many-component tasks executable on multiple processing elements, by using a metric space representation of code and data locality to direct allocation and migration of code and data, by performing analysis to mark code areas that provide opportunities for runtime improvement, and by providing a low-power, local, secure memory management system suitable for distributed invocation of compact sections of code accessing local memory. Runspace provides mechanisms supporting hierarchical allocation, optimization, monitoring and control, and supporting resilient, energy efficient large-scale computing. | 11-24-2011 |
20110296427 | Resource Allocation During Workload Partition Relocation - A method of relocating a workload partition (WPAR) from a departure logical partition (LPAR) to an arrival LPAR determines an amount of a resource allocated to the relocating WPAR on the departure LPAR and allocates to the relocating WPAR on the arrival LPAR an amount of the resource substantially equal to the amount of the resource allocated to the relocating WPAR on the departure LPAR. | 12-01-2011 |
20110296428 | REGISTER ALLOCATION TO THREADS - A method, system, and computer usable program product for improved register allocation in a simultaneous multithreaded processor. A determination is made that a thread of an application in the data processing environment needs more physical registers than are available to allocate to the thread. The thread is configured to utilize a logical register that is mapped to a memory register. The thread is executed utilizing the physical registers and the memory registers. | 12-01-2011 |
20110296429 | SYSTEM AND METHOD FOR MANAGEMENT OF LICENSE ENTITLEMENTS IN A VIRTUALIZED ENVIRONMENT - A management system and method for a virtualized environment includes a computer entity having a usage limitation based on an entitlement. A resource manager, using a processor and programmed on and executed from a memory storage device, is configured to manage resources in a virtualized environment. An entitlement-usage module is coupled to the resource manager and is configured to track entitlement-related constraints in accordance with changes in the virtualized environment to permit the resource manager to make allocation decisions which include the entitlement-related constraints to ensure that the usage limitation is met for the computer entity. | 12-01-2011 |
20110302589 | METHOD FOR THE DETERMINISTIC EXECUTION AND SYNCHRONIZATION OF AN INFORMATION PROCESSING SYSTEM COMPRISING A PLURALITY OF PROCESSING CORES EXECUTING SYSTEM TASKS - An information processing system includes two processing cores. The execution of an application by the system includes the execution of application tasks and the execution of system tasks, and the system includes a micro-kernel executing the system tasks, which are directly linked to hardware resources. The processing system includes a computation part of the micro-kernel executing system tasks relating to the switching of the tasks on a first core, and a control part of the micro-kernel executing, on a second core, system tasks relating to the control of the task allocation order on the first core. | 12-08-2011 |
20110302590 | PROCESS ALLOCATION SYSTEM, PROCESS ALLOCATION METHOD, PROCESS ALLOCATION PROGRAM - Communication performance of inter-process communication is enhanced for the entire program processing. A process allocation system is provided with a processor which executes a process including a process for performing mutual inter-process communication and holding a logical process placement system, and a process allocation module for allocating each process to the processor, wherein the process allocation module is provided with an inter-processor communication capacity acquisition module for acquiring the communication performance of inter-processor communication which the processor performs with another, different processor, a module for specifying the dimensional direction in which the communication traffic of inter-process communication is high in the logical process placement system, and a module for determining a processor having a higher communication performance of inter-processor communication as the allocation destination of a process which is set in the dimensional direction of higher inter-process communication traffic. | 12-08-2011 |
20110302591 | SYSTEM AND METHOD FOR DATA SYNCHRONIZATION FOR A COMPUTER ARCHITECTURE FOR BROADBAND NETWORKS - A computer architecture and programming model for high speed processing over broadband networks are provided. The architecture employs a consistent modular structure, a common computing module and uniform software cells. The common computing module includes a control processor, a plurality of processing units, a plurality of local memories from which the processing units process programs, a direct memory access controller and a shared main memory. A synchronized system and method for the coordinated reading and writing of data to and from the shared main memory by the processing units also are provided. A processing system for processing computer tasks is also provided. A first processor is of a first processor type and a number of second processors are of a second processor type. One of the second processors manages process scheduling of computing tasks by providing tasks to at least one of the first and second processors. | 12-08-2011 |
20110307898 | METHOD AND APPARATUS FOR EFFICIENTLY DISTRIBUTING HARDWARE RESOURCE REQUESTS TO HARDWARE RESOURCE OFFERS - A method and an apparatus provide for efficiently distributing hardware resource requests to hardware resource offers. Applying the method and apparatus, an allocation of hardware resources is possible in a highly efficient and effective way. Therefore, a system architecture is introduced, which provides components for determining negotiation approaches as well as splitting complex allocation problems into single and independent allocation problems. The method and apparatus find application in a variety of technical domains and especially in the domain of hardware resource allocation as well as agent technology. | 12-15-2011 |
20110307899 | COMPUTING CLUSTER PERFORMANCE SIMULATION USING A GENETIC ALGORITHM SOLUTION - Illustrated is a system and method that includes identifying a search space based upon available resources, the search space to be used to satisfy a resource request. The system and method also includes selecting from the search space an initial candidate set, each candidate of the candidate set representing a potential resource allocation to satisfy the resource request. The system and method further includes assigning a fitness score, based upon a predicted performance, to each member of the candidate set. The system and method also includes transforming the candidate set into a fittest candidate set, the fittest candidate set having a best predicted performance to satisfy the resource request. | 12-15-2011 |
20110307900 | CHANGING STREAMING MEDIA QUALITY LEVEL BASED ON CURRENT DEVICE RESOURCE USAGE - Streaming media is received from a source system. A current overall resource usage of a resource of the device (such as a CPU or memory of the device) is obtained. A check is made as to whether the current overall resource usage exceeds a resource threshold value. If the current overall resource usage exceeds the resource threshold value, then an indication is provided to the source system to reduce a quality level of the streaming media. The streaming media is received from the source system at the reduced quality level until there is sufficient resource capacity at the device to increase the quality level. | 12-15-2011 |
20110307901 | SYSTEM AND METHOD FOR INTEGRATING CAPACITY PLANNING AND WORKLOAD MANAGEMENT - A system for integrating resource capacity planning and workload management, implemented as programming on a suitable computing device, includes a simulation module that receives data related to execution of the workloads, resource types, numbers, and capacities, and generates one or more possible resource configuration options; a modeling module that receives the resource configuration options and determines, based on one or more specified criteria, one or more projected resource allocations among the workloads; and a communications module that receives the projected resource allocations and presents the projected resource allocations for review by a user. | 12-15-2011 |
20110307902 | ASSIGNING TASKS IN A DISTRIBUTED SYSTEM - A method and apparatus are provided for assigning tasks in a distributed system. The method comprises indicating to one or more remote systems in the distributed system that a task is available for processing based on a list identifying the one or more remote systems. The method further comprises receiving at least one response from the one or more remote systems capable of performing the task based on the indication. The method comprises allowing at least one of the remote systems to perform the task based on the at least one received response. | 12-15-2011 |
20110314478 | Allocation and Control Unit - An allocation and control unit for allocating execution threads for a task to a plurality of auxiliary processing units and for controlling the parallel execution of said execution threads by said auxiliary processing units, the task being executed in a sequential manner by a main processing unit. The allocation and control unit includes means for managing auxiliary logical processing units, means for managing auxiliary physical processing units each corresponding to an auxiliary processing unit, and means for managing the auxiliary processing units. The means for managing the auxiliary processing units include means for allocating an auxiliary logical processing unit to an execution thread to be executed, and means for managing the correspondence between the auxiliary logical processing units and the auxiliary physical processing units. The auxiliary processing units execute in parallel the execution threads for the task by way of the auxiliary logical processing units, which are allocated as late as possible and freed as early as possible. | 12-22-2011 |
20110321055 | TRANSPORTATION ASSET MANAGER - Systems and methods of visualizing assets are disclosed that include registering assets into a system, creating a correlation between the location of the assets and a physical representation of the operational area, determining the status and class of the assets, selecting at least one asset, and exerting control over the at least one asset. | 12-29-2011 |
20120005685 | Information Processing Grid and Method for High Performance and Efficient Resource Utilization - System and method are proposed for intelligent assignment of submitted information processing jobs to computing resources in an information processing grid based upon real-time measurements of job behavior and predictive analysis of job throughput and computing resource consumption of the correspondingly generated workloads. The job throughput and computing resource utilization are measured and analyzed in multiple parametric dimensions. The analyzed workload may work with a job scheduling system to provide optimized job dispatchment to computing resources across the grid. Application of a parametric weighting system to the parametric dimensions makes the optimization system dynamic and flexible. Through adjustment of these parametric weights, the focus of the optimization can be adjusted dynamically to support the immediate operational goals of the system as a whole. | 01-05-2012 |
20120011516 | Method for the administration of resources - A method for the administration of resources, in which classes or instances, respectively, are assigned to the resources, and a program receives a rule assigned to the class or instance, respectively, and applies it to the resource. It is ensured that only rules assigned to the class or instance, respectively, are applied to the resource. In alternative methods, only rules that were accepted by a verification rule assigned to the resource are applied to the resource. | 01-12-2012 |
20120011517 | GENERATION OF OPERATIONAL POLICIES FOR MONITORING APPLICATIONS - Example embodiments relate to generation of operational policies for monitoring applications. In example embodiments, data generated based on decomposition of a Service Level Agreement (SLA) is received. Furthermore, in example embodiments, an operational policy is generated using the decomposition data. The operational policy may be used to control operation of a monitoring application. | 01-12-2012 |
20120011518 | SHARING WITH PERFORMANCE ISOLATION BETWEEN TENANTS IN A SOFTWARE-AS-A-SERVICE SYSTEM - An apparatus hosting a multi-tenant software-as-a-service (SaaS) system maximizes the resource-sharing capability of the SaaS system. The apparatus receives service requests from multiple users belonging to different tenants of the multi-tenant SaaS system. The apparatus partitions the resources in the SaaS system into different resource groups. Each resource group handles a category of the service requests. The apparatus estimates the costs of the service requests of the users. The apparatus dispatches service requests to resource groups according to the estimated costs, whereby the resources are shared among the users without impacting each other. | 01-12-2012 |
20120017218 | DYNAMIC RUN TIME ALLOCATION OF DISTRIBUTED JOBS WITH APPLICATION SPECIFIC METRICS - A job optimizer dynamically changes the allocation of processing units on a multi-nodal computer system. A distributed application is organized as a set of connected processing units. The arrangement of the processing units is dynamically changed at run time to optimize system resources and interprocess communication. A collector collects application-specific metrics determined by application plug-ins. A job optimizer analyzes the collected metrics and determines how to dynamically arrange the processing units within the jobs. The job optimizer may decide to combine multiple processing units into a job on a single node when there is overutilization of interprocess communication between processing units. Alternatively, the job optimizer may decide to split a job's processing units into multiple jobs on different nodes where one or more of the processing units are overutilizing the resources on the node. | 01-19-2012 |
20120017219 | Multi-CPU Domain Mobile Electronic Device and Operation Method Thereof - A multi-CPU domain mobile electronic device includes: a first CPU domain comprising at least a first migration agent unit, the first migration agent unit detecting a task migration condition, determining whether to migrate a migratable task, and sending an associated migration event; and a second CPU domain comprising at least a second migration agent unit, the second migration agent unit receiving the migratable task from the first migration agent unit. | 01-19-2012 |
20120023503 | MANAGEMENT OF COMPUTING RESOURCES FOR APPLICATIONS - The subject matter of this disclosure can be implemented in, among other things, a method. In these examples, the method includes receiving a resource request message to obtain access to a computing resource, and storing the resource request message in a data repository that stores a collection of resource request messages received from a group of applications executing on the computing device. The method may also include, responsive to determining that the resource request message received from a first application has a highest priority of the collection of resource request messages, determining whether a second application currently has access to the computing resource, issuing a resource lost message to the second application to indicate that the second application has lost access to the computing resource, and issuing a resource request granted message to the first application, such that the first application obtains access to the computing resource. | 01-26-2012 |
20120030683 | Method of forming a personal mobile grid system and resource scheduling thereon - The method of forming a personal mobile grid system and resource scheduling thereon provides for the formation of a personal network, a personal area network or the like having a computational grid superimposed thereon. Resource scheduling in the personal mobile grid is performed through an optimization model based upon the nectar acquisition process of honeybees. | 02-02-2012 |
20120030684 | RESOURCE ALLOCATION - At least one candidate allocation time period is determined according to a resource benefit time step function. The resource benefit does not vary with time in the at least one candidate allocation time period. Resources and relations between the resources are converted into sub-resource groups according to a resource cost time step function. Each of the sub-resource groups comprises sub-resources that correspond to the resources, and relations between the sub-resources. The resource benefits and resource costs of the sub-resources do not vary with time. With respect to the at least one candidate allocation time period, the sub-resource groups are input into a resource schedule optimizer to obtain optimized results with respect to the sub-resource groups. An optimized result, with respect to the at least one candidate allocation time period, is obtained from the optimized results with respect to the sub-resource groups. | 02-02-2012 |
20120030685 | SYSTEM AND METHOD FOR PROVIDING DYNAMIC PROVISIONING WITHIN A COMPUTE ENVIRONMENT - The disclosure relates to systems, methods and computer-readable media for dynamically provisioning resources within a compute environment. The method aspect of the disclosure comprises analyzing a queue of jobs to determine an availability of compute resources for each job, determining an availability of a scheduler of the compute environment to satisfy all service level agreements (SLAs) and target service levels within a current configuration of the compute resources, determining possible resource provisioning changes to improve SLA fulfillment, determining a cost of provisioning, and, if provisioning changes improve overall SLA delivery, re-provisioning at least one compute resource. | 02-02-2012 |
20120036513 | METHOD TO ASSIGN TRAFFIC PRIORITY OR BANDWIDTH FOR APPLICATION AT THE END USERS-DEVICE - Provided herewith is a resource reservation method in a network, in which the allocation of network bandwidth to each application connected to the network is determined by the end user. | 02-09-2012 |
20120036514 | METHOD AND APPARATUS FOR A COMPILER AND RELATED COMPONENTS FOR STREAM-BASED COMPUTATIONS FOR A GENERAL-PURPOSE, MULTIPLE-CORE SYSTEM - A method and system of compiling and linking source stream programs for efficient use of multi-node devices. The system includes a compiler, a linker, a loader and a runtime component. The process converts a source-code stream program into compiled object code that is used with a programmable node-based computing device having a plurality of processing nodes coupled to each other. The programming modules include stream statements for input values and output values, in the form of sources and destinations, for at least one of the plurality of processing nodes, and stream statements that determine the streaming flow of values for the at least one of the plurality of processing nodes. The compiler converts the source-code stream-based program into object modules, object module instances and executables. The linker matches the object module instances to at least one of the multiple cores. The loader loads the tasks required by the object modules in the nodes and configures the nodes matched with the object module instances. The runtime component runs the converted program. | 02-09-2012 |
20120042319 | Scheduling Parallel Data Tasks - A method for allocating parallel, independent data tasks includes receiving data tasks, each of the data tasks having a penalty function, and determining a generic ordering of the data tasks according to the penalty functions, wherein the generic ordering includes solving an aggregate objective function of the penalty functions. The method further includes determining a schedule of the data tasks, given the generic ordering, which packs the data tasks to be performed. | 02-16-2012 |
20120042320 | SYSTEM AND METHOD FOR DYNAMIC RESCHEDULING OF MULTIPLE VARYING RESOURCES WITH USER SOCIAL MAPPING - A system and method for scheduling resources includes a memory storage device having a resource data structure stored therein which is configured to store a collection of available resources, time slots for employing the resources, dependencies between the available resources and social map information. A processing system is configured to set up a communication channel between users, between a resource owner and a user or between resource owners to schedule users in the time slots for the available resources. The processing system employs social mapping information of the users or owners to assist in filtering the users and owners and initiating negotiations for the available resources. | 02-16-2012 |
20120042321 | DYNAMICALLY ALLOCATING META-DATA REPOSITORY RESOURCES - The apparatus for dynamically allocating resources used in a meta-data repository includes a tracking module to track resources allocated to a meta-data repository, the meta-data repository comprising a repository that stores meta-data related to a computer system. An adjustment evaluation module evaluates repository usage of the resources allocated to the meta-data repository and ascertains whether a resource adjustment is desirable. An adjustment determination module determines desirable adjustments to the resources available to the meta-data repository. An allocation module adjusts resources allocated to the meta-data repository in accordance with the adjustment determination module. Adjusting resources includes changing a number of strings allocated to handle concurrent meta-data repository I/O requests. | 02-16-2012 |
20120047511 | THROTTLING STORAGE INITIALIZATION FOR DATA DESTAGE - Method, system, and computer program product embodiments for throttling storage initialization for data destage in a computing storage environment are provided. An implicit throttling operation is performed by limiting a finite resource of a plurality of finite resources available to a background initialization process, the background initialization process adapted for performing the storage initialization ahead of a data destage request. If a predefined percentage of the plurality of finite resources is utilized, at least one of the plurality of finite resources is deferred to a foreground process that is triggered by the data destage request, the foreground process adapted to perform the storage initialization ahead of a data destage performed pursuant to the data destage request. An explicit throttling operation is performed by examining a snapshot of storage activity occurring outside the background initialization process. | 02-23-2012 |
20120047512 | METHODS, SYSTEMS, AND COMPUTER PROGRAM PRODUCTS FOR SELECTING A RESOURCE BASED ON A MEASURE OF A PROCESSING COST - Methods and systems are described for selecting a resource based on a measure of a processing cost. Resource information is received identifying a first resource and a second resource for processing by a program component. One or more of a first measure of a specified processing cost for the processing of the first resource and a second measure of the processing cost for the processing of the second resource is determined. One of the first resource and the second resource is selected based on at least one of the first measure and the second measure. The selected one of the first resource and the second resource is identified to the program component for processing. | 02-23-2012 |
20120047513 | WORK PROCESSING APPARATUS FOR SCHEDULING WORK, CONTROL APPARATUS FOR SCHEDULING ACTIVATION, AND WORK SCHEDULING METHOD IN A SYMMETRIC MULTI-PROCESSING ENVIRONMENT - A work scheduling technology in a symmetric multi-processing (SMP) environment is provided. A work scheduling function for an SMP environment is implemented in a work processing apparatus, thereby reducing scheduling overhead, enhancing the efficiency of CPU resource use, and improving CPU performance. | 02-23-2012 |
20120060165 | CLOUD PIPELINE - Cloud service providers are selected to perform a data processing job based on information about the cloud service providers and criteria of the job. A plan for a cloud pipeline for performing the job is designed based on the information about the cloud service providers. The plan comprises processing stages each of which indicates processing upon a subset of a data set of the job. Allocated resources of the set of cloud service providers are mapped to the processing stages. Instructions and software images based on the plan are generated. The instructions and the software images implement the cloud pipeline for performing the data processing job. The instructions and the software images are transmitted to machines of the cloud service providers. The machines and the performing of the job are monitored. If the monitoring detects a failure, then the cloud pipeline is adapted to the failure. | 03-08-2012 |
20120060166 | Day management using an integrated calendar - A method and system for day management using an integrated calendar is disclosed. A user inputs his time available during a period and enters details on time-specific events to be performed during that period. The system fetches the information entered by the user and stores the details in the integrated calendar. Tasks to be performed are stored on a task list in order of priority as entered by the user. The system determines the free time available to the user for performing the tasks by subtracting the time allocated for the events from the time available. Further, tasks are allocated time schedules by allocating that free time to the duration of each task, as per priority. If a task cannot be performed in a particular time slot within the time period, the task may be split into multiple smaller tasks and performed at different time slots that are available. | 03-08-2012 |
20120060167 | METHOD AND SYSTEM OF SIMULATING A DATA CENTER - A system and method for optimizing the dynamic behavior of a multi-tier data center is described, wherein the data center is simulated along with the resources in the form of hardware and software, and the transaction process workloads are simulated to test the resources or the transaction process. The system requires the client computing device and a backend server to have the capabilities to host simulated hardware and complex software application platforms, and to perform large-scale simulations using these resources. The method includes securing parameter inputs from the client that define the data center resources and the transaction process to be tested, generating various workload simulations, testing the simulations and provisioning the resources, thereby obtaining an optimized dynamic simulation of the data center resources and the transaction processes. | 03-08-2012 |
20120060168 | VIRTUALIZATION SYSTEM AND RESOURCE ALLOCATION METHOD THEREOF - A virtualization system for supporting at least two operating systems and resource allocation method of the virtualization system are provided. The method includes allocating resources to the operating systems, calculating, when one of the operating systems is running, workloads for each operating system, and adjusting resources allocated to the operating systems according to the calculated workloads. The present invention determines the workloads of a plurality of operating systems running in the virtualization system and allocates time resources dynamically according to the variation of the workloads. | 03-08-2012 |
20120060169 | SYSTEMS AND METHODS FOR RESOURCE CONTROLLING - A resource controller that includes a first buffer configured to store requests of a first predefined category having a first priority. In addition, the resource controller includes at least a second buffer configured to store requests of a second predefined category having a second priority where the first priority is set such that processing requests of the first category has priority over processing the requests of the second category. Also, the resource controller includes a mechanism configured to block the requests of the first category when a predefined condition is met. | 03-08-2012 |
20120060170 | Method and scheduler in an operating system - Method and scheduler in an operating system for scheduling processing resources on a multi-core chip. The multi-core chip comprises a plurality of processor cores. The operating system is configured to schedule processing resources to an application to be executed on the multi-core chip. The method comprises allocating a plurality of processor cores to the application. Also, when a sequential portion of the application is executing on only one processor core, the method comprises switching off another processor core allocated to the application that is not executing the sequential portion. In addition, the method comprises increasing the frequency of the one processor core executing the application to a second frequency, such that the processing speed is increased more than predicted by Amdahl's law. | 03-08-2012 |
20120066686 | DEMAND RESPONSE SYSTEM INCORPORATING A GRAPHICAL PROCESSING UNIT - A system and approach for utilizing a graphical processing unit in a demand response program. A demand response server may have numerous demand response resources connected to it. The server may have a main processor and an associated memory, and a graphic processing unit connected to the main processor and memory. The graphic processing unit may have numerous cores which incorporate processing units and associated memories. The cores may concurrently process demand response information and rules of the numerous resources, respectively, and provide signal values to the main processor. The main processor may then provide demand response signals based at least partially on the signal values, to each of the respective demand response resources. | 03-15-2012 |
20120066687 | RESOURCE MANAGEMENT SYSTEM - A resource management system for managing resources in a computing and/or communications resource infrastructure is disclosed. The system comprises a database for storing a model of the resource infrastructure. The database defines a set of resources provided by the infrastructure; a set of software applications operating within the infrastructure and utilising resources; and associations between given applications in the model and given resources to indicate utilisation of the given resources by the given applications. The model can be used to perform resource utilisation analysis and failure impact analysis. | 03-15-2012 |
20120072917 | METHOD AND APPARATUS FOR DISTRIBUTING COMPUTATION CLOSURES - An approach is provided for backend based computation closure oriented distributed computing. A computational processing support infrastructure receives a request for specifying one or more processes executing on a device for distribution over a computation space. The computational processing support infrastructure also causes, at least in part, serialization of the one or more processes as one or more closure primitives, the one or more closure primitives representing computation closures of the one or more processes. The computational processing support infrastructure further causes, at least in part, distribution of the one or more closure primitives over the computation space based, at least in part, on a cost function. | 03-22-2012 |
20120072918 | GENERATION OF GENERIC UNIVERSAL RESOURCE INDICATORS - Various arrangements for creating and using generic universal resource indicators are presented. To create a generic universal resource indicator, one or more parameters of a universal resource indicator may be identified. An interface that permits a parameter of the one or more parameters to be selected and mapped to a variable may be presented. A selection of the parameter for mapping may be received. An indication of the variable to map to the parameter of the selection may also be received. The generic universal resource indicator having a generic parameter corresponding to the parameter of the selection may be created. | 03-22-2012 |
20120079492 | VECTOR THROTTLING TO CONTROL RESOURCE USE IN COMPUTER SYSTEMS - Embodiments are provided for managing the system performance of resources performing tasks in response to task requests from tenants. In one aspect, a system is provided that comprises at least one resource configured to perform at least one admitted task with an impact under the control of a computer system. The computer system provides services to more than one tenant. The computer system comprises a strategist configured to assess the impact of the admitted task to create a cost function vector containing multiple cost function specifications and a budget policy vector containing multiple budget policies, and an actuator. The actuator receives the cost function vector and the budget policy vector from the strategist, receives a task request from one of the more than one tenants, and calculates cost functions based upon the cost function vector to predict the impact of the task request on the resources for each of the task requests. The actuator throttles the task requests based upon the budget policies for the impact on the resources to create at least one of the admitted task performed by the resource and a delayed task request. | 03-29-2012 |
20120079493 | USE OF CONSTRAINT-BASED LINEAR PROGRAMMING TO OPTIMIZE HARDWARE SYSTEM USAGE - A computer implemented method, system, and/or computer program product optimizes systems usage. A work request is decomposed into units of work. A processor selectively sends each unit of work from the work request to either a first system or a second system for execution, depending on a work constraint on each unit of work and/or system constraints on the first and second systems. | 03-29-2012 |
20120079494 | System And Method For Maximizing Data Processing Throughput Via Application Load Adaptive Scheduling And Content Switching - The invention enables dynamic, software application load adaptive optimization of data processing capacity allocation on a shared processing hardware among a set of application software programs sharing said hardware. The invented techniques allow multiple application software programs to execute in parallel on a shared CPU, with application ready-to-execute status adaptive scheduling of CPU cycles and context switching between applications done in hardware logic, without a need for system software involvement. The invented data processing system hardware dynamically optimizes allocation of its processing timeslots among a number of concurrently running processing software applications, in a manner adaptive to realtime processing loads of the applications, without using the CPU capacity for any non-user overhead tasks. The invention thereby achieves continuously maximized data processing throughput for variable-load processing applications, while ensuring that any given application gets at least its entitled share of the processing system capacity whenever so demanded. | 03-29-2012 |
20120079495 | MANAGING ACCESS TO A SHARED RESOURCE IN A DATA PROCESSING SYSTEM - Processes requiring access to shared resources are adapted to issue a reservation request, such that a place in a resource access queue, such as one administered by means of a semaphore system, can be reserved for the process. The reservation is issued by a Reservation Management module at a time calculated to ensure that the reservation reaches the head of the queue as closely as possible to the moment at which the process actually needs access to the resource. The calculation may be made on the basis of priority information concerning the process itself, and statistical information gathered concerning historical performance of the queue. | 03-29-2012 |
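The timing calculation described in the entry above (issue the reservation so it reaches the head of the queue just as the process actually needs the resource) can be sketched as follows. This is a hypothetical Python illustration; the `priority_margin` knob and the use of a simple mean over historical waits are assumptions, not details from the abstract:

```python
import statistics

def reservation_issue_time(need_time, wait_history, priority_margin=0.0):
    # Estimate how long a reservation typically waits in the queue from the
    # gathered history, then back off from the moment the process actually
    # needs the resource. `priority_margin` (an assumed knob) adds extra
    # slack for higher-priority processes.
    expected_wait = statistics.mean(wait_history) if wait_history else 0.0
    return need_time - expected_wait - priority_margin

# The process needs the resource at t=100; past queue waits averaged 5 units.
print(reservation_issue_time(100.0, [4.0, 6.0, 5.0]))  # 95.0
```

With no history the reservation is simply issued at the time of need; richer statistics (percentiles, per-priority histories) would slot in where the mean is taken.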
20120079496 | COMPUTING SYSTEM AND JOB ALLOCATION METHOD - A computing system includes a plurality of computing apparatuses, a job allocation information storage unit, a position information storage unit, and a job allocation unit. The job allocation information storage unit stores job allocation information indicating the job allocation status of each of the plurality of computing apparatuses. The job allocation status is either an active state or an inactive state. The position information storage unit stores position information indicating relative positions of the plurality of computing apparatuses. The job allocation unit refers to the job allocation information and the position information, selects a candidate inactive computing apparatus on the basis of a distance between each pair of an inactive computing apparatus and an active computing apparatus, and allocates a job to the candidate inactive computing apparatus. | 03-29-2012 |
20120079497 | Predicting Resource Requirements for a Computer Application - A resource consumption model is created for a software application, making it possible to predict the resource requirements of the application in different states. The model has a structure corresponding to that of the application itself, and is interpreted to some degree in parallel with the application, but each part of the model is interpreted in less time than it takes to complete the corresponding part of the application, so that resource requirement predictions are available in advance. The model may be interpreted in a look-ahead mode, wherein different possible branches of the model are interpreted so as to obtain resource requirement predictions for the application after completion of the present step. The model may be derived automatically from the application at design or compilation, and populated by measuring the requirements of the application in response to test scenarios in a controlled environment. | 03-29-2012 |
20120079498 | METHOD AND APPARATUS FOR DYNAMIC RESOURCE ALLOCATION OF PROCESSING UNITS - A method and apparatus for dynamic resource allocation in a system having at least one processing unit are disclosed. The method of dynamic resource allocation includes receiving information on a task to which resources are allocated and partitioning the task into one or more task parallel units; converting the task into a task block having a polygonal shape according to expected execution times of the task parallel units and dependency between the task parallel units; allocating resources to the task block by placing the task block on a resource allocation plane having a horizontal axis of time and a vertical axis of processing units; and executing the task according to the resource allocation information. Hence, CPU resources and GPU resources in the system can be used in parallel at the same time, increasing overall system efficiency. | 03-29-2012 |
20120084785 | RESOURCE RESERVATION - Technologies are generally described for systems and methods for requesting a reservation between a first and a second processor. In some examples, the method includes receiving a reservation request at the second processor from the first processor. The reservation request may include an identification of a resource in communication with the second processor, a time range, first key information relating to the first processor, and a first signature of the first processor based on the first key information. In some examples, the method includes verifying, by the second processor, the reservation request based on the first key information and the first signature. In some examples, the method includes determining, by the second processor, whether to accept the reservation request. | 04-05-2012 |
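The abstract above leaves the key material and signature scheme unspecified; a minimal sketch using an HMAC over the resource identifier and time range (a stand-in for illustration, not the patent's actual mechanism) might look like:

```python
import hashlib
import hmac

def sign_request(key, resource_id, time_range):
    # First processor: sign the reservation fields with its key material.
    msg = f"{resource_id}|{time_range[0]}|{time_range[1]}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify_request(key, resource_id, time_range, signature):
    # Second processor: recompute the signature over the received fields and
    # compare in constant time before deciding whether to accept.
    expected = sign_request(key, resource_id, time_range)
    return hmac.compare_digest(expected, signature)

key = b"shared-key-material"  # assumed pre-shared key, for illustration only
sig = sign_request(key, "sensor-7", (1000, 2000))
print(verify_request(key, "sensor-7", (1000, 2000), sig))  # True
```

Tampering with any signed field (here, the time range) makes verification fail, which is the property the second processor relies on when deciding whether to accept the reservation request.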
20120084786 | JOB EXECUTING SYSTEM, JOB EXECUTING DEVICE AND COMPUTER-READABLE MEDIUM - An image forming device includes a monitoring service performing unit and a service process performing instructing unit. The monitoring service performing unit acquires operation state information including index data that represents a service processing function mounted in the corresponding server and an operation state of the corresponding server from each server by starting a monitoring service when an accepted job is performed. The service process performing instructing unit instructs a low-load server to start a corresponding service processing function when the load on a server in which the service processing function used for executing the job is mounted is determined to be high. The server acquires the corresponding service processing function from the server in which the corresponding service processing function is mounted when being instructed to start an operation and thereafter performs the corresponding service process in accordance with the performance instruction transmitted from the image forming device. | 04-05-2012 |
20120084787 | APPARATUS AND METHOD FOR CONTROLLING A RESOURCE UTILIZATION POLICY IN A VIRTUAL ENVIRONMENT - An apparatus and method for controlling a resource utilization policy in a virtual environment are provided. The apparatus may increase network throughput by dynamically adjusting the resource utilization policies of a driver domain that can directly access a shared device, and a guest driver that cannot directly access the shared device. In addition, the apparatus may improve the efficiency of the use of CPU resources by appropriately adjusting the CPU occupancy rates of the driver and guest domains. | 04-05-2012 |
20120089986 | PROCESS POOL OF EMPTY APPLICATION HOSTS TO IMPROVE USER PERCEIVED LAUNCH TIME OF APPLICATIONS - Various embodiments enable a device to create a pool of at least one empty application. An empty application can be configured to contain resources that are common across one or more other applications and initialize the resources for the one or more other applications effective to reduce startup time of the other applications. In one or more embodiments, an empty application can further be populated with the one or more other applications effective to cause the one or more other applications to execute. Alternately or additionally, a device can be monitored for an idle state, and, upon determining the device is in the idle state, at least one empty application can be created. | 04-12-2012 |
20120102498 | RESOURCE MANAGEMENT USING ENVIRONMENTS - Apparatus, systems, and methods may operate to receive time-based reservation requests for predefined resource environments comprising resource types that include hardware, software, and data, among others. Additional activities may include detecting a conflict between at least one of the resource types in a first one of the predefined resource environments and at least one of the resource types in a second one of the predefined resource environments, and resolving the conflict in favor of the first one of the predefined resource environments by reserving additional resource elements in a cloud computing architecture and/or reserving a less capable version of the second one of the predefined resource environments. Additional apparatus, systems, and methods are disclosed. | 04-26-2012 |
20120102499 | OPTIMIZING THE PERFORMANCE OF HYBRID CPU SYSTEMS BASED UPON THE THREAD TYPE OF APPLICATIONS TO BE RUN ON THE CPUs - A hybrid CPU system wherein the plurality of processors forming the hybrid system are initially undifferentiated by type or class. Responsive to the sampling of the threads of a received and loaded computer application to be executed, the function of at least one of the processors is changed so that the threads of the sampled application may be most effectively processed/run on the hybrid system. | 04-26-2012 |
20120102500 | NUMA AWARE SYSTEM TASK MANAGEMENT - Task management in a Non-Uniform Memory Access (NUMA) architecture having multiple processor cores is aware of the NUMA topology. As a result, memory access penalties are reduced. Each processor is assigned to a zone allocated to a memory controller. The zone assignment is based on a cost function. In a default mode a thread of execution attempts to perform work in a queue of the same zone as the processor to minimize memory access penalties. Additional work stealing rules may be invoked if there is no work for a thread to perform from its default zone queue. | 04-26-2012 |
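The default-zone-first rule with work stealing as the fallback can be sketched like this (a hypothetical Python illustration; the cost function that assigns processors to zones, and any ordering among steal victims, are not modelled):

```python
from collections import deque

class NumaScheduler:
    def __init__(self, zones):
        # One work queue per NUMA zone (zone = cores sharing a memory controller).
        self.queues = {zone: deque() for zone in zones}

    def submit(self, zone, task):
        self.queues[zone].append(task)

    def next_task(self, home_zone):
        # Default mode: take work from the thread's own zone queue, so the
        # task's memory is likely behind the local memory controller.
        if self.queues[home_zone]:
            return self.queues[home_zone].popleft()
        # Work stealing: only if the home queue is empty, take remote work
        # and pay the cross-zone memory access penalty.
        for zone, queue in self.queues.items():
            if zone != home_zone and queue:
                return queue.popleft()
        return None

sched = NumaScheduler(["zone0", "zone1"])
sched.submit("zone0", "local-task")
sched.submit("zone1", "remote-task")
print(sched.next_task("zone0"))  # local-task
print(sched.next_task("zone0"))  # remote-task (stolen from zone1)
```

A thread working in `zone0` drains its own queue before stealing, which is exactly the priority the abstract describes.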
20120110587 | Methods and apparatuses for accumulating and distributing processing power - Calculating and distributing resources of at least one electronic device over a network. | 05-03-2012 |
20120110588 | UNIFIED RESOURCE MANAGER PROVIDING A SINGLE POINT OF CONTROL - An integrated hybrid system is provided. The hybrid system includes compute components of different types and architectures that are integrated and managed by a single point of control to provide federation and the presentation of the compute components as a single logical computing platform. | 05-03-2012 |
20120110589 | TECHNIQUE FOR EFFICIENT PARALLELIZATION OF SOFTWARE ANALYSIS IN A DISTRIBUTED COMPUTING ENVIRONMENT THROUGH INTELLIGENT DYNAMIC LOAD BALANCING - A method for verifying software includes monitoring a resource queue and a job queue, determining whether the resource queue and the job queue contain entries, and if both the resource queue and the job queue contain entries, then applying a scheduling policy to select a job, selecting a worker node as a best match for the characteristics of the job among the resource queue entries, assigning the job to the worker node, assigning parameters to the worker node for a job creation policy for creating new jobs in the job queue while executing the job, and assigning parameters to the worker node for a termination policy for halting execution of the job. The resource queue indicates worker nodes available to verify a portion of code. The job queue indicates one or more jobs to be executed by a worker node. A job includes a portion of code to be verified. | 05-03-2012 |
20120110590 | EFFICIENT PARTIAL COMPUTATION FOR THE PARALLELIZATION OF SOFTWARE ANALYSIS IN A DISTRIBUTED COMPUTING ENVIRONMENT - An electronic device includes a memory, a processor coupled to the memory, and one or more policies stored in the memory. The policies include a resource availability policy determining whether the processor should continue evaluating the software, and a job availability policy determining whether new jobs will be created for unexplored branches. The processor is configured to receive a job to be executed, evaluate the software, select a branch to explore and store an initialization sequence of one or more unexplored branches if a branch in the software is encountered, evaluate the job availability policy, decide whether to create a job for each of the unexplored branches based on the job availability policy, evaluate the resource availability policy, and decide whether to continue evaluating the software at the branch selected to explore based on the resource availability policy. The job indicates a portion of software to be evaluated. | 05-03-2012 |
20120110591 | SCHEDULING POLICY FOR EFFICIENT PARALLELIZATION OF SOFTWARE ANALYSIS IN A DISTRIBUTED COMPUTING ENVIRONMENT - A method for verifying software includes accessing a job queue, accessing a resource queue, and assigning a job from the job queue to a resource from the resource queue if an addition is made to the job queue or to the resource queue. The job queue includes an indication of one or more jobs to be executed by a worker node, each job indicating a portion of a code to be verified. The resource queue includes an indication of one or more worker nodes available to verify a portion of software. The resource is selected by determining the best match for the characteristics of the selected job among the resources in the resource queue. | 05-03-2012 |
20120110592 | Autonomic Self-Tuning Of Database Management System In Dynamic Logical Partitioning Environment - An automated monitor monitors one or more resource parameters in a logical partition running a database application in a logically partitioned data processing host. The monitor initiates dynamic logical partition reconfiguration in the event that the parameters vary from predetermined parameter values. In particular, the monitor can initiate removal of resources if one of the resource parameters is being underutilized and initiate addition of resources if one of the resource parameters is being overutilized. The monitor can also calculate an amount of resources to be removed or added. The monitor can interact directly with a dynamic logical partition reconfiguration function of the data processing host or it can utilize an intelligent intermediary that listens for a partition reconfiguration suggestion from the monitor. In the latter configuration, the listener can determine where available resources are located and attempt to fully or partially satisfy the resource needs suggested by the monitor. | 05-03-2012 |
20120110593 | System and Method for Migration of Data - Systems and methods for data migration are disclosed. A method may include allocating a destination storage resource to receive migration data. The method may also include assigning the destination storage resource a first identifier value equal to an identifier value associated with a source storage resource. The method may additionally include assigning the source storage resource a second identifier value different than the first identifier value. The method may further include migrating data from the source storage resource to the destination storage resource. | 05-03-2012 |
20120124592 | METHODS OF PERSONALIZING SERVICES VIA IDENTIFICATION OF COMMON COMPONENTS - Methods and arrangements for more efficiently enhancing the personalization and customization of services while avoiding an undue overburdening of personnel, infrastructure or resources. An input service component comprising a plurality of tasks is assimilated, similarity among the tasks is determined, and output service components are routed to resources based on similarity among the tasks, the service components each comprising a subgroup of similar tasks. | 05-17-2012 |
20120131589 | METHOD FOR SCHEDULING UPDATES IN A STREAMING DATA WAREHOUSE - A method for scheduling atomic update jobs to a streaming data warehouse includes allocating execution tracks for executing the update jobs. The tracks may be assigned a portion of available processor utilization and memory. A database table may be associated with a given track. An update job directed to the database table may be dispatched to the given track for the database table, when the track is available. When the track is not available, the update job may be executed on a different track. Furthermore, pending update jobs directed to common database tables may be combined and separated in certain transient conditions. | 05-24-2012 |
20120131590 | MANAGING VIRTUAL FUNCTIONS OF AN INPUT/OUTPUT ADAPTER - A computer implemented method may include identifying allocations for each virtual function of a plurality of virtual functions that are provided via an input/output adapter. The computer implemented method may further include determining a range associated with each group of a plurality of groups based on the identified allocations. The computer implemented method may also include associating each virtual function with a group of the plurality of groups based on the range associated with the group. Where at least one group of the plurality of groups is empty, and where one or more groups of the plurality of groups has two or more virtual functions associated with the one or more groups, the computer implemented method may include distributing the two or more virtual functions to the at least one empty group. The computer implemented method may further include transferring the plurality of virtual functions from each group to a corresponding category at the input/output adapter. | 05-24-2012 |
20120131591 | METHOD AND APPARATUS FOR CLEARING CLOUD COMPUTE DEMAND - Provided are systems and methods for simplifying cloud compute markets. A compute marketplace can be configured to determine, automatically, attributes and/or constraints associated with a job without requiring the consumer to provide them. The compute marketplace provides a clearing house for excess compute resources which can be offered privately or publicly. The compute environment can be further configured to optimize job completion across multiple providers with different execution formats, and can also factor operating expense of the compute environment into the optimization. The compute marketplace can also be configured to monitor jobs and/or individual job partitions while their execution is in progress. The compute marketplace can be configured to dynamically redistribute jobs/job partitions across providers when, for example, cycle pricing changes during execution, providers fail to meet defined constraints, excess capacity becomes available, compute capacity becomes unavailable, among other options. | 05-24-2012 |
20120131592 | PARALLEL COMPUTING METHOD FOR PARTICLE BASED SIMULATION AND APPARATUS THEREOF - Disclosed are a parallel computing method for particle based simulation that may decrease a calculation delay due to data communication by simultaneously performing the data communication and a simulation calculation and increasing parallelism of a task, and an apparatus thereof. The parallel computing method for particle based simulation according to an exemplary embodiment of the present invention may include decomposing the whole calculation domain of a manager node into a plurality of sub-domains based on a grid macro-cell based orthogonal recursive bisection (ORB) method; allocating the decomposed sub-domains to worker nodes; and performing load balancing with respect to the worker nodes. | 05-24-2012 |
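Orthogonal recursive bisection itself is a standard decomposition technique; a minimal sketch over 2-D particle positions is given below (hypothetical Python, splitting raw particle lists; the patent's grid macro-cell granularity and its load-balancing step are not modelled):

```python
def orb_split(particles, depth):
    # Recursively bisect the particle set along its axis of largest extent
    # until the requested depth, yielding 2**depth sub-domains with
    # near-equal particle counts (one per worker node).
    if depth == 0 or len(particles) <= 1:
        return [particles]
    extent = lambda a: max(p[a] for p in particles) - min(p[a] for p in particles)
    axis = max((0, 1), key=extent)          # pick the longer axis (x or y)
    ordered = sorted(particles, key=lambda p: p[axis])
    mid = len(ordered) // 2                  # bisect at the median particle
    return orb_split(ordered[:mid], depth - 1) + orb_split(ordered[mid:], depth - 1)

# 8 particles, bisected twice -> 4 sub-domains of 2 particles each.
subdomains = orb_split([(x, y) for x in range(4) for y in range(2)], depth=2)
print([len(s) for s in subdomains])  # [2, 2, 2, 2]
```

Cutting at the median particle rather than the geometric midpoint is what keeps per-worker counts balanced even when the particle distribution is skewed.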
20120137303 | COMPUTER SYSTEM - Provided is a computer system capable of reliably eliminating duplicated data regardless of the size of the data write unit from the host computer to the storage subsystem or the management unit size in the elimination of duplicated data. | 05-31-2012 |
20120151491 | Redistributing incomplete segments for processing tasks in distributed computing - A method or system for redistributing incomplete segments for processing tasks by generating a model based on resources of a plurality of separate electronic devices; simulating an assessment task to determine a computation time for the assessment task according to the model; updating the model to optimize the computation time based on a dynamic availability of the resources and a processing requirement of a live task; distributing task segments for processing the live task based on the updated model; and dynamically redistributing incomplete segments for processing the live task by further updating the model based on the dynamic availability of the resources. | 06-14-2012 |
20120151492 | MANAGEMENT OF COPY SERVICES RELATIONSHIPS VIA POLICIES SPECIFIED ON RESOURCE GROUPS - Exemplary method, system, and computer program embodiments for prescribing copy services relationships for storage resources organized into a plurality of resource groups in a computing storage environment are provided. In one embodiment, at least one additional resource group attribute is defined to specify at least one policy prescribing a copy services relationship between two of the storage resources. Pursuant to a request to establish the copy services relationship between the two storage resources, each of the two storage resources exchange resource group labels corresponding to which of the plurality of resource groups the two storage resources are assigned, and each of the two storage resources validates the requested copy services relationship and the resource group label of an opposing one of the two storage resources against the individual ones of the at least one additional resource group attribute in the resource group object to determine if the copy services relationship may proceed. | 06-14-2012 |
20120151493 | RELAY APPARATUS AND RELAY MANAGEMENT APPARATUS - A relay apparatus executes a reallocation process so as to transfer data received from an information processing apparatus allocated to the relay apparatus to a destination apparatus. The reallocation process includes the following operations. The relay apparatus determines reallocatability of the information processing apparatus on the basis of a status of receiving transfer data from the information processing apparatus. The reallocatability represents whether the information processing apparatus is reallocatable to another apparatus. The relay apparatus stores reallocatability information indicating the determined reallocatability in a storage unit. The relay apparatus determines whether to reallocate the information processing apparatus on the basis of the reallocatability information stored in the storage unit. The relay apparatus reallocates the information processing apparatus determined to be reallocated. | 06-14-2012 |
20120159502 | VARIABLE INCREMENT REAL-TIME STATUS COUNTERS - Processes, devices, and articles of manufacture having provisions to monitor and track multi-core Central Processor Unit resource allocation and deallocation in real-time are provided. The allocation and deallocation may be tracked by two counters with the first counter incrementing up or down depending upon the allocation or deallocation at hand, and with the second counter being updated when the first counter value meets or exceeds a threshold value. | 06-21-2012 |
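The two-counter scheme in the entry above can be sketched as follows (a hypothetical Python illustration; the counter names, the symmetric threshold handling, and the example numbers are assumptions, not details from the abstract):

```python
class StatusCounter:
    def __init__(self, threshold):
        self.fine = 0              # first counter: increments up/down per
                                   # allocation or deallocation event
        self.coarse = 0            # second counter: only updated when the
                                   # first counter reaches the threshold
        self.threshold = threshold

    def allocate(self, n=1):
        self.fine += n
        self._flush()

    def deallocate(self, n=1):
        self.fine -= n
        self._flush()

    def _flush(self):
        # Move whole threshold-sized chunks from the fine counter to the
        # coarse counter whenever |fine| meets or exceeds the threshold.
        while abs(self.fine) >= self.threshold:
            step = self.threshold if self.fine > 0 else -self.threshold
            self.coarse += step
            self.fine -= step

c = StatusCounter(threshold=10)
c.allocate(7)
c.allocate(5)
print(c.coarse, c.fine)  # 10 2
```

Readers of the coarse counter see a value that lags true usage by less than one threshold, while the frequently updated fine counter absorbs the per-event traffic.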
20120159503 | WORK FLOW COMMAND PROCESSING SYSTEM - A method including receiving a work flow for the ingestion, transformation, and distribution of content, wherein the work flow includes one or more work unit tasks; selecting one of the one or more work unit tasks for execution when resources are available; retrieving work unit task information that includes a work unit definition that specifies which of the one or more other work unit tasks are capable of being at least one of an input to the one of the one or more work unit tasks or an output for the one of the one or more work unit tasks, and work unit task connector parameters that specify a type of input content and a type of output content; and executing the one of the one or more work unit tasks based on the translated work unit task information. | 06-21-2012 |
20120159504 | Mutual-Exclusion Algorithms Resilient to Transient Memory Faults - Techniques for implementing mutual-exclusion algorithms that are also fault-resistant are described herein. For instance, this document describes systems that implement fault-resistant, mutual-exclusion algorithms that at least prevent simultaneous access of a shared resource by multiple threads when (i) one of the multiple threads is in its critical section, and (ii) the other thread(s) are waiting in a loop to enter their respective critical sections. In some instances, these algorithms are fault-tolerant to prevent simultaneous access of the shared resource regardless of a state of the multiple threads executing on the system. In some instances, these algorithms may resist (e.g., tolerate entirely) transient memory faults (or “soft errors”). | 06-21-2012 |
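For context, the classic two-thread busy-wait loop that such algorithms harden is Peterson's algorithm; a plain (deliberately not fault-resistant) Python rendering is below. The resilience to transient memory faults claimed in the abstract is not modelled here, and the `time.sleep(0)` yield is an assumption to keep the spin loop cooperative under CPython:

```python
import threading
import time

flag = [False, False]   # flag[i]: thread i wants to enter its critical section
turn = 0                # which thread must yield when both want in
counter = 0             # shared resource protected by the lock

def worker(tid, iters):
    global turn, counter
    other = 1 - tid
    for _ in range(iters):
        flag[tid] = True                      # entry: announce intent
        turn = other                          # entry: give the other priority
        while flag[other] and turn == other:
            time.sleep(0)                     # busy-wait; yield to the other thread
        counter += 1                          # critical section
        flag[tid] = False                     # exit protocol

threads = [threading.Thread(target=worker, args=(i, 1000)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 2000
```

A single bit-flip in `flag` or `turn` while a thread is spinning can break this plain version, which is precisely the failure mode the fault-resistant variants described in the abstract are designed to tolerate.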
20120159505 | Resilient Message Passing Applications - A message passing system may execute a parallel application on multiple compute nodes. Each compute node may perform a single workload on at least two physical computing resources. Messages may be passed from one compute node to another, and each physical computing resource assigned to a compute node may receive and process the messages. In some embodiments, the compute nodes may be virtualized so that a message passing system may only detect a single compute node and not the multiple underlying physical computing resources. | 06-21-2012 |
20120159506 | SCHEDULING AND MANAGEMENT IN A PERSONAL DATACENTER - A personal datacenter system is described herein that provides a framework for leveraging multiple heterogeneous computers in a dynamically changing environment together as an ad-hoc cluster for performing parallel processing of various tasks. A home environment is much more heterogeneous and dynamic than a typical datacenter, and typical datacenter scheduling strategies do not work well for these types of small clusters. Machines in a home are likely to be powered on and off, be removed and taken elsewhere, and be connected by an ad-hoc network topology with a mix of wired and wireless technologies. The personal data center system provides components to overcome these differences. The system identifies a dynamically available set of machines, characterizes their performance, discovers the network topology, and monitors the available communications bandwidth between machines. This information is then used to compute an efficient execution plan for data-parallel and/or High Performance Computing (HPC)-style applications. | 06-21-2012 |
20120159507 | COMPILING APPARATUS AND METHOD OF A MULTICORE DEVICE - An apparatus and method capable of reducing idle resources in a multicore device and improving the use of available resources in the multicore device are provided. The apparatus includes a static scheduling unit configured to generate one or more task groups, and to allocate the task groups to virtual cores by dividing or combining the tasks included in the task groups based on the execution time estimates of the task groups. The apparatus also includes a dynamic scheduling unit configured to map the virtual cores to physical cores. | 06-21-2012 |
20120159508 | TASK MANAGEMENT SYSTEM, TASK MANAGEMENT METHOD, AND PROGRAM - A task management system includes a capacity information acquisition section which acquires, from a computation device which executes a computation using electrical power derived from renewable energy, capacity information which shows the computation capacity of the computation device which is predicted based on weather information of a region where the computation device is disposed, and a task management section which allocates a computation task to a plurality of the computation devices based on the capacity information which is acquired from the plurality of computation devices using the capacity information acquisition section. | 06-21-2012 |
20120159509 | LANDSCAPE REORGANIZATION ALGORITHM FOR DYNAMIC LOAD BALANCING - A method and system for reorganizing a distributed computing landscape for dynamic load balancing is presented. A method includes the steps of collecting information about resource usage by a plurality of hosts in a distributed computing system, and generating a target distribution of the resource usage for the distributed computing system. The method further includes the step of generating an estimate of an improvement of the resource usage according to a reorganization plan. | 06-21-2012 |
20120167111 | RESOURCE DEPLOYMENT BASED ON CONDITIONS - Architecture that facilitates the package partitioning of application resources based on conditions, and the package applicability based on the conditions. An index is created for a unified lookup of the available resources. At build time of an application, the resources are indexed and determined to be applicable based on the conditions. The condition under which the resource is applicable is then used to automatically partition the resource into an appropriate package. Each resource package then becomes applicable under the conditions in which the resources within it are applicable, and is deployed to the user if the user merits the conditions (e.g., an English user will receive an English package of English strings, but not a French package). Before the application is run, the references to the resources are merged and can be used to do appropriate lookup of what resources are available. | 06-28-2012 |
20120167112 | Method for Resource Optimization for Parallel Data Integration - For optimizing resources for a parallel data integration job, a job request is received, which specifies a parallel data integration job to deploy in a grid. Grid resource utilizations are predicted for hypothetical runs of the specified job on respective hypothetical grid resource configurations. This includes automatically predicting grid resource utilizations by a resource optimizer module responsive to a model based on a plurality of actual runs of previous jobs. A grid resource configuration is selected for running the parallel data integration job, which includes the optimizer module automatically selecting a grid resource configuration responsive to the predicted grid resource utilizations and an optimization criterion. | 06-28-2012 |
20120167113 | VARIABLE INCREMENT REAL-TIME STATUS COUNTERS - Processes, devices, and articles of manufacture having provisions to monitor and track multi-core Central Processor Unit resource allocation and deallocation in real-time are provided. The allocation and deallocation may be tracked by two counters with the first counter incrementing up or down depending upon the allocation or deallocation at hand, and with the second counter being updated when the first counter value meets or exceeds a threshold value. | 06-28-2012 |
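The two-counter scheme summarized in the 20120167113 abstract can be sketched as follows. This is a minimal illustrative model only: the class name, method names, and the exact flush policy are assumptions, not taken from the patent.

```python
class VariableIncrementCounter:
    """Hypothetical model of the abstract's two-counter scheme: a first
    counter tracks allocations/deallocations in real time, and a second
    counter is updated only when the first meets or exceeds a threshold."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.delta = 0   # first counter: net allocations since last flush
        self.total = 0   # second counter: published resource count

    def allocate(self, n=1):
        self.delta += n
        self._maybe_flush()

    def deallocate(self, n=1):
        self.delta -= n
        self._maybe_flush()

    def _maybe_flush(self):
        # Publish to the second counter only when the pending change is
        # large enough, reducing update traffic on the shared counter.
        if abs(self.delta) >= self.threshold:
            self.total += self.delta
            self.delta = 0
```

Batching updates this way keeps the frequently written first counter core-local while the second counter changes rarely, which is the plausible motivation for the threshold.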
20120174112 | APPLICATION RESOURCE SWITCHOVER SYSTEMS AND METHODS - Application resource switchover systems and methods are presented. In one embodiment, an application resource switchover method comprises receiving a switchover indication, wherein the switchover indication includes an indication to switch over execution of at least one service of an application running on a primary system resource to running on a secondary system resource; performing a switchover preparation process, wherein the switchover preparation process includes automatically generating a switchover plan including indications of switchover operations for performance of a switchover process; and performing the switchover process in which the at least one of the application services is brought up on the secondary system resource in accordance with the plan of switchover operations. In one embodiment, automatically generating a plan of switchover operations includes analyzing the switchover indication, wherein the analyzing includes determining a type of switchover corresponding to the switchover indication. There can be a variety of switchover types (e.g., a migration switchover, a recovery switchover, etc.). | 07-05-2012 |
20120174113 | TENANT VIRTUALIZATION CONTROLLER FOR A MULTI-TENANCY ENVIRONMENT - A system and method for performing load balancing of systems in a multi-tenancy computing environment by shifting tenants from an overloaded system to a non-overloaded system. Initially, a determination is made as to whether a first tenant desires an access to an instance of a software application. The same instance of the software application is being accessed by other tenants of a first system. If the tenant desires access to the same instance of the software application, the tenant is created at the first system. The created first tenant and the other tenants exist in a multi-tenancy computing environment that enables the first tenant and the other tenants to access a same instance of a software application. Then, it is checked whether the first system is overloaded. If the first system is overloaded, load balancing is performed as follows. The first tenant is exported from the overloaded first system to a lesser loaded second system. The data containers of the first tenant remain stationary at a virtual storage. The first tenant is enabled to access the same instance of the software application that it was accessing while at the first system, but now using memory resources and processing resources of the second system. Related apparatus, systems, techniques and articles are also described. | 07-05-2012 |
20120174114 | METHOD OF CALCULATING PROCESSOR UTILIZATION RATE IN SMT PROCESSOR - A method of calculating the processor utilization for each of the logical processors in a computer, including the steps of: dividing the computation interval in which the processor utilization by each logical processor is to be calculated into a single task mode (ST) execution interval and a multitask mode (MT) execution interval, each calculated appropriately based on the before-and-after relation between the two times; and adding the MT execution interval, multiplied by a predetermined MT mode processor resource assignment ratio, to the ST execution interval to obtain the processor utilization for the calculation-targeted logical processor in the computation interval. | 07-05-2012 |
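The utilization formula described in the 20120174114 abstract reduces to a simple weighted sum; the sketch below is an illustrative rendering with hypothetical names, not code from the patent.

```python
def logical_cpu_utilization(st_interval, mt_interval, mt_ratio):
    """Per the abstract: single-task (ST) execution time counts fully
    toward a logical processor's utilization, while multitask (MT)
    execution time is weighted by the predetermined MT mode processor
    resource assignment ratio for that logical processor."""
    return st_interval + mt_interval * mt_ratio
```

For example, with a 0.5 assignment ratio (two symmetric hardware threads), 40 ms of ST time plus 60 ms of MT time yields 70 ms of attributed processor time.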
20120174115 | RUNTIME ENVIRONMENT FOR VIRTUALIZING INFORMATION TECHNOLOGY APPLIANCES - A system for virtualizing information technology (IT) appliances can include an IT appliance hosting facilities software. The IT appliance hosting facilities software can be implemented at a layer of abstraction above a virtual machine host, which is implemented in a layer of abstraction above a hardware layer of a computing system. The IT appliance hosting facilities software can include programmatic code functioning as virtualized hardware upon which a set of IT appliance software modules are able to concurrently run. The IT appliance hosting facilities software can provide caching, application level security, and a standardized framework for running the IT appliance software modules, which are configured in conformance with the standardized framework. | 07-05-2012 |
20120174116 | HIGH PERFORMANCE LOCKS - Systems and methods of enhancing computing performance may provide for detecting a request to acquire a lock associated with a shared resource in a multi-threaded execution environment. A determination may be made as to whether to grant the request based on a context-based lock condition. In one example, the context-based lock condition includes a lock redundancy component and an execution context component. | 07-05-2012 |
20120180061 | Organizing Task Placement Based On Workload Characterizations - Task placement is influenced within a multiple processor computer. Tasks are classified as either memory bound or CPU bound by observing certain performance counters over the task execution. During a first pass of task load balance, tasks are balanced across various CPUs to achieve a fairness goal, where tasks are allocated CPU resources in accordance with their established fairness priority value. During a second pass of task load balance, tasks are rebalanced across CPUs to reduce CPU resource contention, such that the rebalance of tasks in the second pass does not violate fairness goals established in the first pass. In one embodiment, the second pass could involve re-balancing memory bound tasks across different cache domains, where CPUs in a cache domain share the same last-level CPU cache, such as an L3 cache. In another embodiment, the second pass could involve re-balancing CPU bound tasks across different CPU domains of a cache domain, where CPUs in a CPU domain could be sharing some or all of CPU execution unit resources. The two passes could be executed at different frequencies. | 07-12-2012 |
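The classification step in the 20120180061 abstract (memory bound vs. CPU bound, from observed performance counters) can be sketched as below. The choice of counters and the threshold value are assumptions for illustration; the patent does not specify them here.

```python
def classify_task(instructions, cache_misses, miss_threshold=0.05):
    """Classify a task as "memory" or "cpu" bound from performance
    counters sampled over its execution. A task whose last-level cache
    miss rate meets the (illustrative) threshold is treated as memory
    bound and becomes a candidate for re-balancing across cache domains
    in the second load-balance pass."""
    miss_rate = cache_misses / max(instructions, 1)
    return "memory" if miss_rate >= miss_threshold else "cpu"
```

A second pass would then migrate only same-class tasks, and only between CPUs whose fairness-pass allocations permit the move.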
20120180062 | System and Method for Controlling Excessive Parallelism in Multiprocessor Systems - Execution of a computer program on a multiprocessor system is monitored to detect possible excess parallelism causing resource contention and the like and, in response, to controllably limit the number of processors applied to parallelize program components. | 07-12-2012 |
20120180063 | Method and Apparatus for Providing Management of Parallel Library Implementation - A method for providing management of parallel library implementations relative to available resources may include receiving an indication of a registration of a parallel library and determining processor utilization information based on current load conditions. The processor utilization information may be indicative of a number of processors to be made available to the parallel library for a process associated with the parallel library. The method may further include causing provision of the processor utilization information to the parallel library. A corresponding apparatus is also provided. | 07-12-2012 |
20120180064 | CENTRALIZED PLANNING FOR REAL-TIME SELF TUNING OF PLANNED ACTIONS IN A DISTRIBUTED ENVIRONMENT - Automatic programming, scheduling, and control of planned activities at “worker nodes” in a distributed environment are provided by a “real-time self tuner” (RTST). The RTST provides self-tuning of controlled interoperation among an interconnected set of distributed components (i.e., worker nodes) including, for example, home appliances, security systems, lighting, sensor networks, medical electronic devices, wearable computers, robotics, industrial controls, wireless communication systems, audio nets, distributed computers, toys, games, etc. The RTST acts as a centralized “planner” that is either one of the nodes or a dedicated computing device. A set of protocols allow applications to communicate with the nodes, and allow one or more nodes to communicate with each other. Self-tuning of the interoperation and scheduling of tasks to be performed at each node uses an on-line sampling driven statistical model and predefined node “behavior patterns” to predict and manage resource requirements needed by each node for completing assigned tasks. | 07-12-2012 |
20120180065 | METHODS AND APPARATUS FOR DETECTING DEADLOCK IN MULTITHREADING PROGRAMS - A method of detecting deadlock in a multithreading program is provided. An invocation graph is constructed having a single root and a plurality of nodes corresponding to one or more functions written in code of the multithreading program. A resource graph is computed in accordance with one or more resource sets in effect at each node of the invocation graph. It is determined whether cycles exist between two or more nodes of the resource graph. A cycle is an indication of deadlock in the multithreading program. | 07-12-2012 |
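The cycle test at the heart of the 20120180065 abstract (a cycle among nodes of the resource graph indicates potential deadlock) is a standard directed-graph check; here is a minimal depth-first-search sketch with illustrative names.

```python
def has_cycle(resource_graph):
    """Detect a cycle in a directed resource graph, represented as a
    dict mapping each node to the nodes it waits on. A cycle between
    two or more nodes signals potential deadlock, per the abstract."""
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on DFS stack / done
    color = {n: WHITE for n in resource_graph}

    def visit(n):
        color[n] = GRAY
        for m in resource_graph.get(n, ()):
            if color.get(m, WHITE) == GRAY:   # back edge: cycle found
                return True
            if color.get(m, WHITE) == WHITE and visit(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in list(resource_graph))
```

In the patented scheme the graph would be computed from the resource sets in effect at each node of the invocation graph rather than supplied directly.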
20120185863 | METHODS FOR RESTRICTING RESOURCES USED BY A PROGRAM BASED ON ENTITLEMENTS - In response to a request for launching a program, a list of one or more application frameworks to be accessed by the program during execution of the program is determined. Zero or more entitlements representing one or more resources entitled by the program during the execution are determined. A set of one or more rules based on the entitlements of the program is obtained from at least one of the application frameworks. The set of one or more rules specifies one or more constraints of resources associated with the at least one application framework. A security profile is dynamically compiled for the program based on the set of one or more rules associated with the at least one application framework. The compiled security profile is used to restrict the program from accessing at least one resource of the at least one application frameworks during the execution of the program. | 07-19-2012 |
20120185864 | Integrated Environment for Execution Monitoring and Profiling of Applications Running on Multi-Processor System-on-Chip - There is provided a system and method for providing an integrated environment for execution monitoring and profiling of applications running on multi-processor system-on-chips. There is provided a method comprising obtaining task execution data of an application, the task execution data including a plurality of task executions assigned to a plurality of hardware resources, showing a scheduler view of the plurality of task executions on a display, receiving a modification request for a selected task execution from the plurality of task executions, reassigning the plurality of task executions to the plurality of hardware resources based on implementing the modification request, and updating the scheduler view on the display. As a result, the high level results of specific low level optimizations may be tested and retried to discover which optimization routes provide the greatest benefits. | 07-19-2012 |
20120185865 | MANAGING ACCESS TO A SHARED RESOURCE IN A DATA PROCESSING SYSTEM - Processes requiring access to shared resources are adapted to issue a reservation request, such that a place in a resource access queue, such as one administered by means of a semaphore system, can be reserved for the process. The reservation is issued by a Reservation Management module at a time calculated to ensure that the reservation reaches the head of the queue as closely as possible to the moment at which the process actually needs access to the resource. The calculation may be made on the basis of priority information concerning the process itself, and statistical information gathered concerning historical performance of the queue. | 07-19-2012 |
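The timing calculation described in the 20120185865 abstract (issue the reservation so it reaches the head of the queue close to when the process needs the resource) might look like the sketch below. The mean-of-history estimator and the priority adjustment are illustrative assumptions; the patent only says the calculation may use priority information and queue statistics.

```python
def reservation_issue_time(need_time, wait_history, priority_boost=0.0):
    """Estimate when the Reservation Management module should enqueue a
    reservation. wait_history holds observed queue wait times; the
    expected wait (here, the historical mean) is subtracted from the
    moment the process actually needs the resource, optionally reduced
    by a priority-based boost."""
    if wait_history:
        expected_wait = sum(wait_history) / len(wait_history)
    else:
        expected_wait = 0.0
    return need_time - max(expected_wait - priority_boost, 0.0)
```

Issuing too early wastes the process's queue slot; too late stalls the process, so the estimator's accuracy directly trades those two costs.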
20120185866 | SYSTEM AND METHOD FOR MANAGING THE INTERLEAVED EXECUTION OF THREADS - A computer system for managing the execution of threads including at least one central processing unit which performs interleaved execution of a plurality of threads throughout a plurality of virtual processors from said same central processing unit, and a handler for distributing the execution of the threads throughout the virtual processors. | 07-19-2012 |
20120192198 | Method and System for Memory Aware Runtime to Support Multitenancy in Heterogeneous Clusters - The invention solves the problem of sharing many-core devices (e.g. GPUs) among concurrent applications running on heterogeneous clusters. In particular, the invention provides transparent mapping of applications to many-core devices (that is, the user does not need to be aware of the many-core devices present in the cluster and of their utilization), time-sharing of many-core devices among applications also in the presence of conflicting memory requirements, and dynamic binding/unbinding of applications to/from many-core devices (that is, applications do not need to be statically mapped to the same many-core device for their whole life-time). | 07-26-2012 |
20120192199 | RESOURCE ALLOCATION DURING WORKLOAD PARTITION RELOCATION - A method of relocating a workload partition (WPAR) from a departure logical partition (LPAR) to an arrival LPAR determines an amount of a resource allocated to the relocating WPAR on the departure LPAR and allocates to the relocating WPAR on the arrival LPAR an amount of the resource substantially equal to the amount of the resource allocated to the relocating WPAR on the departure LPAR. | 07-26-2012 |
20120198465 | System and Method for Massively Multi-Core Computing Systems - A system and method for massively multi-core computing are provided. A method for computer management includes determining if there is a need to allocate at least one first resource to a first plane. If there is a need to allocate at least one first resource, the at least one first resource is selected from a resource pool based on a set of rules and allocated to the first plane. If there is not a need to allocate at least one first resource, it is determined if there is a need to de-allocate at least one second resource from a second plane. If there is a need to de-allocate at least one second resource, the at least one second resource is de-allocated. The first plane includes a control plane and/or a data plane and the second plane includes the control plane and/or the data plane. The resources are unchanged if there is not a need to allocate at least one first resource and if there is not a need to de-allocate at least one second resource. | 08-02-2012 |
20120198466 | DETERMINING AN ALLOCATION OF RESOURCES FOR A JOB - A job profile describes characteristics of a job. A performance parameter is calculated based on the job profile, and using a value of the performance parameter, an allocation of resources is determined to assign to the job to meet a performance goal associated with a job. | 08-02-2012 |
20120198467 | System and Method for Enforcing Future Policies in a Compute Environment - A disclosed system receives a request for resources, generates a credential map for each credential associated with the request, the credential map including a first type of resource mapping and a second type of resource mapping. The system generates a resource availability map, generates a first composite intersecting map that intersects the resource availability map with a first type of resource mapping of all the generated credential maps and generates a second composite intersecting map that intersects the resource availability map and a second type of resource mapping of all the generated credential maps. With the first and second composite intersecting maps, the system can allocate resources within the compute environment for the request based on at least one of the first composite intersecting map and the second composite intersecting map. | 08-02-2012 |
20120198468 | METHOD AND SYSTEM FOR COMMUNICATING BETWEEN ISOLATION ENVIRONMENTS - A method and system for aggregating installation scopes within an isolation environment, where the method includes first defining an isolation environment for encompassing an aggregation of installation scopes. Associations are created between a first application and a first installation scope. When the first application requires the presence of a second application within the isolation environment for proper execution, an image of the required second application is mounted onto a second installation scope and an association between the second application and the second installation scope is created. Another association is created between the first installation scope and the second installation scope, and this third association is created within a third installation scope. Each of the first, second, and third installation scopes is stored, and the first application is launched into the defined isolation environment. | 08-02-2012 |
20120198469 | Method for Managing Hardware Resources Within a Simultaneous Multi-Threaded Processing System - A method for managing hardware resources and threads within a data processing system is disclosed. Compilation attributes of a function are collected during and after the compilation of the function. The pre-processing attributes of the function are also collected before the execution of the function. The collected attributes of the function are then analyzed, and a runtime configuration is assigned to the function based on the result of the attribute analysis. The runtime configuration may include, for example, the designation of the function to be executed under either a single-threaded mode or a simultaneous multi-threaded mode. During the execution of the function, real-time attributes of the function are continuously collected. If necessary, the runtime configuration under which the function is being executed can be changed based on the real-time attributes collected during the execution of the function. | 08-02-2012 |
20120204186 | PROCESSOR RESOURCE CAPACITY MANAGEMENT IN AN INFORMATION HANDLING SYSTEM - An operating system or virtual machine of an information handling system (IHS) initializes a resource manager to provide processor resource utilization management during workload or application execution. The resource manager captures short term interval (STI) and long term interval (LTI) processor resource utilization data and stores that utilization data within an information store of the virtual machine. If a capacity on demand mechanism is enabled, the resource manager modifies a reserved capacity value. The resource manager selects previous STI and LTI values for comparison with current resource utilization and may apply a safety margin to generate a reserved capacity or target resource utilization value for the next short term interval (STI). The hypervisor may modify existing virtual processor allocation to match the target resource utilization. | 08-09-2012 |
20120210328 | Guarded, Multi-Metric Resource Control for Safe and Efficient Microprocessor Management - A mechanism is provided for guarded, multi-metric resource control. Monitoring is performed for an intended action to address a negative condition from a resource manager in a plurality of resource managers in the data processing system. Responsive to receiving the intended action, a determination is made as to whether the intended action will cause an additional negative condition within the data processing system. Responsive to determining that the intended action will cause the additional negative condition within the data processing system, at least one alternative action is identified to be implemented in the data processing system that addresses the negative condition while not causing any additional negative condition. The at least one alternative action is then implemented in the data processing system. | 08-16-2012 |
20120210329 | STORAGE SYSTEM AND METHOD FOR CONTROLLING THE SAME - Optimum load distribution processing is selected and executed based on settings made by a user in consideration of load changes caused by load distribution in a plurality of asymmetric cores, by using: a controller having a plurality of cores, and configured to extract, for each LU, a pattern showing the relationship between a core having an LU ownership and a candidate core as an LU ownership change destination based on LU ownership management information; to measure, for each LU, the usage of a plurality of resources; to predict, for each LU based on the measurement results, a change in the usage of the plurality of resources and overhead to be generated by transfer processing itself; to select, based on the respective prediction results, a pattern that matches the user's setting information; and to transfer the LU ownership to the core belonging to the selected pattern. | 08-16-2012 |
20120210330 | Executing A Distributed Java Application On A Plurality Of Compute Nodes - Methods, systems, and products are disclosed for executing a distributed Java application on a plurality of compute nodes. The Java application includes a plurality of jobs distributed among the plurality of compute nodes. The plurality of compute nodes are connected together for data communications through a data communication network. Each of the plurality of compute nodes has installed upon it a Java Virtual Machine (‘JVM’) capable of supporting at least one job of the Java application. Executing a distributed Java application on a plurality of compute nodes includes: tracking, by an application manager, a just-in-time (‘JIT’) compilation history for the JVMs installed on the plurality of compute nodes; and configuring, by the application manager, the plurality of jobs for execution on the plurality of compute nodes in dependence upon the JIT compilation history for the JVMs installed on the plurality of compute nodes. | 08-16-2012 |
20120210331 | PROCESSOR RESOURCE CAPACITY MANAGEMENT IN AN INFORMATION HANDLING SYSTEM - An operating system or virtual machine of an information handling system (IHS) initializes a resource manager to provide processor resource utilization management during workload or application execution. The resource manager captures short term interval (STI) and long term interval (LTI) processor resource utilization data and stores that utilization data within an information store of the virtual machine. If a capacity on demand mechanism is enabled, the resource manager modifies a reserved capacity value. The resource manager selects previous STI and LTI values for comparison with current resource utilization and may apply a safety margin to generate a reserved capacity or target resource utilization value for the next short term interval (STI). The hypervisor may modify existing virtual processor allocation to match the target resource utilization. | 08-16-2012 |
20120216209 | VISUALIZATION-CENTRIC PERFORMANCE-BASED VOLUME ALLOCATION - A method, system, and computer program product for visualization-centric performance-based volume allocation in a data storage system using a processor in communication with a memory device is provided. A unified resource graph representative of a global hierarchy of storage components in the data storage system, including each of a plurality of storage controllers, is generated. The unified resource graph includes a common root node and a plurality of subtree nodes corresponding to each of a plurality of nodes internal to the plurality of storage controllers. The common root node and the plurality of subtree nodes are ordered in a top-down orientation. Scalable volume provisioning of an existing or new workload amount by graphical manipulation of at least one of the storage components represented by the unified resource graph is performed based on an input. | 08-23-2012 |
20120216210 | PROCESSOR WITH RESOURCE USAGE COUNTERS FOR PER-THREAD ACCOUNTING - Processor time accounting is enhanced by per-thread internal resource usage counter circuits that account for usage of processor core resources to the threads that use them. Relative resource use can be determined by detecting events such as instruction dispatches for multiple threads active within the processor, which may include idle threads that are still occupying processor resources. The values of the resource usage counters are used periodically to determine relative usage of the processor core by the multiple threads. If all of the events are for a single thread during a given period, the processor time is allocated to the single thread. If no events occur in the given period, then the processor time can be equally allocated among threads. If multiple threads are generating events, a fractional resource usage can be determined for each thread and the counters may be updated in accordance with their fractional usage. | 08-23-2012 |
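The period-apportioning rule in the 20120216210 abstract (all events from one thread: charge it the whole period; no events: split equally; otherwise: charge fractionally) can be rendered as a small sketch. Names and the dict-based interface are illustrative; in the patent this is counter circuitry, not software.

```python
def apportion_period(period_cycles, dispatch_events):
    """Split one accounting period's processor time among threads.
    dispatch_events maps thread id -> dispatch-event count observed in
    the period. Per the abstract: a single active thread absorbs the
    whole period, zero events splits it equally, and mixed activity is
    charged in proportion to each thread's events."""
    total = sum(dispatch_events.values())
    if total == 0:
        share = period_cycles / len(dispatch_events)
        return {t: share for t in dispatch_events}
    return {t: period_cycles * c / total for t, c in dispatch_events.items()}
```

Note that the equal split covers idle threads that still occupy processor resources, which a naive events-only accounting would charge nothing.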
20120216211 | AUTHENTICATING A PROCESSING SYSTEM ACCESSING A RESOURCE - Provided are a method, system, and article of manufacture for authenticating a processing system accessing a resource. An association of processing system identifiers with resources, including a first and second resources, is maintained. A request from a requesting processing system in a host is received for use of a first resource that provides access to a second resource, wherein the request is generated by processing system software and wherein the request further includes a submitted processing system identifier included in the request by host hardware in the host. A determination is made as to whether the submitted processing system identifier is one of the processing system identifiers associated with the first and second resources. The requesting processing system is provided access to the first resource that the processing system uses to access the second resource. | 08-23-2012 |
20120216212 | ASSIGNING A PORTION OF PHYSICAL COMPUTING RESOURCES TO A LOGICAL PARTITION - A computer implemented method includes determining first characteristics of a first logical partition, the first characteristics including a memory footprint characteristic. The method includes assigning a first portion of a first set of physical computing resources to the first logical partition. The first set of physical computing resources includes a plurality of processors that includes a first processor having a first processor type and a second processor having a second processor type. The first portion includes the second processor. The method includes dispatching the first logical partition to execute using the first portion. The method includes creating a second logical partition that includes the second processor and assigning a second portion of the first set of physical computing resources to the second logical partition. The method includes dispatching the second logical partition to execute using the second portion. | 08-23-2012 |
20120216213 | ELECTRONIC CONTROL UNIT HAVING A REAL-TIME CORE MANAGING PARTITIONING - An electronic control unit having a microcontroller provided with RAM associated with variable data and ROM associated with the code of a software operating system incorporating a real time core for executing computer tasks. The RAM and ROM include zones corresponding to partitions, one of which is allocated to the real time core, while each of the others is allocated to at least one of the tasks. The RAM and the ROM are associated with an address bus that is physically programmed so that each partition is prevented firstly from writing in another one of the zones of the RAM, and secondly from executing another one of the zones of the ROM. The real time core is associated with a timer for allocating an execution time to each partition. | 08-23-2012 |
20120222037 | DYNAMIC REPROVISIONING OF RESOURCES TO SOFTWARE OFFERINGS - The disclosed embodiments provide a system that facilitates the maintenance and execution of a software offering. During operation, the system obtains a policy change associated with a service definition of the software offering. Next, the system updates one or more requirements associated with the software offering based on the policy change. Finally, the system uses the updated requirements to dynamically reprovision one or more resources for use by the software offering during execution of the software offering. | 08-30-2012 |
20120222038 | TASK DEFINITION FOR SPECIFYING RESOURCE REQUIREMENTS - Task definitions are used by a task scheduler and prioritizer to allocate task operations to a plurality of processing units. The task definition is an electronic record that specifies resources needed by, and other characteristics of, a task to be executed. Resources include types of processing nodes desired to execute the task, needed amount or rate of processing cycles, amount of memory capacity, number of registers, input/output ports, buffer sizes, etc. Characteristics of a task include maximum latency time, frequency of execution of a task, communication ports, and other characteristics. An exemplary task definition language and syntax is described that uses constructs including order of attempted scheduling operations, percentage or amount of resources desired by different operations, handling of multiple executable images or modules, overlays, port aliases and other features. | 08-30-2012 |
20120222039 | Resource Data Management - A set of data structures defines resource relationships and locations for a set of resources to form defined resource relationships and defined locations for the set of resources. A receiver obtains, from an unsecure device, replaceable unit data and characterization data for a current resource in the set of resources. A writer merges obtained replaceable unit data for a current resource with obtained characterization data for the current resource for each resource of the set of resources to form a set of data files. | 08-30-2012 |
20120222040 | RESOURCE MANAGEMENT SYSTEM, RESOURCE INFORMATION PROVIDING METHOD AND PROGRAM - [Object] To provide a resource management system capable of stably providing most recently updated resource information at a high speed. | 08-30-2012 |
20120227051 | Composite Contention Aware Task Scheduling - A mechanism is provided for composite contention aware task scheduling. The mechanism performs task scheduling with shared resources in computer systems. A task is a group of instructions. A compute task is a group of compute instructions. A memory task, also referred to as a communication task, may be a group of load/store operations, for example. The mechanism performs composite contention-aware scheduling that considers the interaction among compute tasks, communication tasks, and application threads that include compute and communication tasks. The mechanism performs a composite of memory task throttling and application thread throttling. | 09-06-2012 |
20120227052 | Task launching on hardware resource for client - A system includes a client management component, a monitor component, and a hardware resource component, each of which is implemented in hardware. The client management component chooses a selected client from one or more clients for which a given task is to be fulfilled by a selected hardware resource of one or more hardware resources. The monitor component receives the given task and an identifier of the selected client from the client management component and monitors completion of the given task for the selected client by the selected hardware resource. The hardware resource component receives the given task from the monitor component, chooses the selected hardware resource that is to fulfill the given task, and launches the given task on the selected hardware resource. | 09-06-2012 |
20120227053 | DISTRIBUTED RESOURCE MANAGEMENT IN A PORTABLE COMPUTING DEVICE - In a portable computing device having a node-based resource architecture, a first or distributed node controlled by a first processor but corresponding to a second or native node controlled by a second processor is used to indirectly access a resource of the second node. In a resource graph defining the architecture each node represents an encapsulation of functionality of one or more resources, each edge represents a client request, and adjacent nodes represent resource dependencies. Resources defined by a first graph are controlled by the first processor but not the second processor, while resources defined by a second graph are controlled by the second processor but not the first processor. A client request on the first node may be received from a client under control of the first processor. Then, a client request may be issued on the second node in response to the client request on the first node. | 09-06-2012 |
20120227054 | SYSTEM AND METHOD OF INTERFACING A WORKLOAD MANAGER AND SCHEDULER WITH AN IDENTITY MANAGER - A system, method and computer-readable media for managing a compute environment are disclosed. The method includes importing identity information from an identity manager into a module that performs workload management and scheduling for a compute environment and, unless a conflict exists, modifying the behavior of the workload management and scheduling module to incorporate the imported identity information such that access to and use of the compute environment occurs according to the imported identity information. The compute environment may be a cluster or a grid wherein multiple compute environments communicate with multiple identity managers. | 09-06-2012 |
20120240124 | Performing An Operation Using Multiple Services - Some embodiments provide a method for distributing an operation for processing by a set of background services. The method automatically determines a number of background services for performing an operation. The method partitions the operation into several sub-operations. The method distributes the several sub-operations across the determined number of background services. | 09-20-2012 |
20120240125 | System Resource Management In An Electronic Device - A system and method of managing resources of an electronic device are described. A solver of the electronic device may receive one or more resource requirements from one or more resource requesters executing on the electronic device. The solver determines values for resource characteristics based on the received resource requirements and dependency information defining hierarchical dependencies between resource characteristic values associated with resources of the electronic device. The determined values of the resource characteristics are then provided to the associated resources of the electronic device. | 09-20-2012 |
20120240126 | Partitioned Ticket Locks With Semi-Local Spinning - A partitioned ticket lock may control access to a shared resource, and may include a single ticket value field and multiple grant value fields. Each grant value may be the sole occupant of a respective cache line, an event count or sequencer instance, or a sub-lock. The number of grant values may be configurable and/or adaptable during runtime. To acquire the lock, a thread may obtain a value from the ticket value field using a fetch-and-increment type operation, and generate an identifier of a particular grant value field by applying a mathematical or logical function to the obtained ticket value. The thread may be granted the lock when the value of that grant value field matches the obtained ticket value. Releasing the lock may include computing a new ticket value, generating an identifier of another grant value field, and storing the new ticket value in the other grant value field. | 09-20-2012 |
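The acquire/release protocol described in the abstract above can be sketched as a minimal single-threaded model. Class, field, and method names are illustrative, not taken from the patent; a real implementation would place each grant field on its own cache line and spin on it, which a toy model cannot show:

```python
import itertools

class PartitionedTicketLock:
    """Single-threaded sketch of a partitioned ticket lock:
    one shared ticket counter, multiple grant fields."""

    def __init__(self, num_grants=4):
        self.num_grants = num_grants
        self.ticket = itertools.count()           # fetch-and-increment source
        # Each grant slot would occupy its own cache line in a real lock;
        # slot 0 starts granted so ticket 0 acquires immediately.
        self.grants = [0] + [None] * (num_grants - 1)

    def acquire(self):
        t = next(self.ticket)                     # fetch-and-increment
        slot = t % self.num_grants                # map ticket -> grant field
        # A real thread would spin (semi-locally) until grants[slot] == t.
        assert self.grants[slot] == t, "lock not yet granted"
        return t

    def release(self, t):
        nxt = t + 1                               # compute the new ticket value
        self.grants[nxt % self.num_grants] = nxt  # publish it in the next slot

lock = PartitionedTicketLock()
t0 = lock.acquire()   # ticket 0 maps to slot 0, granted immediately
lock.release(t0)      # publishes ticket 1 into slot 1
t1 = lock.acquire()   # ticket 1 maps to slot 1, granted
```

Spreading grant fields across cache lines is what reduces contention relative to a classic single-field ticket lock: each waiter spins on a different line.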
20120240127 | MATCHING AN AUTONOMIC MANAGER WITH A MANAGEABLE RESOURCE - A method to match an autonomic manager with a manageable resource may include using a management style profile to match the autonomic manager with the manageable resource. The method may also include validating that the autonomic manager can manage the manageable resource using a defined management style of the autonomic manager. | 09-20-2012 |
20120240128 | Memory Access Performance Diagnosis - There is disclosed a solution for obtaining Memory Access Performance metrics in an electronic system comprising a Data Processing Unit, DPU and a synchronous memory device external to the DPU and coupled to the DPU through a memory bus. There is used mixed software and hardware dedicated resources, wherein at least a hardware part of the dedicated resources is comprised in the memory device. | 09-20-2012 |
20120246660 | OPTIMIZED MULTI-COMPONENT CO-ALLOCATION SCHEDULING WITH ADVANCED RESERVATIONS FOR DATA TRANSFERS AND DISTRIBUTED JOBS - Disclosed are systems, methods, computer readable media, and compute environments for establishing a schedule for processing a job in a distributed compute environment. The method embodiment comprises converting a topology of a compute environment to a plurality of endpoint-to-endpoint paths, based on the plurality of endpoint-to-endpoint paths, mapping each replica resource of a plurality of resources to one or more endpoints where each respective resource is available, iteratively identifying schedule costs associated with a relationship between endpoints and resources, and committing a selected schedule cost from the identified schedule costs for processing a job in the compute environment. | 09-27-2012 |
20120254883 | DYNAMICALLY SWITCHING THE SERIALIZATION METHOD OF A DATA STRUCTURE - Embodiments of the invention comprise a method for dynamically switching a serialization method of a data structure. If use of the serialization mechanism is desired, an instruction to obtain the serialization mechanism is received. If use of the serialization mechanism is not desired and if the serialization mechanism is in use, an instruction to obtain the serialization mechanism is received. If use of the serialization mechanism is not desired and if the serialization mechanism is not in use, an instruction to access the data structure without obtaining the serialization mechanism is received. | 10-04-2012 |
20120254884 | DYNAMICALLY SWITCHING THE SERIALIZATION METHOD OF A DATA STRUCTURE - Embodiments of the invention comprise a method for dynamically switching a serialization method of a data structure. If use of the serialization mechanism is desired, an instruction to obtain the serialization mechanism is received. If use of the serialization mechanism is not desired and if the serialization mechanism is in use, an instruction to obtain the serialization mechanism is received. If use of the serialization mechanism is not desired and if the serialization mechanism is not in use, an instruction to access the data structure without obtaining the serialization mechanism is received. | 10-04-2012 |
20120254885 | RUNNING A PLURALITY OF INSTANCES OF AN APPLICATION - Running of a root instance of an application is started. The root instance includes at least one thread. In response to determining that a thread of the root instance runs to a preset freezing point in the application, running of all threads of the root instance is stopped. In response to starting to run an additional instance of the application, a running state of all threads of the root instance is replicated as a running state of all threads of the additional instance of the application. Running all threads of the additional instance of the application is continued. | 10-04-2012 |
20120254886 | Reducing Overheads in Application Processing - A method, a system and a computer program of reducing overheads in multiple applications processing are disclosed. The method includes identifying resources interacting with each of the applications from a set of applications and grouping the applications from the set of applications, resulting in at least one application cluster, in response to the identified resources, wherein overheads associated with re-initialization of agents assigned to the identified resources are reduced. The method further includes assigning an agent corresponding to each of the identified resources and initializing the agent corresponding to each of the identified resources. The method further includes identifying parameters associated with the identified resources, pre-processing the identified parameters for each of the identified resources, and also includes selecting a clustering means for the clustering. | 10-04-2012 |
20120260258 | METHOD AND SYSTEM FOR DYNAMICALLY CONTROLLING POWER TO MULTIPLE CORES IN A MULTICORE PROCESSOR OF A PORTABLE COMPUTING DEVICE - A method and system for dynamically determining the degree of workload parallelism and to automatically adjust the number of cores (and/or processors) supporting a workload in a portable computing device are described. The method and system includes a parallelism monitor module that monitors the activity of an operating system scheduler and one or more work queues of a multicore processor and/or a plurality of central processing units (“CPUs”). The parallelism monitor may calculate a percentage of parallel work based on a current mode of operation of the multicore processor or a plurality of processors. This percentage of parallel work is then passed to a multiprocessor decision algorithm module. The multiprocessor decision algorithm module determines if the current mode of operation for the multicore processor (or plurality of processors) should be changed based on the calculated percentage of parallel work. | 10-11-2012 |
20120260259 | RESOURCE CONSUMPTION WITH ENHANCED REQUIREMENT-CAPABILITY DEFINITIONS - Enhanced requirement-capability definitions are employed for resource consumption and allocation. Business requirements can be specified with respect to content to be hosted, and a decision can be made as to whether, and how, to allocate resources for the content based on the business requirements and resource capabilities. Capability profiles can also be employed to hide underlying resource details while still providing information about resource capabilities. | 10-11-2012 |
20120266176 | Allocating Tasks to Machines in Computing Clusters - Allocating tasks to machines in computing clusters is described. In an embodiment a set of tasks associated with a job are received at a scheduler. In an embodiment an index can be computed for each combination of tasks and processors and stored in a lookup table. In an example the index may include an indication of the preference for the task to be processed on a particular processor, an indication of a waiting time for the task to be processed and an indication of how other tasks being processed in the computing cluster may be penalized by assigning a task to a particular processor. In an embodiment tasks are assigned to a processor by accessing the lookup table, selecting a task for processing using the index and scheduling the selected task for allocation to a processor. | 10-18-2012 |
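The index described in the abstract above combines a preference term, a waiting-time term, and a penalty term per (task, processor) pair. A minimal sketch follows; the linear combination and all names are assumptions, since the abstract does not specify how the three indications are combined:

```python
def build_index_table(tasks, processors, preference, wait, penalty):
    """Compute an index for every (task, processor) combination and
    store it in a lookup table, as the abstract describes.  The
    preference-minus-costs formula is an illustrative assumption."""
    table = {}
    for t in tasks:
        for p in processors:
            table[(t, p)] = preference[(t, p)] - wait[(t, p)] - penalty[(t, p)]
    return table

def assign(task, processors, table):
    # Select the processor with the best (highest) index for this task.
    return max(processors, key=lambda p: table[(task, p)])

tasks = ["t1"]
procs = ["p1", "p2"]
pref = {("t1", "p1"): 5, ("t1", "p2"): 3}
wait = {("t1", "p1"): 2, ("t1", "p2"): 0}
pen  = {("t1", "p1"): 1, ("t1", "p2"): 0}
table = build_index_table(tasks, procs, pref, wait, pen)
best = assign("t1", procs, table)   # p2 scores 3-0-0=3, beating p1 at 5-2-1=2
```

Precomputing the table lets the scheduler answer each assignment with a lookup rather than re-scoring every pair at decision time.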
20120266177 | MANAGEMENT SYSTEM, COMPUTER SYSTEM INCLUDING THE MANAGEMENT SYSTEM, AND MANAGEMENT METHOD - The present invention provides a technique capable of improving use efficiency of storage devices. In this regard, a computer system of the present invention includes: a plurality of storage subsystems; an information processing apparatus coupled to the storage subsystems and including a virtual layer for virtually providing information from the storage subsystems; and a management system that manages the plurality of storage subsystems and the information processing apparatus. The management system manages, on a memory, configuration information of logical volumes allocated to virtual instances managed on a virtual layer of the information processing apparatus and operation information of hardware resources included in the storage subsystems. The management system evaluates use efficiency of the virtual instances based on the configuration information of the logical volumes and the operation information of the hardware resources and outputs an evaluation result. | 10-18-2012 |
20120266178 | System Providing Resources Based on Licensing Contract with User by Correcting the Error Between Estimated Execution Time from the History of Job Execution - A network system includes an application service provider (ASP) which is connected to the Internet and executes an application, and a CPU resource provider which is connected to the Internet and provides a processing service to a particular computational part (e.g., computation intensive part) of the application, wherein: when requesting a job from the CPU resource provider, the application service provider (ASP) sends information about estimated computation time of the job to the CPU resource provider via the Internet; and the CPU resource provider assigns the job by correcting this estimated computation time based on the estimated computation time sent from the application service provider (ASP). | 10-18-2012 |
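The correction step in the abstract above can be sketched briefly. The abstract only says the CPU resource provider corrects the ASP's estimated computation time using the history of job execution; the mean actual-to-estimated ratio used here is an assumed correction rule:

```python
def corrected_estimate(asp_estimate, history):
    """Correct the ASP's estimated computation time using the job
    history.  history is a list of (estimated, actual) time pairs;
    averaging the actual/estimated ratios is an illustrative choice."""
    if not history:
        return asp_estimate
    ratios = [actual / est for est, actual in history]
    factor = sum(ratios) / len(ratios)       # mean correction factor
    return asp_estimate * factor

history = [(10.0, 12.0), (20.0, 22.0)]       # past (estimated, actual) pairs
est = corrected_estimate(30.0, history)      # mean ratio 1.15 -> 34.5
```

Using the corrected value rather than the raw ASP estimate lets the provider assign jobs against a more realistic picture of how long they will actually run.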
20120278811 | STREAM PROCESSING ON HETEROGENEOUS HARDWARE DEVICES - A stream processing execution engine evaluates development-time performance characteristic estimates in combination with run-time parameters to schedule execution of stream processing software components in a stack of a stream processing application that satisfy a defined performance criterion in a heterogeneous hardware device. A stream processing application includes a stack of interdependent stream processing software components. A stream processing execution engine evaluates one or more performance characteristics of multiple computational resources in the heterogeneous hardware device. Each performance characteristic is associated with performance of a computational resource in executing a computational-resource-dependent instance of a stream processing software component. The stream processing execution engine schedules within the run-time environment a computational resource on which to execute a computational-resource-dependent instance of one of the stream processing software components. The computational-resource-dependent instance is targeted for execution on the computational resource that satisfies a performance policy attributed to the stream processing software component. | 11-01-2012 |
20120278812 | TASK ASSIGNMENT IN CLOUD COMPUTING ENVIRONMENT - Technologies are generally described for a system and method for assigning a task in a cloud. In some examples, the method may include receiving a task request relating to a task and determining service related data relating to the task based on the task request. In some examples, the method may include receiving resource data relating to a first and second resource in the cloud. In some examples, the method may include determining a first correlation value between the task and the first resource and a second correlation value between the task and the second resource based on the service related data and the resource data. In some examples, the method may include assigning the task to the first resource based on the first and second correlation value. | 11-01-2012 |
20120284729 | PROCESSOR STATE-BASED THREAD SCHEDULING - Techniques for implementing processor state-based thread scheduling are described that improve processor performance or energy efficiency of a computing device. In one or more embodiments, a power configuration state of a processor is ascertained. The processor or another processor is selected to execute a thread based on the power configuration state of the processor. In other embodiments, power configuration states of processor cores are ascertained. Power configuration state criteria for the processor cores are defined based on the respective power configuration states. One of the processor cores is then selected based on the power configuration state criteria to execute a thread. | 11-08-2012 |
20120284730 | SYSTEM TO PROVIDE COMPUTING SERVICES - A system is provided. The system includes a computing device by which first and second commands are inputted, first and second resources disposed in communication with the computing device to be receptive of the first command and responsive to the first command with first and second energy demands in first and second response times, respectively, and a managing unit. The managing unit is disposed in communication with the computing device to be receptive of the first and second commands and with the first and second resources to allocate tasks associated with the first command to one of the first and second resources. The tasks are allocated in accordance with the second command and the second command is based on the first and second energy demands and the first and second response times. | 11-08-2012 |
20120284731 | TWO-PASS LINEAR COMPLEXITY TASK SCHEDULER - A method for two-pass scheduling of a plurality of tasks generally including steps (A) to (C). Step (A) may assign each of the tasks to a corresponding one or more of a plurality of processors in a first pass through the tasks. The first pass may be non-iterative. Step (B) may reassign the tasks among the processors to shorten a respective load on one or more of the processors in a second pass through the tasks. The second pass may be non-iterative and may begin after the first pass has completed. Step (C) may generate a schedule in response to the assigning and the reassigning. The schedule generally maps the tasks to the processors. | 11-08-2012 |
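The two non-iterative passes of steps (A) and (B) above can be sketched as follows. The abstract fixes only the structure (one assignment sweep, then one reassignment sweep, both non-iterative); the least-loaded-processor heuristic in each pass is an assumption:

```python
def two_pass_schedule(tasks, num_procs):
    """Two-pass, linear-complexity scheduling sketch.
    Pass 1 (step A): one non-iterative sweep assigning each task to the
    currently least-loaded processor.
    Pass 2 (step B): one non-iterative sweep moving a task to a less
    loaded processor when that shortens its source processor's load.
    Returns the task->processor map and final loads (step C)."""
    loads = [0.0] * num_procs
    assignment = {}
    # Pass 1: non-iterative greedy assignment.
    for name, cost in tasks:
        p = min(range(num_procs), key=loads.__getitem__)
        assignment[name] = p
        loads[p] += cost
    # Pass 2: non-iterative reassignment to shorten loads.
    for name, cost in tasks:
        src = assignment[name]
        dst = min(range(num_procs), key=loads.__getitem__)
        if dst != src and loads[dst] + cost < loads[src]:
            loads[src] -= cost
            loads[dst] += cost
            assignment[name] = dst
    return assignment, loads

assignment, loads = two_pass_schedule([("a", 4), ("b", 3), ("c", 2), ("d", 1)], 2)
```

Both sweeps visit each task exactly once, which is what keeps the scheduler linear in the number of tasks rather than iterating to convergence.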
20120284732 | Time-variant scheduling of affinity groups on a multi-core processor - Methods and systems for scheduling applications on a multi-core processor are disclosed, which may be based on association of processor cores, application execution environments, and authorizations that permits efficient and practical means to utilize the simultaneous execution capabilities provided by multi-core processors. The algorithm may support definition and scheduling of variable associations between cores and applications (i.e., multiple associations can be defined so that the cores an application is scheduled on can vary over time as well as what other applications are also assigned to the same cores as part of an association). The algorithm may include specification and control of scheduling activities, permitting preservation of some execution capabilities of a multi-core processor for future growth, and permitting further evaluation of application requirements against the allocated execution capabilities. | 11-08-2012 |
20120291039 | SYSTEM AND METHOD FOR MANAGING A RESOURCE - Systems and methods for managing a resource are disclosed. Resource may include vendors, suppliers, partners and the like. The systems allow users to conduct a weighted analysis of various resources and compare multiple resources on the same scale. Moreover, the systems are configured to grade various resources based on their strategic value to a business. This analysis and the resulting strategic value may be based on qualitative data provided by users and quantitative data captured from the business relationship between the business and the resource. | 11-15-2012 |
20120291040 | AUTOMATIC LOAD BALANCING FOR HETEROGENEOUS CORES - A system and method for efficient automatic scheduling of the execution of work units between multiple heterogeneous processor cores. A processing node includes a first processor core with a general-purpose micro-architecture and a second processor core with a single instruction multiple data micro-architecture. A computer program comprises one or more compute kernels, or function calls. A compiler computes pre-runtime information of the given function call. A runtime scheduler produces one or more work units by matching each of the one or more kernels with an associated record of data. The scheduler assigns work units either to the first or to the second processor core based at least in part on the computed pre-runtime information. In addition, the scheduler is able to change an original assignment for a waiting work unit based on dynamic runtime behavior of other work units corresponding to a same kernel as the waiting work unit. | 11-15-2012 |
20120291041 | ASSIGNING RESOURCES FOR TASKS - A processing subsystem has plural processing stages, where output of one of the plural processing stages is provided to another of the processing stages. Resources are dynamically assigned to the plural processing stages. | 11-15-2012 |
20120291042 | MINIMIZING RESOURCE LATENCY BETWEEN PROCESSOR APPLICATION STATES IN A PORTABLE COMPUTING DEVICE BY SCHEDULING RESOURCE SET TRANSITIONS - Resource state sets corresponding to the application states are maintained in memory. A request may be issued for a processor operating in a first application state corresponding to the first resource state set to transition to a second application state corresponding to the second resource state set. A start time to begin transitioning resources to states indicated in the second resource state set is scheduled based upon an estimated amount of processing time to complete transitioning. A process is begun by which the states of resources are switched from states indicated by the first resource state set to states indicated by the second resource state set. Scheduling the process to begin at a time that allows the process to be completed just in time for the resource states to be immediately available to the processor upon entering the second application state helps minimize adverse effects of resource latency. | 11-15-2012 |
20120291043 | Minimizing Resource Latency Between Processor Application States In A Portable Computing Device By Using A Next-Active State Set - Resource state sets of a portable computing device are managed. A sleep set of resource states, an active set of resource states and a next-active set of resource states are maintained in memory. A request may be issued for a processor to enter into a sleep state or otherwise change from one application state corresponding to one resource state set to another application state corresponding to another application state set. This causes a controller to review a trigger set to determine if a shut down condition for the processor matches one or more conditions listed in the trigger set. If a trigger set matches a shut down condition, then switching states of one or more resources in accordance with the sleep set may be made by the controller. Providing a next-awake set of resource states that is immediately available to the processor upon a wake-up event helps minimize resource latency. | 11-15-2012 |
20120297395 | SCALABLE WORK LOAD MANAGEMENT ON MULTI-CORE COMPUTER SYSTEMS - A system and method for managing the processing of work units being processed on a computer system having shared resources, e.g., multiple processing cores, memory, bandwidth, etc. The system comprises a job scheduler for scheduling access to the shared resources for the work units, and an event trap for capturing resource related allocation events. The event trap is adapted to dynamically adjust the amount of availability associated with each shared resource identified by the resource related allocation event. The allocation event may define a resource release or a resource request. The event trap may increase the amount of availability for allocation events defining a resource release, and decrement the amount of availability for allocation events defining a resource request. The job scheduler allocates resources to the work units using a real time amount of availability of the shared resources in order to maximize a consumption of the shared resources. | 11-22-2012 |
20120297396 | INTERCONNECT STRUCTURE TO SUPPORT THE EXECUTION OF INSTRUCTION SEQUENCES BY A PLURALITY OF ENGINES - A global interconnect system. The global interconnect system includes a plurality of resources having data for supporting the execution of multiple code sequences and a plurality of engines for implementing the execution of the multiple code sequences. A plurality of resource consumers are within each of the plurality of engines. A global interconnect structure is coupled to the plurality of resource consumers and coupled to the plurality of resources to enable data access and execution of the multiple code sequences, wherein the resource consumers access the resources through a per cycle utilization of the global interconnect structure. | 11-22-2012 |
20120304188 | Scheduling Flows in a Multi-Platform Cluster Environment - Techniques for scheduling multiple flows in a multi-platform cluster environment are provided. The techniques include partitioning a cluster into one or more platform containers associated with one or more platforms in the cluster, scheduling one or more flows in each of the one or more platform containers, wherein the one or more flows are created as one or more flow containers, scheduling one or more individual jobs into the one or more flow containers to create a moldable schedule of one or more jobs, flows and platforms, and automatically converting the moldable schedule into a malleable schedule. | 11-29-2012 |
20120304189 | COMPUTER SYSTEM AND ITS CONTROL METHOD - It is an object of this invention to provide a computer system and its control method capable of preventing resources not intended by a superior administrator from being allocated to a certain storage administrator, even when the superior administrator sets a certain authority for that storage administrator and intends to allocate the resources required to enable this authority to the storage administrator. | 11-29-2012 |
20120304190 | Intelligent Memory Device With ASCII Registers - An ASCII-based processing system is disclosed. A memory is divided into a plurality of logical partitions. Each partition has a range of memory addresses and includes information associated with a particular task. Task information includes contents of task state register and one or more task data registers, with each task data register having an ASCII name. Each task data register is successively labeled with a unique alphabetic character label starting with the character ‘A.’ A dataflow unit within the processing system is configured to manage a mapping between registers with ASCII names and the memory addresses of a particular task. Task instructions can include ASCII characters that indicate a request for resources and indicate the ASCII-character designated names of task data registers on which the task instruction operates. A processing element receiving the task instruction performs the operation indicated by the ASCII operator code on the indicated task data registers. | 11-29-2012 |
20120311597 | METHOD AND SYSTEM FOR INFINIBAND HOST CHANNEL ADAPTOR QUALITY OF SERVICE - A method for allocating resources of a host channel adapter includes the host channel adapter identifying an underlying function referenced in the first resource allocation request received from a virtual machine manager, determining that the first resource allocation request specifies a number of physical collect buffers (PCBs) allocated to the underlying function, allocating the number of PCBs to the underlying function, determining that the first resource allocation request specifies a number of virtual collect buffers (VCBs) allocated to the underlying function, and allocating the number of VCBs to the underlying function. The host channel adapter further receives command data for a command from the single virtual machine, determines that the underlying function has in use at least the number of PCBs when the command data is received, and drops the command data in the first command based on the underlying function having in use at least the number of PCBs. | 12-06-2012 |
20120311598 | RESOURCE ALLOCATION FOR A PLURALITY OF RESOURCES FOR A DUAL ACTIVITY SYSTEM - Exemplary method, system, and computer program product embodiments for resource allocation of a plurality of resources for a dual activity system by a processor device, are provided. In one embodiment, by way of example only, each of the activities may be started at a static quota. The resource boundary may be increased for a resource request for at least one of the dual activities until a resource request for an alternative one of the at least one of the dual activities is rejected. In response to the rejection of the resource request for the alternative one of the at least one of the dual activities, a resource boundary for the at least one of the dual activities may be reduced, and a wait after decrease mode may be commenced until a current resource usage is less than or equal to the reduced resource boundary. | 12-06-2012 |
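The quota scheme in the abstract above — static starting quotas, boundary growth until the other activity is rejected, then boundary reduction with a wait-after-decrease mode — can be modeled as a toy allocator. The total capacity, quota values, and reduction step are illustrative assumptions:

```python
class DualActivityAllocator:
    """Toy model of the dual-activity scheme.  Both activities start at
    a static quota; an activity's boundary grows on demand until the
    other activity's request is rejected, after which the grown
    boundary is cut back and that activity enters wait-after-decrease
    mode until its usage falls to the reduced boundary."""

    def __init__(self, total=100, static_quota=50, step=10):
        self.total = total
        self.step = step
        self.boundary = {"a": static_quota, "b": static_quota}
        self.usage = {"a": 0, "b": 0}
        self.waiting = {"a": False, "b": False}

    def request(self, act, amount):
        other = "b" if act == "a" else "a"
        if self.waiting[act]:
            return False                      # wait-after-decrease mode
        if self.usage[act] + amount <= self.boundary[act]:
            self.usage[act] += amount
            return True
        # Grow this activity's boundary into free capacity if possible.
        if self.usage["a"] + self.usage["b"] + amount <= self.total:
            self.boundary[act] = self.usage[act] + amount
            self.usage[act] += amount
            return True
        # Rejection: reduce the other (grown) activity's boundary and
        # make it wait until its usage drops to the reduced boundary.
        self.boundary[other] = max(0, self.boundary[other] - self.step)
        self.waiting[other] = self.usage[other] > self.boundary[other]
        return False

    def release(self, act, amount):
        self.usage[act] -= amount
        if self.usage[act] <= self.boundary[act]:
            self.waiting[act] = False         # usage back under boundary

alloc = DualActivityAllocator()
ok1 = alloc.request("a", 50)   # fits the static quota
ok2 = alloc.request("a", 30)   # boundary for "a" grows to 80
ok3 = alloc.request("b", 60)   # rejected; "a" boundary cut to 70, "a" waits
```

The wait-after-decrease mode prevents the activity that over-expanded from grabbing more resources until it has actually shrunk back under its reduced boundary.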
20120311599 | INFORMATION PROCESSOR AND INFORMATION PROCESSING METHOD - According to one embodiment, an information processor includes processors of a plurality of types and a processing assignment module. The processing assignment module sequentially assigns basic modules to the processors if the processors are available based on the types of the processors. The type of a processor to which processing of each of the basic modules is preferentially assigned is specified in advance. | 12-06-2012 |
20120311600 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - A workload that can be processed with a resource amount available in a physical server is estimated. An information processing apparatus 20 includes a performance information storage unit 25 that stores information indicating each of plural types of workloads and a resource amount of the physical server allocated to each of the workloads when the workloads are run in a physical server 30, in a manner to be associated with each other, an acquiring unit 21 that acquires a resource amount available in the physical server 30, a comparison unit 22 that selects at least one stored workload, and compares the available resource amount with the resource amount associated with the selected workload, and a first extraction unit 23 that extracts the selected workload if the compared resource amount is less than or equal to the available resource amount. | 12-06-2012 |
20120311601 | Method and apparatus for implementing task-process-table based hardware control - Disclosed is a method for implementing task-process-table based hardware control, which includes dividing a task that has to be implemented by a hardware circuit into multiple sub-processes, and determining the depth of the task process table according to the number of the sub-processes; according to the control information of the hardware unit corresponding to each sub-process and the number (SPAN) of clock cycles occupied by hardware processing for the sub-process, determining the bit width of the task process table and generating the task process table; starting the hardware unit corresponding to each sub-process in an order of the sub-processes, under the control of the control information in the task process table, and completing the processing of each sub-process. A device for implementing hardware control is also disclosed. The disclosure enables precise control of the hardware control flow and is versatile. For the hardware implementation of a task with a complex algorithm flow, the data processing flow is accurate, and the development efficiency is improved. | 12-06-2012
20120317578 | Scheduling Execution of Complementary Jobs Based on Resource Usage - The subject disclosure is directed towards executing jobs based on resource usage. When a plurality of jobs is received, one or more jobs are mapped to one or more other jobs based on which resources are fully utilized or overloaded. The utilization of these resources by the one or more jobs complements utilization of these resources by the one or more other jobs. The resources are partitioned at one or more servers in order to efficiently execute the one or more jobs and the one or more other jobs. The resources may be partitioned equally or proportionally based on the resource usage or priorities. | 12-13-2012 |
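The idea of mapping jobs to complementary jobs by their bottleneck resources can be sketched greedily. Assumptions (not from the abstract): each job reports its usage as a fraction of capacity per resource, and a pair is acceptable when the combined usage of the first job's bottleneck resource fits within capacity:

```python
def pair_complementary(jobs, capacity=1.0):
    """Greedy pairing sketch: match each job with the job that uses the
    least of the job's most-loaded (bottleneck) resource.
    `jobs` maps name -> {resource: fraction of capacity used}."""
    unpaired = dict(jobs)
    pairs = []
    while len(unpaired) >= 2:
        name, usage = unpaired.popitem()
        bottleneck = max(usage, key=usage.get)
        # best complement: minimal usage of this job's bottleneck resource
        best = min(unpaired, key=lambda n: unpaired[n].get(bottleneck, 0.0))
        if usage[bottleneck] + unpaired[best].get(bottleneck, 0.0) <= capacity:
            pairs.append((name, best))
            del unpaired[best]
        else:
            pairs.append((name, None))   # no complement fits: run alone
    if unpaired:
        pairs.append((unpaired.popitem()[0], None))
    return pairs
```

A CPU-bound job and an I/O-bound job end up co-scheduled, which is the complementary-utilization effect the abstract describes.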
20120317579 | SYSTEM AND METHOD FOR PERFORMING DISTRIBUTED PARALLEL PROCESSING TASKS IN A SPOT MARKET - As a result of the systems and methods described herein, an alternative MapReduce implementation is provided which monitors for impending termination notices, and allows dynamic checkpointing and storing of processed portions of a map task, such that any processing which is interrupted by large scale terminations of a plurality of computing devices—such as those resulting from spot market rate fluctuations—is preserved. | 12-13-2012 |
20120317580 | Apportioning Summarized Metrics Based on Unsummarized Metrics in a Computing System - A method for apportioning summarized metrics based on unsummarized metrics in a computing system includes receiving, by a memory device of the computing system, a log file, the log file comprising unsummarized metrics, the unsummarized metrics being related to a plurality of transactions performed by a program in the computing system, and a summarized metric, the summarized metric being related to the program, wherein the summarized metric comprises accumulated data from the plurality of transactions; selecting an unsummarized metric that reflects a distribution of the summarized metric among the plurality of transactions by a processing device of the computing system; and determining an amount of the summarized metric that belongs to a transaction of the plurality of transactions based on the selected unsummarized metric by the processing device of the computing system. | 12-13-2012 |
20120317581 | MANAGEMENT OF COPY SERVICES RELATIONSHIPS VIA POLICIES SPECIFIED ON RESOURCE GROUPS - At least one additional resource group attribute is defined to specify at least one policy prescribing a copy services relationship between two of the storage resources. Pursuant to a request to establish the copy services relationship between the two storage resources, the two storage resources exchange resource group labels corresponding to which of the plurality of resource groups the two storage resources are assigned, and each of the two storage resources validates the requested copy services relationship and the resource group label of the opposing storage resource against the individual ones of the at least one additional resource group attribute in the resource group object, to determine if the copy services relationship may proceed. | 12-13-2012
20120317582 | Composite Contention Aware Task Scheduling - A mechanism is provided for composite contention aware task scheduling. The mechanism performs task scheduling with shared resources in computer systems. A task is a group of instructions. A compute task is a group of compute instructions. A memory task, also referred to as a communication task, may be a group of load/store operations, for example. The mechanism performs composite contention-aware scheduling that considers the interaction among compute tasks, communication tasks, and application threads that include compute and communication tasks. The mechanism performs a composite of memory task throttling and application thread throttling. | 12-13-2012 |
20120317583 | HIGHLY RELIABLE AND SCALABLE ARCHITECTURE FOR DATA CENTERS - The present invention provides a highly reliable and scalable architecture for data centers. Work to be performed is divided into discrete work units. The work units are maintained in a pool of work units that may be processed by any number of different servers. A server may extract an eligible work unit and attempt to process it. If the processing of the work unit succeeds, the work unit is tagged as executed and becomes ineligible for other servers. If the server fails to execute the work unit for some reason, the work unit becomes eligible again and another server may extract and execute it. A server extracts and executes work units when they have available resources. This leads to the automatic load balancing of the data center. | 12-13-2012 |
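The eligible/executed work-unit pool lends itself to a compact sketch. The claim/complete protocol below is an assumption about how servers would interact with the pool; the abstract does not prescribe an API:

```python
import threading

class WorkPool:
    """Sketch of the eligible/ineligible work-unit pool: any server thread
    may claim an eligible unit; a failed attempt returns the unit to the
    pool, a successful one tags it as executed."""

    def __init__(self, work_units):
        self._lock = threading.Lock()
        self._state = {u: "eligible" for u in work_units}

    def claim(self):
        with self._lock:
            for unit, state in self._state.items():
                if state == "eligible":
                    self._state[unit] = "in_progress"  # ineligible for others
                    return unit
        return None

    def complete(self, unit, success):
        with self._lock:
            self._state[unit] = "executed" if success else "eligible"

    def pending(self):
        with self._lock:
            return [u for u, s in self._state.items() if s != "executed"]
```

Because servers only claim work when they have free resources, load balancing falls out of the pull model rather than being computed centrally, as the abstract notes.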
20120324464 | PRODUCT-SPECIFIC SYSTEM RESOURCE ALLOCATION WITHIN A SINGLE OPERATING SYSTEM INSTANCE - Resource constraints for a group of individual application products to be configured for shared resource usage of at least one shared resource within a single operating system instance are analyzed by a resource allocation module. An individual resource allocation for each of the group of individual application products is determined based upon the analyzed resource constraints for the group of individual application products. The determined individual resource allocation for each of the group of individual application products is implemented within the single operating system instance using local inter-product message communication bindings by the single operating system instance. | 12-20-2012 |
20120324465 | WORK ITEM PROCESSING IN DISTRIBUTED APPLICATIONS - A system for organizing messages related to tasks in a distributed application is disclosed. The system includes a work-list creator to create a work list of the top-level work items to be accomplished in performing a task. Work-item processors are distributed in the system. The work-item processors process the top-level work item included in a task and also append additional work items to the work list. A work-list scheduler invokes the work-item processors so local work-item processors are invoked prior to remote work-item processors. | 12-20-2012 |
20120324466 | Scheduling Execution Requests to Allow Partial Results - The subject disclosure is directed towards scheduling requests using quality values that are defined for partial responses to the requests. For each request in a queue, an associated processing time is determined using a system load and/or the quality values. The associated processing time is less than or equal to a service demand, which represents an amount of time to produce a complete response. | 12-20-2012 |
20120324467 | COMPUTING JOB MANAGEMENT BASED ON PRIORITY AND QUOTA - In one embodiment, the invention provides a method of managing a computing job based on a job priority and a submitter quota, the method including determining whether a declared priority of a computing job exceeds a predetermined declared priority quota of a submitter; in the case that the declared priority of the computing job exceeds the predetermined declared priority of the submitter, substituting a reduced priority for the declared priority of the computing job; determining whether the reduced priority of the computing job exceeds a predetermined reduced priority quota for the submitter; and in the case that the reduced priority of the computing job does not exceed the predetermined reduced priority quota of the submitter, assigning the computing job to at least one computer resource at the reduced priority. | 12-20-2012 |
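The two quota checks in this abstract reduce to a small decision function. The particular choice of reduced priority (clamping to the submitter's declared-priority quota) is an assumption; the abstract only says that a reduced priority is substituted:

```python
def admit_job(declared_priority, declared_quota, reduced_quota):
    """Sketch of the priority/quota check. Returns the priority at which
    the job is assigned to a resource, or None if it cannot be admitted.
    Higher numbers mean higher priority."""
    if declared_priority <= declared_quota:
        return declared_priority              # within the submitter's quota
    reduced_priority = declared_quota         # assumed reduction policy
    if reduced_priority <= reduced_quota:
        return reduced_priority               # admitted at reduced priority
    return None
```

The third outcome (refusal) is not spelled out in the abstract; it is the natural complement of the two cases that are.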
20120324468 | PRODUCT-SPECIFIC SYSTEM RESOURCE ALLOCATION WITHIN A SINGLE OPERATING SYSTEM INSTANCE - Resource constraints for a group of individual application products to be configured for shared resource usage of at least one shared resource within a single operating system instance are analyzed by a resource allocation module. An individual resource allocation for each of the group of individual application products is determined based upon the analyzed resource constraints for the group of individual application products. The determined individual resource allocation for each of the group of individual application products is implemented within the single operating system instance using local inter-product message communication bindings by the single operating system instance. | 12-20-2012 |
20120324469 | RESOURCE ALLOCATION APPARATUS, RESOURCE ALLOCATION METHOD, AND COMPUTER READABLE MEDIUM - A parameter determination unit | 12-20-2012 |
20120324470 | SYSTEM AND METHOD FOR DYNAMIC RESCHEDULING OF MULTIPLE VARYING RESOURCES WITH USER SOCIAL MAPPING - A system and method for scheduling resources includes a memory storage device having a resource data structure stored therein which is configured to store a collection of available resources, time slots for employing the resources, dependencies between the available resources and social map information. A processing system is configured to set up a communication channel between users, between a resource owner and a user or between resource owners to schedule users in the time slots for the available resources. The processing system employs social mapping information of the users or owners to assist in filtering the users and owners and initiating negotiations for the available resources. | 12-20-2012 |
20120331475 | DYNAMICALLY ALLOCATED THREAD-LOCAL STORAGE - Dynamically allocated thread storage in a computing device is disclosed. The dynamically allocated thread storage is configured to work with a process including two or more threads. Each thread includes a statically allocated thread-local slot configured to store a table. Each table is configured to include a table slot corresponding with a dynamically allocated thread-local value. A dynamically allocated thread-local instance corresponds with the table slot. | 12-27-2012 |
20120331476 | METHOD AND SYSTEM FOR REACTIVE SCHEDULING - A method and system of scheduling demands on a system having a plurality of resources are provided. The method includes the steps of, on receipt of a new demand for resources: determining the total resources required to complete said demand and a deadline for the completion of that demand; determining a plurality of alternative resource allocations which will allow completion of the demand before the deadline; for each of said alternative resource allocations, determining whether, based on allocations of resources to existing demands, said alternative resource allocation will result in a utilization of resources which is closer to an optimum utilization of said resources; and selecting, based on said determination, one of said alternative resource allocations to complete said demand so as to optimise utilisation of resources of the system. | 12-27-2012 |
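Selecting, among deadline-feasible alternatives, the allocation whose resulting utilization is closest to an optimum can be sketched directly. The target utilization value and the additive utilization model are assumptions; the abstract leaves both open:

```python
def pick_allocation(alternatives, existing_load, capacity, target_util=0.8):
    """Among alternative resource allocations (each an amount that meets
    the deadline), pick the one whose resulting utilization is closest to
    the assumed optimum `target_util`."""
    def resulting_util(alloc):
        return (existing_load + alloc) / capacity
    return min(alternatives, key=lambda a: abs(resulting_util(a) - target_util))
```

The key point from the abstract survives: the chosen allocation depends on the allocations already made to existing demands, not on the new demand alone.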
20120331477 | SYSTEM AND METHOD FOR DYNAMICALLY ALLOCATING HIGH-QUALITY AND LOW-QUALITY FACILITY ASSETS AT THE DATACENTER LEVEL - A system and method are disclosed for dynamically allocating high-quality and low-quality facility assets at the datacenter level. The system and method provide an actuator with information on priorities of information technology (IT) workloads. The actuator ranks the IT workloads according to their priorities, monitors an amount of resources the IT workloads demand, and tracks total capacities of facility assets in the datacenter. The facility assets include high-quality facility assets and low-quality facility assets. According to the direction of the actuator, a distribution mechanism dynamically switches lower priority IT workloads from the high-quality facility assets to the low-quality facility assets when the high-quality facility assets are overburdened. | 12-27-2012 |
20120331478 | METHOD AND DEVICE FOR PROCESSING INTER-SUBFRAME SERVICE LOAD BALANCING AND PROCESSING INTER-CELL INTERFERENCE - The present application provides a method and device for processing inter-subframe service load balancing and processing inter-cell interference, which includes: when processing the inter-subframe service load balancing, determining a service load of a link in a time period; determining a resource utilization ratio threshold according to the service load; and transmitting service data in each subframe according to the utilization ratio threshold. The inter-subframe service load balancing is processed when the inter-cell interference is processed, and, in combination with various inter-cell interference coordination technologies, interference mitigation in the frequency domain, in power, in the space domain, or in a combination thereof is performed. The present application mitigates the problem that interference mitigation is ineffective because load information cannot adapt to the dynamic change of the inter-subframe service load in a time division duplex system; it can further mitigate inter-cell interference in a long term evolution system, and improve the overall throughput of the system and the quality of service for subscribers. | 12-27-2012
20130007759 | UNIFIED, WORKLOAD-OPTIMIZED, ADAPTIVE RAS FOR HYBRID SYSTEMS - A method, system, and computer program product for maintaining reliability in a computer system. In an example embodiment, the method includes performing a first data computation by a first set of processors, the first set of processors having a first computer processor architecture. The method continues by performing a second data computation by a second processor coupled to the first set of processors, the second processor having a second computer processor architecture, the first computer processor architecture being different than the second computer processor architecture. Finally, the method includes dynamically allocating computational resources of the first set of processors and the second processor based on at least one metric while the first set of processors and the second processor are in operation such that the accuracy and processing speed of the first data computation and the second data computation are optimized. | 01-03-2013 |
20130007760 | Managing Organizational Computing Resources in Accordance with Computing Environment Entitlement Contracts - Mechanisms for reserving computing resources of a data processing system are provided. These mechanisms generate one or more computing environment entitlement contract (CEEC) data structures, each CEEC data structure defining terms of a business level agreement between a contracting party and a provider of the data processing system. These mechanisms associate a set of computing resources with a CEEC data structure. The mechanisms then manage the set of one or more computing resources in accordance with the associated CEEC. Such management includes invalidating or nullifying the CEEC data structure in response to the contracting party failing to utilize the computing resources in the selected computing resource cohort for a specified purpose, at approximately a specified level and pattern of intensity, during approximately a specified period of time, all of which are identified in the CEEC data structure. | 01-03-2013
20130007761 | Managing Computing Environment Entitlement Contracts and Associated Resources Using Cohorting - Mechanisms are provided for managing computing resources relative to a computing environment entitlement contract. These mechanisms generate one or more computing environment entitlement contract (CEEC) data structures, each CEEC data structure defining terms of a business level agreement between a contracting party and a provider of the data processing system. A CEEC cohort is generated comprising a collection of CEECs having similar terms. Utilization of a collection of computing resources in accordance with the similar terms of the collection of CEECs is monitored to identify a usage pattern within the CEEC cohort. Membership of a CEEC in the CEEC cohort based on the identified usage pattern is modified based on the monitored utilization. | 01-03-2013 |
20130007762 | PROCESSING WORKLOADS USING A PROCESSOR HIERARCHY SYSTEM - Workload processing is facilitated by use of a processor hierarchy system. The processor hierarchy system includes a plurality of processor hierarchies, each including one or more processors (e.g., accelerators). Each processor hierarchy has associated therewith a set of characteristics that define the processor hierarchy, and the processors of the hierarchy also have a set of characteristics associated therewith. Workloads are assigned to processors of processor hierarchies depending on characteristics of the workload, characteristics of the processor hierarchies and/or characteristics of the processors. | 01-03-2013 |
20130007763 | GENERATING METHOD, SCHEDULING METHOD, COMPUTER PRODUCT, GENERATING APPARATUS, AND INFORMATION PROCESSING APPARATUS - A generating method is executed by a processor. The method includes executing simulation using a simulation model expressing a processor model, a memory model to which the processor model is accessible, and a load source that accesses the memory model according to an access contention rate, to obtain an index value for performance of the processor model, for each access contention rate; and saving to a memory area and as contention characteristics information, the index value for each access contention rate. | 01-03-2013 |
20130014118 | SIMULTANEOUS SUBMISSION TO A MULTI-PRODUCER QUEUE BY MULTIPLE THREADS - One embodiment of the present invention sets forth a technique for ensuring that multiple producer threads may simultaneously write entries in a shared queue and one or more consumers may read valid data from the shared queue. Additionally, writing of the shared queue by the multiple producer threads may occur in parallel and the one or more consumer threads may read the shared queue while the producer threads write the shared queue. A “wait-free” mechanism allows any producer thread that writes a shared queue to advance an inner pointer that is used by a consumer thread to read valid data from the shared queue. | 01-10-2013 |
20130014119 | Resource Allocation Prioritization Based on Knowledge of User Intent and Process Independence - A method and system to improve performance of a computer system is disclosed. One aspect of certain embodiments includes selectively deallocating or allocating computer resources to a set of computer programs associated with the computer system. | 01-10-2013 |
20130014120 | Fair Software Locking Across a Non-Coherent Interconnect - Access to a shared resource by a plurality of execution units is organized and controlled by issuing tickets to each execution unit as they request access to the resource. The tickets are issued by a hardware atomic unit so that each execution unit receives a unique ticket number. A current owner field indicates the ticket number of the execution unit that currently has access to the shared resource. When an execution unit has completed its access, it releases the shared resource and increments the owner field. Execution units awaiting access to the shared resource periodically check the current value of the owner field and take control of the shared resource when their respective ticket values match the owner field. | 01-10-2013 |
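The ticket scheme described here is the classic ticket lock. A minimal single-process Python sketch follows; the patent's hardware atomic unit is stood in for by a small mutex, and the non-coherent interconnect is not modeled:

```python
import threading

class TicketLock:
    """Sketch of the ticket scheme: unique tickets are issued in order,
    an owner field names the ticket currently allowed in, and release
    increments the owner field to hand the resource to the next ticket."""

    def __init__(self):
        self._issue = threading.Lock()   # stands in for the atomic unit
        self._next_ticket = 0
        self.owner = 0                   # ticket number currently admitted

    def acquire(self):
        with self._issue:
            my_ticket = self._next_ticket
            self._next_ticket += 1       # each caller gets a unique ticket
        while self.owner != my_ticket:   # spin until our ticket is called
            pass
        return my_ticket

    def release(self):
        self.owner += 1                  # admit the next waiting ticket
```

Fairness is the point of the design: admission is strictly in ticket order, so no execution unit can starve behind later arrivals.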
20130014121 | METHOD AND SYSTEM FOR COMMUNICATING BETWEEN ISOLATION ENVIRONMENTS - A method and system for aggregating installation scopes within an isolation environment, where the method includes first defining an isolation environment for encompassing an aggregation of installation scopes. Associations are created between a first application and a first installation scope. When the first application requires the presence of a second application within the isolation environment for proper execution, an image of the required second application is mounted onto a second installation scope and an association between the second application and the second installation scope is created. Another association is created between the first installation scope and the second installation scope, and this third association is created within a third installation scope. Each of the first, second, and third installation scopes are stored and the first application is launched into the defined isolation environment. | 01-10-2013
20130014122 | METHOD AND SYSTEM FOR COMMUNICATING BETWEEN ISOLATION ENVIRONMENTS - A method and system for aggregating installation scopes within an isolation environment, where the method includes first defining an isolation environment for encompassing an aggregation of installation scopes. Associations are created between a first application and a first installation scope. When the first application requires the presence of a second application within the isolation environment for proper execution, an image of the required second application is mounted onto a second installation scope and an association between the second application and the second installation scope is created. Another association is created between the first installation scope and the second installation scope, and this third association is created within a third installation scope. Each of the first, second, and third installation scopes are stored and the first application is launched into the defined isolation environment. | 01-10-2013
20130014123 | DETERMINATION OF RUNNING STATUS OF LOGICAL PROCESSOR - An embodiment provides for operating an information processing system. An aspect of the invention includes allocating an execution interval to a first logical processor of a plurality of logical processors of the information processing system. The execution interval is allocated for use by the first logical processor in executing instructions on a physical processor of the information processing system. The first logical processor determines that a resource required for execution by the first logical processor is locked by another one of the other logical processors. An instruction is issued by the first logical processor to determine whether the lock-holding logical processor is currently running. The first logical processor waits for the lock to be released if the lock-holding logical processor is currently running. A command is issued by the first logical processor to a super-privileged process for relinquishing the allocated execution interval if the lock-holding logical processor is not running. | 01-10-2013
20130014124 | REDUCING CROSS QUEUE SYNCHRONIZATION ON SYSTEMS WITH LOW MEMORY LATENCY ACROSS DISTRIBUTED PROCESSING NODES - A method for efficient dispatch/completion of a work element within a multi-node data processing system. The method comprises: selecting specific processing units from among the processing nodes to complete execution of a work element that has multiple individual work items that may be independently executed by different ones of the processing units; generating an allocated processor unit (APU) bit mask that identifies at least one of the processing units that has been selected; placing the work element in a first entry of a global command queue (GCQ); associating the APU mask with the work element in the GCQ; and responsive to receipt at the GCQ of work requests from each of the multiple processing nodes or the processing units, enabling only the selected specific ones of the processing nodes or the processing units to be able to retrieve work from the work element in the GCQ. | 01-10-2013 |
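The allocated-processor-unit (APU) bit mask described above is a plain bit set: bit i set means unit i may retrieve work from the work element in the global command queue. A sketch (function names are illustrative, not from the patent):

```python
def make_apu_mask(selected_units):
    """Build the APU bit mask: set bit i for each selected processing
    unit i that may pull work items from the work element."""
    mask = 0
    for unit in selected_units:
        mask |= 1 << unit
    return mask

def may_retrieve(mask, unit):
    """Gate a work request: only units whose bit is set in the mask
    associated with the GCQ entry are allowed to retrieve work."""
    return bool(mask & (1 << unit))
```

Checking a requester against the mask is a single AND, which is what makes this gating cheap enough to run on every work request.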
20130019248 | METHOD AND APPARATUS FOR MONITORING AND SHARING PERFORMANCE RESOURCES OF A PROCESSOR (inventor: Lei Yu, Austin, TX, US) - A method and apparatus are described for managing a plurality of performance monitoring resources residing in a plurality of cores of a processor. A plurality of resource queues are maintained. Each resource queue corresponds to a particular one of the performance monitoring resources, and detects conflicts in use of the particular performance monitoring resource by multiple users. The detected conflicts associated with the particular performance monitoring resource are then resolved. A dynamic resource scheduler is used to resolve the detected conflicts, and is driven by an advanced programmable interrupt controller (APIC) timer residing in a particular core of the processor to provide each item, in an items list of a resource queue associated with the particular performance monitoring resource, an equal opportunity to use the particular performance monitoring resource for a predetermined period of time. | 01-17-2013
20130019249 | System and Method For Managing Resources of A Portable Computing Device - A method and system for managing resources of a portable computing device is disclosed. The method includes receiving node structure data for forming a node, in which the node structure data includes a unique name assigned to each resource of the node. A node has at least one resource and it may have multiple resources. Each resource may be a hardware or software element. The system includes a framework manger which handles the communications between existing nodes within a node architecture. The framework manager also logs activity of each resource by using its unique name. The framework manager may send this logged activity to an output device, such as a printer or a display screen. The method and system may help reduce or eliminate a need for customized APIs when a new hardware or software element (or both) are added to a portable computing device. | 01-17-2013 |
20130024866 | Topology Mapping In A Distributed Processing System - Topology mapping in a distributed processing system, the distributed processing system including a plurality of compute nodes, each compute node having a plurality of tasks, each task assigned a unique rank, including: assigning each task to a geometry defining the resources available to the task; selecting, from a list of possible data communications algorithms, one or more algorithms configured for the assigned geometry; and identifying, by each task to all other tasks, the selected data communications algorithms of each task in a single collective operation. | 01-24-2013 |
20130024867 | Resource allocation using a library with entitlement - An entitlement vector may be used when selecting a thread for execution in a multi-threading environment, with respect to aspects such as priority. An embodiment or embodiments of an information handling apparatus can comprise a library comprising a plurality of functions and components operable to handle a plurality of objects. The information handling apparatus can further comprise an entitlement vector operable to assign entitlement to at least one of a plurality of resources to selected ones of the plurality of functions and components. | 01-24-2013
20130024868 | APPARATUS AND METHOD FOR ALLOCATING A TASK - A task allocating apparatus capable of improving task processing performance is provided. The task allocating apparatus measures a core usage of a plurality of tasks that are run in multiple cores, according to predetermined periods, estimates a core usage of each task for a following period based on the measured core usages, and allocates one or more tasks to the multiple cores based on the estimated core usage. | 01-24-2013 |
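Estimating next-period core usage from measurements and then placing tasks accordingly can be sketched as follows. Both the exponentially weighted estimate and the greedy least-loaded placement are assumptions; the abstract does not specify either method:

```python
def estimate_next(history, alpha=0.5):
    """Exponentially weighted estimate of a task's next-period usage
    from its measured per-period usages (EWMA is an assumed estimator)."""
    est = history[0]
    for sample in history[1:]:
        est = alpha * sample + (1 - alpha) * est
    return est

def allocate(tasks, n_cores):
    """Greedy placement: assign tasks (estimated usage in 0..1), largest
    first, to the currently least-loaded core."""
    load = [0.0] * n_cores
    placement = {}
    for name, usage in sorted(tasks.items(), key=lambda kv: -kv[1]):
        core = load.index(min(load))    # least-loaded core so far
        placement[name] = core
        load[core] += usage
    return placement, load
```

Re-running both steps every measurement period gives the periodic re-allocation behaviour the abstract describes.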
20130024869 | Picture loading method and terminal - The disclosure provides a picture loading method and a terminal. The method includes determining a number of pictures that can be loaded according to an available memory space, acquiring, from a plurality of pictures, the determined number of pictures beginning at a starting position, and assigning resources for the acquired pictures and preloading the acquired pictures using the resources. The disclosure also provides a picture loading terminal. | 01-24-2013 |
20130024870 | MULTICORE SYSTEM AND ACTIVATING METHOD - A multicore system includes multiple processor cores; a scheduler in each of the processor cores that allocates a process to the processor cores when it holds the master authority, that is, the authority to assign processes; and a master controller that repeats, until no process to be executed remains, a cycle in which a scheduler transfers the master authority to another processor core after receiving it and before assigning a process to the processor cores, discards the master authority after assigning the process to the processor cores, and enters a state of waiting to receive the master authority. | 01-24-2013
20130031559 | METHOD AND APPARATUS FOR ASSIGNMENT OF VIRTUAL RESOURCES WITHIN A CLOUD ENVIRONMENT - A virtual resource assignment capability is disclosed. The virtual resource assignment capability is configured to support provisioning of virtual resources within a cloud environment. The provisioning of virtual resources within a cloud environment includes receiving a user virtual resource request requesting provisioning of virtual resources within the cloud environment, determining virtual resource assignment information specifying assignment of virtual resources within the cloud environment, and provisioning the virtual resources within the cloud environment using the virtual resource assignment information. The assignment of the requested virtual resources within the cloud environment includes assignment of the virtual resource to datacenters of the cloud environment in which the virtual resources will be hosted and, more specifically, to the physical resources within the datacenters of the cloud environment in which the virtual resources will be hosted. The virtual resources may include virtual processor resources, virtual memory resources, and the like. The physical resources may include processor resources, storage resources, and the like (e.g., physical resources of blade servers of racks of datacenters of the cloud environment). | 01-31-2013 |
20130031560 | Batching and Forking Resource Requests In A Portable Computing Device - In a portable computing device having a node-based resource architecture, resource requests are batched or otherwise transactionized to help minimize inter-processing entity messaging or other messaging or provide other benefits. In a resource graph defining the architecture, each node or resource of the graph represents an encapsulation of functionality of one or more resources controlled by a processor or other processing entity, each edge represents a client request, and adjacent nodes of the graph represent resource dependencies. A single transaction of resource requests may be provided against two or more of the resources. Additionally, this single transaction may become forked so that parallel processing among a client issuing the single transaction and the resources handling the requests of the single transaction may occur. | 01-31-2013 |
20130031561 | Scheduling Flows in a Multi-Platform Cluster Environment - Techniques for scheduling multiple flows in a multi-platform cluster environment are provided. The techniques include partitioning a cluster into one or more platform containers associated with one or more platforms in the cluster, scheduling one or more flows in each of the one or more platform containers, wherein the one or more flows are created as one or more flow containers, scheduling one or more individual jobs into the one or more flow containers to create a moldable schedule of one or more jobs, flows and platforms, and automatically converting the moldable schedule into a malleable schedule. | 01-31-2013 |
20130036424 | RESOURCE ALLOCATION IN PARTIAL FAULT TOLERANT APPLICATIONS - A method for allocating a set of components of an application to a set of resource groups includes the following steps performed by a computer system. The set of resource groups is ordered based on respective failure measures and resource capacities associated with the resource groups. An importance value is assigned to each of the components. The importance value is associated with an affect of the component on an output of the application. The components are assigned to the resource groups based on the importance value of each component and the respective failure measures and resource capacities associated with the resource groups. The components with higher importance values are assigned to resource groups with lower failure measures and higher resource capacities. The application may be a partial fault tolerant (PFT) application that comprises PFT application components. The resource groups may comprise a heterogeneous set of resource groups (or clusters). | 02-07-2013 |
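The greedy placement the entry above describes — components with higher importance values go to resource groups with lower failure measures and higher capacities — can be sketched as follows. The dictionary fields and the one-unit-per-component capacity model are illustrative assumptions, not the patented method.

```python
def assign_components(components, groups):
    """Greedy sketch: components with higher importance are placed in
    resource groups with lower failure measures and higher capacities.
    Assumes each component consumes one unit of group capacity and that
    total capacity covers all components."""
    # Order groups: most reliable (lowest failure), largest capacity first.
    ordered_groups = sorted(groups, key=lambda g: (g["failure"], -g["capacity"]))
    # Order components: most important first.
    ordered_components = sorted(components, key=lambda c: -c["importance"])
    assignment, gi = {}, 0
    for comp in ordered_components:
        while ordered_groups[gi]["capacity"] <= 0:
            gi += 1  # current group is full; move to the next-best group
        assignment[comp["name"]] = ordered_groups[gi]["name"]
        ordered_groups[gi]["capacity"] -= 1
    return assignment
```

With two groups and three components, the most important component lands in the most reliable group and the rest spill into the next one.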
20130042252 | Processing resource allocation within an integrated circuit - An integrated circuit | 02-14-2013 |
20130042253 | RESOURCE MANAGEMENT SYSTEM, RESOURCE MANAGEMENT METHOD, AND RESOURCE MANAGEMENT PROGRAM - A resource management system is provided which calculates a safety rate in such a manner that the amount of resources satisfying an SLA does not become excessive. Excess rate calculation means | 02-14-2013 |
20130047163 | Systems and Methods for Detecting and Tolerating Atomicity Violations Between Concurrent Code Blocks - The system and methods described herein may be used to detect and tolerate atomicity violations between concurrent code blocks and/or to generate code that is executable to detect and tolerate such violations. A compiler may transform program code in which the potential for atomicity violations exists into alternate code that tolerates these potential violations. For example, the compiler may inflate critical sections, transform non-critical sections into critical sections, or coalesce multiple critical sections into a single critical section. The techniques described herein may utilize an auxiliary lock state for locks on critical sections to enable detection of atomicity violations in program code by enabling the system to distinguish between program points at which lock acquisition and release operations appeared in the original program, and the points at which these operations actually occur when executing the transformed program code. Filtering and analysis techniques may reduce false positives induced by the transformations. | 02-21-2013 |
20130047164 | METHOD OF SCHEDULING JOBS AND INFORMATION PROCESSING APPARATUS IMPLEMENTING SAME - A computer produces a first schedule of jobs including ongoing jobs and pending jobs which is to cause a plurality of computing resources to execute the pending jobs while preventing suspension of the ongoing jobs running on the computing resources. The computer also produces a second schedule of the jobs which allows the ongoing jobs to be suspended and rescheduled to cause the computing resources to execute the suspended jobs and pending jobs. Based on the produced first and second schedules, the computer calculates an advantage factor representing advantages to be obtained by suspending jobs, as well as a loss factor representing losses to be caused by suspending jobs. The computer chooses either the first schedule or the second schedule, based on a comparison between the advantage factor and loss factor. | 02-21-2013 |
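A minimal sketch of the advantage/loss comparison above, assuming the advantage factor is the makespan reduction of the second (suspend-and-reschedule) schedule and the loss factor is proportional to the number of jobs it suspends. Both formulas are illustrative assumptions, not the patented metrics.

```python
def choose_schedule(first, second, loss_per_suspension):
    """Sketch: pick the suspend-and-reschedule schedule only when its
    advantage (makespan saved) outweighs the loss of suspending jobs."""
    advantage = first["makespan"] - second["makespan"]
    loss = second["suspended_jobs"] * loss_per_suspension
    return "second" if advantage > loss else "first"
```

Raising the per-suspension loss flips the decision back to the non-disruptive first schedule.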
20130055277 | Logical Partition Load Manager and Balancer - A mechanism is provided in a data processing system for managing and balancing load in multiple managed systems in a logical partitioning data processing system. Responsive to a critical logical partition requiring additional resources, the mechanism determines whether one or more managed systems have available resources to satisfy resource requirements of the critical partition. The mechanism performs at least one partition migration operation to move at least one logical partition between managed systems responsive to determining that one or more managed systems have available resources to satisfy resource requirements of the critical partition. The mechanism performs at least one dynamic logical partitioning operation to allocate resources to at least one of the one or more critical logical partitions responsive to performing the at least one partition migration operation. | 02-28-2013 |
20130055278 | EFFICIENT MANAGEMENT OF COMPUTER RESOURCES - System, method, and computer-readable medium for managing removal of unused objects on a subject computer system that includes a plurality of computing resources. Current configuration and operational state information of a subject computer system are analyzed to detect a presence of unused objects on the subject computer system. An estimated degree of impact that unused objects have on the workload of at least one computing resource of the plurality of computing resources is obtained. A measure of the exigency of taking action to remove the unused objects is determined based on the estimated degree of impact and on the current degree of workload of the at least one computing resource. Instructions are generated for removing specific ones of the unused objects for which the exigency of taking action is sufficiently great. | 02-28-2013 |
20130055279 | RESOURCE ALLOCATION TREE - A resource credit tree for resource management includes leaf nodes and non-leaf nodes. The non-leaf nodes include a root node and internal nodes. Resource management includes initializing an operation corresponding to a resource pool, selecting, using a hash function, a leaf node of a resource credit tree, and identifying a number of available credits of the leaf node. Resource management may further include traversing, using a backward traversal path, from the leaf node to a non-leaf node based on determining that the number of available credits is less than a required number of credits or determining that capacity of the leaf node is less than the summation of the number of credits to free to the resource credit tree and the number of available credits. Resource management may further include allocating and freeing credits from and to the resource credit tree. | 02-28-2013 |
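The credit-tree idea above can be illustrated with a toy two-level model: a hash selects a leaf, and a leaf that lacks credits borrows from a root pool (standing in for the backward traversal). The class, its names, and the borrow policy are illustrative assumptions, not the patented design.

```python
class CreditTree:
    """Toy sketch of a resource credit tree with one root and N leaves.
    Allocation hashes to a leaf; an underfunded leaf falls back to the
    root pool, approximating the backward traversal described above."""
    def __init__(self, leaves, credits_per_leaf, root_credits):
        self.leaves = [credits_per_leaf] * leaves
        self.root = root_credits

    def allocate(self, key, needed):
        i = hash(key) % len(self.leaves)  # hash function selects a leaf
        if self.leaves[i] >= needed:
            self.leaves[i] -= needed
            return True
        # Backward traversal (sketched): cover the shortfall from the root.
        shortfall = needed - self.leaves[i]
        if self.root >= shortfall:
            self.root -= shortfall
            self.leaves[i] = 0
            return True
        return False  # not enough credits anywhere in the tree

    def free(self, key, count):
        i = hash(key) % len(self.leaves)
        self.leaves[i] += count  # capacity checks omitted in this sketch
```

Integer keys are used below because Python's small-integer hashes are stable, unlike randomized string hashes.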
20130055280 | QUALITY OF SERVICE AWARE CAPTIVE AGGREGATION WITH TRUE DATACENTER TESTING - Technologies are generally described for efficient datacenter management. In some examples, client jobs on a datacenter are subcontracted out to another datacenter so that the subcontracted jobs can be pulled back to the original datacenter when capacity becomes available again. A captive aggregator module combining network message, command management, data analysis on QoS effects of traffic relay, and strategic management decides whether it makes sense to keep tasks captively aggregated. A middleware system for testing true performance of an application or surrogate and optimizing among multiple datacenters based on true performance evaluates candidate datacenters prior to subcontracting jobs from the original datacenter. The middleware deploys the application to multiple candidate clouds to perform substantially similar tasks on resources that claim to be roughly equivalent in price/performance. The middleware receives data on actual cost and resource consumption, analyzes differences, and redistributes actual work tasks to take advantage of differences. | 02-28-2013 |
20130055281 | INFORMATION PROCESSING APPARATUS AND SCHEDULING METHOD - An information processing apparatus includes a plurality of core sections, an uncore section, and a scheduler. The plurality of core sections correspond to processor cores in a multi-core processor. The uncore section is a resource shared by the core sections. The scheduler controls execution timing for a first process so as to make an unused core section execute the first process in a period in which a second process other than the first process is executed by a part of the plurality of core sections. Controlling the execution timing for the first process is permitted. | 02-28-2013 |
20130055282 | TASK MANAGEMENT METHOD FOR EMBEDDED SYSTEMS - The present invention relates to a task management method and, in particular, to a task management method for efficiently managing tasks in resource-constrained embedded systems where a Memory Management Unit does not exist. In accordance with an aspect of the present invention, a task management method for an embedded system comprises designing task management according to a short term task and a long term task; managing stack spaces for the short term task and the long term task; scheduling a time and a space for the short term task and the long term task; performing adaptive stack management based on stack profiling; providing a uniform programming model for the short term task and the long term task; and implementing the short term task and the long term task. | 02-28-2013 |
20130055283 | Workload Performance Control - Methods to provide workload performance control are described herein. Performance statistics for a plurality of workloads are obtained for a second time interval, which includes a plurality of first time intervals. The performance statistics are based on monitored data. | 02-28-2013 |
20130061235 | METHOD AND SYSTEM FOR MANAGING PARALLEL RESOURCE REQUESTS IN A PORTABLE COMPUTING DEVICE - A method and system for managing parallel resource requests in a portable computing device (“PCD”) are described. The system and method includes generating a first request from a first client, the first request issued in the context of a first execution thread. The first request may be forwarded to a resource. The resource may acknowledge the first request and initiate asynchronous processing. The resource may process the first request while allowing the first client to continue processing in the first execution thread. The resource may signal completion of the processing of the first request and may receive a second request. The second request causes completion of the processing of the first request. The completion of the processing of the first request may include updating a local representation of the resource to a new state and invoking any registered callbacks. The resource may become available to service the second request, and may process the second request. | 03-07-2013 |
20130061236 | SYSTEM AND METHOD FOR REDUCING POWER REQUIREMENTS OF MICROPROCESSORS THROUGH DYNAMIC ALLOCATION OF DATAPATH RESOURCES - There is provided a system and methods for segmenting datapath resources such as reorder buffers, physical registers, instruction queues and load-store queues, etc. in a microprocessor so that their size may be dynamically expanded and contracted. This is accomplished by allocating and deallocating individual resource units to each resource based on sampled estimates of the instantaneous resource needs of the program running on the microprocessor. By keeping unused datapath resources to a minimum, power and energy savings are achieved by shutting off resource units that are not needed for sustaining the performance requirements of the running program. Leakage energy and switching energy and power are reduced using the described methods. | 03-07-2013 |
20130067485 | Method And Apparatus For Providing Isolated Virtual Space - Various embodiments provide a method and apparatus of creating an application isolated virtual space without the need to run multiple OSs. Application isolated virtual spaces are created by an Operating System (OS) utilizing a resource manager. The resource manager isolates applications from each other by re-writing the network stack and the I/O subsystem of the conventional OS kernel to have multiple isolated network stack/virtual I/O views of the physical resources managed by the OS. Isolated network stacks and virtual I/O views identify the resources allocated to an application's isolated virtual space and are mapped to applications via an isolating identifier. | 03-14-2013 |
20130074090 | DYNAMIC OPERATING SYSTEM OPTIMIZATION IN PARALLEL COMPUTING - A method for dynamic optimization of thread assignments for application workloads in an simultaneous multi-threading (SMT) computing environment includes monitoring and periodically recording an operational status of different processor cores each supporting a number of threads of the thread pool of the SMT computing environment and also operational characteristics of different workloads of a computing application executing in the SMT computing environment. The method further can include identifying by way of the recorded operational characteristics a particular one of the workloads demonstrating a threshold level of activity. Finally, the method can include matching a recorded operational characteristic of the particular one of the workloads to a recorded status of a processor core best able amongst the different processor cores to host execution in one or more threads of the particular one of the workloads and directing the matched processor core to host execution of the particular one of the workloads. | 03-21-2013 |
20130074091 | TECHNIQUES FOR ENSURING RESOURCES ACHIEVE PERFORMANCE METRICS IN A MULTI-TENANT STORAGE CONTROLLER - Techniques for ensuring performance metrics are met by resources in a multi-tenant storage controller are presented. Each resource of the multi-tenant storage controller is tracked on a per tenant bases. Usage limits are enforced on per resource and per tenant bases for the multi-tenant storage controller. | 03-21-2013 |
20130074092 | Optimized Memory Configuration Deployed on Executing Code - A configurable memory allocation and management system may generate a configuration file with memory settings that may be deployed at runtime. An execution environment may capture a memory allocation boundary, look up the boundary in a configuration file, and apply the settings when the settings are available. When the settings are not available, a default set of settings may be used. The execution environment may deploy the optimized settings without modifying the executing code. | 03-21-2013 |
20130074093 | Optimized Memory Configuration Deployed Prior to Execution - A configurable memory allocation and management system may generate a configuration file with memory settings that may be deployed prior to runtime. A compiler or other pre-execution system may detect a memory allocation boundary and decorate the code. During execution, the decorated code may be used to look up memory allocation and management settings from a database or to deploy optimized settings that may be embedded in the decorations. | 03-21-2013 |
20130074094 | EXECUTING MULTIPLE THREADS IN A PROCESSOR - Provided are a method, system, and program for executing multiple threads in a processor. Credits are set for a plurality of threads executed by the processor. The processor alternates among executing the threads having available credit. The processor decrements the credit for one of the threads in response to executing the thread and initiates an operation to reassign credits to the threads in response to depleting all the thread credits. | 03-21-2013 |
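The credit-based alternation described above can be sketched as a simple round-robin loop: each execution slice decrements the running thread's credit, threads without credit are skipped, and the initial credits are reassigned once every thread is depleted. The time-slice model is an assumption for illustration.

```python
def run_threads(credits, steps):
    """Round-robin sketch: alternate among threads with available credit,
    decrement a credit per execution slice, and reassign the initial
    credits once all thread credits are depleted."""
    initial = dict(credits)
    ids = list(credits)
    order, i = [], 0
    for _ in range(steps):
        if all(c == 0 for c in credits.values()):
            credits.update(initial)  # all credits depleted: reassign
        while credits[ids[i % len(ids)]] == 0:
            i += 1  # skip threads without available credit
        t = ids[i % len(ids)]
        credits[t] -= 1
        order.append(t)
        i += 1
    return order
```

With credits A=2, B=1, the schedule alternates until the pool empties and then refills, so thread A receives twice thread B's slices over time.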
20130074095 | HANDLING AND REPORTING OF OBJECT STATE TRANSITIONS ON A MULTIPROCESS ARCHITECTURE - Techniques are described for managing states of an object using a finite-state machine. The states may be used to indicate whether an object has been added, removed, requested or updated. Embodiments of the invention generally include dividing a process into at least two threads where a first thread changes the state of the object while the second thread performs the processing of the data found in the object. While the second thread is processing the data, the first thread may receive additional updates and change the states of the objects to inform the second thread that it should process the additional updates when the second thread becomes idle. | 03-21-2013 |
20130081043 | Resource allocation using entitlement hints - An embodiment of an information handling apparatus can comprise an entitlement vector operable to specify resources used by at least one object of a plurality of a plurality of objects, and logic operable to issue a hint instruction based on the entitlement vector for usage in scheduling the resources. | 03-28-2013 |
20130081044 | Task Switching and Inter-task Communications for Multi-core Processors - The invention provides hardware based techniques for switching processing tasks of software programs for execution on a multi-core processor. Invented techniques involve a hardware logic based controller for assigning, adaptive to program processing loads, tasks for processing by cores of a multi-core fabric as well as configuring a set of multiplexers to appropriately interconnect cores of the fabric and program task specific segments at fabric memories, to arrange efficient inter-task communication as well as transferring of activating and de-activating task memory images among the multi-core fabric. The invention thereby provides an efficient, hardware-automated runtime operating system for multi-core processors, minimizing any need to use processing capacity of the cores for traditional operating system software functions. Additionally, such low overhead hardware based operating system for multi-core processors provides significant cost-efficiency and performance advantages, including data processing throughput maximization across all programs dynamically sharing a given multi-core processor, and hardware based security. | 03-28-2013 |
20130081045 | APPARATUS AND METHOD FOR PARTITION SCHEDULING FOR MANYCORE SYSTEM - An apparatus for performing partition scheduling in a manycore environment. The apparatus may perform partition scheduling based on a priority and in this instance, may perform partition scheduling to minimize the number of idle cores. The apparatus may include a partition queue to manage a partition scheduling event; a partition scheduler including a core map to store hardware information of each of the plurality of cores; and a partition manager to perform partition scheduling with respect to the plurality of cores in response to the partition scheduling event, using the hardware information. | 03-28-2013 |
20130081046 | ANALYSIS OF OPERATOR GRAPH AND DYNAMIC REALLOCATION OF A RESOURCE TO IMPROVE PERFORMANCE - An operator graph analysis mechanism analyzes an operator graph corresponding to an application for problems as the application runs, and determines potential reallocations from a reallocation policy. The reallocation policy may specify potential reallocations depending on whether one or more operators in the operator graph are compute bound, memory bound, communication bound, or storage bound. The operator graph analysis mechanism includes a resource reallocation mechanism that can dynamically change allocation of resources in the system at runtime to address problems detected in the operator graph. The operator graph analysis mechanism thus allows an application represented by an operator graph to dynamically evolve over time to optimize its performance at runtime. | 03-28-2013 |
20130081047 | MANAGING A WORKLOAD OF A PLURALITY OF VIRTUAL SERVERS OF A COMPUTING ENVIRONMENT - An integrated hybrid system is provided. The hybrid system includes compute components of different types and architectures that are integrated and managed by a single point of control to provide federation and the presentation of the compute components as a single logical computing platform. | 03-28-2013 |
20130086592 | CORRELATION OF RESOURCES - A filter driver arranged to be executed on a processor of a terminal. The filter driver, when executed, is arranged to (i) receive a request for a first resource relating to a device installed in the terminal; (ii) determine if the requested first resource is appropriate for the device; and (iii) provide a second resource if the first resource is inappropriate for the device. | 04-04-2013 |
20130091507 | OPTIMIZING DATA WAREHOUSING APPLICATIONS FOR GPUS USING DYNAMIC STREAM SCHEDULING AND DISPATCH OF FUSED AND SPLIT KERNELS - Systems and methods for managing a processor and one or more co-processors for a database application whose queries have been processed into an intermediate form (IR) containing kernels of the database application that have been fused and split; dynamically scheduling such kernels on CUDA streams and further dynamically dispatching kernels to GPU devices by estimating execution time in order to achieve high performance. | 04-11-2013 |
20130091508 | SYSTEM AND METHOD FOR STRUCTURING SELF-PROVISIONING WORKLOADS DEPLOYED IN VIRTUALIZED DATA CENTERS - The system and method for structuring self-provisioning workloads deployed in virtualized data centers described herein may provide a scalable architecture that can inject intelligence and embed policies into managed workloads to provision and tune resources allocated to the managed workloads, thereby enhancing workload portability across various cloud and virtualized data centers. In particular, the self-provisioning workloads may have a packaged software stack that includes resource utilization instrumentation to collect utilization metrics from physical resources that a virtualization host allocates to the workload, a resource management policy engine to communicate with the virtualization host to effect tuning the physical resources allocated to the workload, and a mapping that the resource management policy engine references to request tuning the physical resources allocated to the workload from a management domain associated with the virtualization host. | 04-11-2013 |
20130097608 | Processor With Efficient Work Queuing - Work submitted to a co-processor enters through one of multiple input queues, used to provide various quality of service levels. In-memory linked-lists store work to be performed by a network services processor in response to lack of processing resources in the network services processor. The work is moved back from the in-memory linked-lists to the network services processor in response to availability of processing resources in the network services processor. | 04-18-2013 |
20130097609 | System and Method for Determining Thermal Management Policy From Leakage Current Measurement - Various embodiments of methods and systems for determining the thermal status of processing components within a portable computing device (“PCD”) by measuring leakage current on power rails associated with the components are disclosed. One such method involves measuring current on a power rail after a processing component has entered a “wait for interrupt” mode. Advantageously, because a processing component may “power down” in such a mode, any current remaining on the power rail associated with the processing component may be attributable to leakage current. Based on the measured leakage current, a thermal status of the processing component may be determined and thermal management policies consistent with the thermal status of the processing component implemented. Notably, it is an advantage of embodiments that the thermal status of a processing component within a PCD may be established without the need to leverage temperature sensors. | 04-18-2013 |
20130097610 | DETERMINING SUITABLE NETWORK INTERFACE FOR PARTITION DEPLOYMENT/RE-DEPLOYMENT IN A CLOUD ENVIRONMENT - Migrating a logical partition (LPAR) from a first physical port to a first target physical port includes determining a configuration of an LPAR having allocated resources residing on a computer and assigned to the first physical port of the computer. The configuration includes a label that specifies a network topology that is provided by the first physical port, and the first target physical port has a port label that matches the label included in the configuration of the LPAR. The first target physical port with available capacity to service the LPAR is identified and the LPAR is migrated from the first physical port to the target physical port by reassigning the LPAR to the first target physical port. | 04-18-2013 |
20130097611 | UNIFIED, WORKLOAD-OPTIMIZED, ADAPTIVE RAS FOR HYBRID SYSTEMS - A method, system, and computer program product for maintaining reliability in a computer system. In an example embodiment, the method includes performing a first data computation by a first set of processors, the first set of processors having a first computer processor architecture. The method continues by performing a second data computation by a second processor coupled to the first set of processors, the second processor having a second computer processor architecture, the first computer processor architecture being different than the second computer processor architecture. Finally, the method includes dynamically allocating computational resources of the first set of processors and the second processor based on at least one metric while the first set of processors and the second processor are in operation such that the accuracy and processing speed of the first data computation and the second data computation are optimized. | 04-18-2013 |
20130097612 | DYNAMIC RUN TIME ALLOCATION OF DISTRIBUTED JOBS WITH APPLICATION SPECIFIC METRICS - A job optimizer dynamically changes the allocation of processing units on a multi-nodal computer system. A distributed application is organized as a set of connected processing units. The arrangement of the processing units is dynamically changed at run time to optimize system resources and interprocess communication. A collector collects application specific metrics determined by application plug-ins. A job optimizer analyzes the collected metrics and determines how to dynamically arrange the processing units within the jobs. The job optimizer may determine to combine multiple processing units into a job on a single node when there is an overutilization of an interprocess communication between processing units. Alternatively, the job optimizer may determine to split a job's processing units into multiple jobs on different nodes where one or more of the processing units are over utilizing the resources on the node. | 04-18-2013 |
20130104140 | RESOURCE AWARE SCHEDULING IN A DISTRIBUTED COMPUTING ENVIRONMENT - Systems and methods for resource aware scheduling of processes in a distributed computing environment are described herein. One aspect provides for accessing at least one job and at least one resource on a distributed parallel computing system; generating a current reward value based on the at least one job and a current value associated with the at least one resource; generating a prospective reward value based on the at least one job and a prospective value associated with the at least one resource at a predetermined time; and scheduling the at least one job based on a comparison of the current reward value and the prospective reward value. Other embodiments and aspects are also described herein. | 04-25-2013 |
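The current-versus-prospective reward comparison above can be sketched with a linear reward model: run the job now if its reward at the resource's current value beats its reward at the predicted future value minus a waiting cost. The field names and the linear model are illustrative assumptions, not the patented formulas.

```python
def schedule_job(job, value_now, value_later, delay_penalty):
    """Sketch of reward-aware scheduling: compare the reward of running
    the job against the resource's current value with the reward of
    running it at a future time, discounted by a waiting cost."""
    current_reward = job["benefit"] * value_now
    prospective_reward = job["benefit"] * value_later - delay_penalty
    return "run_now" if current_reward >= prospective_reward else "defer"
```

A job defers only when the resource is expected to be worth enough more later to outweigh the delay penalty.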
20130104141 | DIVIDED CENTRAL DATA PROCESSING - A circuit configuration for a data processing system and a corresponding method for executing multiple tasks by way of a central processing unit having a processing capacity assigned to the processing unit, the circuit configuration being configured to distribute the processing capacity of the processing unit uniformly among the respective tasks, and to process the respective tasks in time-offset fashion until they are respectively executed. | 04-25-2013 |
20130104142 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - A CPU executes initialization for allocating a storage area of an auxiliary storage device for a program execution area after a particular application program is loaded into the program execution area and becomes executable. Subsequently, the CPU loads a plurality of application programs into the program execution area. | 04-25-2013 |
20130111491 | Entitlement vector with resource and/or capabilities fields | 05-02-2013 |
20130111492 | Information Processing System, and Its Power-Saving Control Method and Device | 05-02-2013 |
20130111493 | DYNAMICALLY SPLITTING JOBS ACROSS MULTIPLE AGNOSTIC PROCESSORS IN WIRELESS SYSTEM | 05-02-2013 |
20130117758 | COMPUTE WORK DISTRIBUTION REFERENCE COUNTERS - One embodiment of the present invention sets forth a technique for managing the allocation and release of resources during multi-threaded program execution. Programmable reference counters are initialized to values that limit the amount of resources for allocation to tasks that share the same reference counter. Resource parameters are specified for each task to define the amount of resources allocated for consumption by each array of execution threads that is launched to execute the task. The resource parameters also specify the behavior of the array for acquiring and releasing resources. Finally, during execution of each thread in the array, an exit instruction may be configured to override the release of the resources that were allocated to the array. The resources may then be retained for use by a child task that is generated during execution of a thread. | 05-09-2013 |
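The reference-counter scheme above — a shared counter initialized to a resource limit, acquired when a thread array launches and released on exit unless the exit overrides the release for a child task — can be sketched as a small class. The interface names are illustrative assumptions.

```python
class ReferenceCounter:
    """Sketch of a programmable reference counter: initialized to a limit
    on resources shared by tasks; a thread array acquires on launch and
    releases on exit, unless the exit overrides the release to retain
    the resources for a child task."""
    def __init__(self, limit):
        self.available = limit

    def acquire(self, amount):
        if self.available >= amount:
            self.available -= amount
            return True
        return False  # not enough resources: the launch must wait

    def release(self, amount, retain_for_child=False):
        if not retain_for_child:  # exit instruction may override release
            self.available += amount
```

Resources retained for a child remain unavailable until the child itself releases them.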
20130117759 | Network Aware Process Scheduling - A schedule graph may be used to identify executable elements that consume data from a network interface or other input/output interface. The schedule graph may be traversed to identify a sequence or pipeline of executable elements that may be triggered from data received on the interface, then a process scheduler may cause those executable elements to be executed on available processors. A queue manager and a load manager may optimize the resources allocated to the executable elements to maximize the throughput for the input/output interface. Such as system may optimize processing for input or output of network connections, storage devices, or other input/output devices. | 05-09-2013 |
20130125129 | GROWING HIGH PERFORMANCE COMPUTING JOBS - The preemption of running jobs by other running or queued jobs in a system that has processing resources. The system has running jobs, and queued jobs that are awaiting processing by the system. In a scheduling operation, preemptor jobs are identified, the preemptor jobs being jobs that are candidates for preempting one or more of the running jobs. The preemptor jobs include queued jobs, as well as running jobs that are capable of using more processing resource of the system. One of the other running jobs is preempted to free processing resources for the running job that was identified as a preemptor job. Accordingly, not only may queued jobs preempt running jobs, but currently running jobs may preempt other currently running jobs. | 05-16-2013 |
20130125130 | CONSERVING POWER THROUGH WORK LOAD ESTIMATION FOR A PORTABLE COMPUTING DEVICE USING SCHEDULED RESOURCE SET TRANSITIONS - A start time to begin transitioning resources to states indicated in the second resource state set is scheduled based upon an estimated amount of processing time to complete transitioning the resources. At a scheduled start time, a process starts in which the states of one or more resources are switched from states indicated by the first resource state set to states indicated by the second resource state set. Scheduling the process of transitioning resource states to begin at a time that allows the process to be completed just in time for the resource states to be immediately available to the processor upon entering the second application state helps minimize adverse effects of resource latency. This calculation for the time that the process should be completed just in time may be enhanced when system states and transitions between states are measured accurately and stored in memory of the portable computing device. | 05-16-2013 |
20130125131 | MULTI-CORE PROCESSOR SYSTEM, THREAD CONTROL METHOD, AND COMPUTER PRODUCT - A multi-core processor system includes a first core configured to detect a state where a first thread that is allocated to a first core and a second thread that is allocated to a second core access a common resource; calculate, upon detecting the state and based on a first cycle for the first thread to be allocated to the first core and a second cycle for the second thread to be allocated to the second core, a contention cycle for the first and the second threads to cause access contention for the resource; and select a thread allocated at a time before or after the contention cycle of a core to which a given thread that is either the first or the second thread is allocated at the contention cycle; and a second core configured to switch the times at which the given thread and the selected thread are allocated. | 05-16-2013 |
20130125132 | INFORMATION PROCESSING APPARATUS AND CONTROL METHOD - An information processing apparatus includes plural CPUs to operate in parallel, a logical CPU generating part to generate one or more logical CPUs from one of the CPUs, an operating frequency averaging part to change each of operating frequencies of the CPUs to match a mean of the operating frequencies, and a logical CPU allocation part to cause the logical CPU generating part to generate the logical CPU to eliminate an excess or a deficiency of a processing capability with respect to an information processing load associated with a partition to which the logical CPU belonging to the CPU is allocated, the excess or deficiency being generated due to a change in the operating frequencies of the CPUs made by the operating frequency averaging part, and to allocate the generated logical CPU to the partition associated with the excess or deficiency of the processing capability of the logical CPU. | 05-16-2013 |
20130132966 | Video Player Instance Prioritization - A video player instance may be prioritized and decoding and rendering resources may be assigned to the video player instance accordingly. A video player instance may request use of a resource combination. Based on a determined priority a resource combination may be assigned to the video player instance. A resource combination may be reassigned to another video player instance upon detection that the previously assigned resource combination is no longer actively in use. | 05-23-2013 |
20130132967 | OPTIMIZING DISTRIBUTED DATA ANALYTICS FOR SHARED STORAGE - Methods, systems, and computer executable instructions for performing distributed data analytics are provided. In one exemplary embodiment, a method of performing a distributed data analytics job includes collecting application-specific information in a processing node assigned to perform a task to identify data necessary to perform the task. The method also includes requesting a chunk of the necessary data from a storage server based on location information indicating one or more locations of the data chunk and prioritizing the request relative to other data requests associated with the job. The method also includes receiving the data chunk from the storage server in response to the request and storing the data chunk in a memory cache of the processing node which uses a same file system as the storage server. | 05-23-2013 |
20130132968 | MECHANISM FOR ASYNCHRONOUS INPUT/OUTPUT (I/O) USING ALTERNATE STACK SWITCHING IN KERNEL SPACE - A mechanism for asynchronous input/output (I/O) using alternate stack switching in kernel space is disclosed. A method of the invention includes receiving, by a kernel executing in a computing device, an input/output (I/O) request from an application thread executing using a first stack, allocating a second stack in kernel space of the computing device, switching execution of the thread to the second stack, and processing the I/O request synchronously using the second stack. | 05-23-2013 |
20130132969 | Methods And Apparatuses For Controlling Thread Contention - An apparatus comprises a plurality of cores and a controller coupled to the cores. The controller is to lower an operating point of a first core if a first number based on processor clock cycles per instruction (CPI) associated with a second core is higher than a first threshold. The controller is operable to increase the operating point of the first core if the first number is lower than a second threshold. | 05-23-2013 |
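The CPI-driven control loop of the thread-contention entry above can be sketched as follows. The threshold values, the step size, and the operating-point range are all assumptions for illustration; the abstract only specifies the two-threshold structure (lower the first core's operating point when the second core's CPI-based number exceeds a first threshold, raise it when the number falls below a second threshold).

```python
UPPER_CPI = 2.0   # assumed first threshold: sibling core likely stalled by contention
LOWER_CPI = 1.2   # assumed second threshold: contention has eased

def adjust_operating_point(current_point, sibling_cpi,
                           min_point=1, max_point=5):
    """One step of the controller for a single core (hypothetical scale)."""
    if sibling_cpi > UPPER_CPI:
        return max(min_point, current_point - 1)   # lower the operating point
    if sibling_cpi < LOWER_CPI:
        return min(max_point, current_point + 1)   # raise it again
    return current_point                           # hysteresis band: no change
```

The gap between the two thresholds gives hysteresis, so the operating point does not oscillate when the measured CPI hovers near a single cutoff.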
20130132970 | MULTITHREAD PROCESSING DEVICE, MULTITHREAD PROCESSING SYSTEM, AND COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN MULTITHREAD PROCESSING PROGRAM - Provided is a multithread processing device that includes a managing unit that assigns a free thread among a plurality of threads to at least one of a plurality of processes, and a processing unit that executes the one process to which the free thread is assigned by the managing unit, wherein, when a request is transmitted from a first process among the plurality of processes by the processing unit, the managing unit releases a thread assigned to the first process to be a free thread, and ends the first process, and when a response to the request is received by the processing unit, the managing unit assigns a free thread to a second process of executing a process related to the response among the plurality of processes. | 05-23-2013 |
20130139169 | JOB SCHEDULING TO BALANCE ENERGY CONSUMPTION AND SCHEDULE PERFORMANCE - A computer program product including computer usable program code embodied on a computer usable medium, the computer program product comprising: computer usable program code for identifying job performance data for a plurality of representative jobs; computer usable program code for running a simulation of backfill-based job scheduling of the plurality of jobs at various combinations of a run-time over-estimation value and a processor adjustment value, wherein the simulation generates data including energy consumption and job delay; computer usable program code for identifying one of the combinations of a run-time over-estimation value and a processor adjustment value that optimizes the mathematical product of an energy consumption parameter and a job delay parameter using the simulation generated data for the plurality of jobs; and computer usable program code for scheduling jobs submitted to a processor using the identified combination of a run-time over-estimation value and a processor adjustment value. | 05-30-2013 |
20130139170 | JOB SCHEDULING TO BALANCE ENERGY CONSUMPTION AND SCHEDULE PERFORMANCE - An energy-aware backfill scheduling method combines overestimation of job run-times and processor adjustments, such as dynamic voltage and frequency scaling, to balance overall schedule performance and energy consumption. Accordingly, some scheduled jobs are executed in a manner reducing energy consumption. A computer-implemented method comprises identifying job performance data for a plurality of representative jobs and running a simulation of backfill-based job scheduling of the jobs at various combinations of run-time over-estimation values and processor adjustment values. The simulation generates data including energy consumption and job delay. The method further identifies one of the combinations of values that optimizes the mathematical product of an energy consumption parameter and a job delay parameter using the simulation generated data for the plurality of jobs. Jobs submitted to a processor are then scheduled using the identified combination of a run-time over-estimation value and a processor adjustment value. | 05-30-2013 |
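The selection step shared by the two backfill-scheduling entries above can be sketched as a grid search over (over-estimation, processor-adjustment) combinations that minimizes the energy-times-delay product. The `simulate` cost model below is entirely made up for illustration; in the patents, this data comes from a backfill-scheduling simulation over representative jobs.

```python
def simulate(overest, adjust):
    """Stand-in for the backfill simulation (hypothetical cost model):
    slower clocks (higher adjust) save energy but lengthen jobs, and
    run-time over-estimation adds scheduling slack."""
    energy = 100.0 / adjust + 2.0 * overest
    delay = 10.0 * adjust + 5.0 / overest
    return energy, delay

def best_combination(overest_values, adjust_values):
    """Pick the combination minimizing the energy x delay product."""
    combos = [(o, a) for o in overest_values for a in adjust_values]
    def product(combo):
        energy, delay = simulate(*combo)
        return energy * delay
    return min(combos, key=product)

choice = best_combination([1.0, 1.5, 2.0], [1.0, 1.5, 2.0])
```

Using the product of the two parameters (rather than a weighted sum) means neither energy nor delay can be driven to an extreme without the objective noticing.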
20130139171 | METHOD AND APPARATUS FOR GENERATING METADATA FOR DIGITAL CONTENT - A method and an apparatus for generating metadata for digital content are described, which allow the generated metadata to be reviewed while metadata generation is still in progress. The metadata generation is split into a plurality of processing tasks, which are allocated to two or more processing nodes. The metadata generated by the two or more processing nodes is gathered and visualized on an output unit. | 05-30-2013 |
20130139172 | CONTROLLING THE USE OF COMPUTING RESOURCES IN A DATABASE AS A SERVICE - A method and apparatus controls use of a computing resource by multiple tenants in a Database as a Service (DBaaS) environment. The method includes intercepting a task that is to access a computing resource, the task being an operating system process or thread; identifying a tenant that is in association with the task from the multiple tenants; determining other tasks of the tenant that access the computing resource; and controlling the use of the computing resource by the task, so that the total amount of usage of the computing resource by the task and the other tasks does not exceed the limit of usage of the computing resource for the tenant. | 05-30-2013 |
20130139173 | MULTI-CORE RESOURCE UTILIZATION PLANNING - Techniques for multi-core resource utilization planning are provided. An agent is deployed on each core of a multi-core machine. The agents cooperate to perform one or more tests. The tests result in measurements for performance and thermal characteristics of each core and each communication fabric between the cores. The measurements are organized in a resource utilization map and the map is used to make decisions regarding core assignments for resources. | 05-30-2013 |
20130139174 | DYNAMIC RUN TIME ALLOCATION OF DISTRIBUTED JOBS - A job optimizer dynamically changes the allocation of processing units on a multi-nodal computer system. A distributed application is organized as a set of connected processing units. The arrangement of the processing units is dynamically changed at run time to optimize system resources and interprocess communication. A collector collects metrics of the system, nodes, application, jobs and processing units that will be used to determine how to best allocate the jobs on the system. A job optimizer analyzes the collected metrics to dynamically arrange the processing units within the jobs. The job optimizer may determine to combine multiple processing units into a job on a single node when there is an overutilization of interprocess communication between processing units. Alternatively, the job optimizer may determine to split a job's processing units into multiple jobs on different nodes where the processing units are overutilizing the resources on the node. | 05-30-2013 |
20130145374 | SYNCHRONIZING JAVA RESOURCE ACCESS - A method and an apparatus for synchronizing Java resource access. The method includes configuring a first monitor for a first access interface of a resource set and a second monitor for a second access interface of the resource set, and configuring a first waiting queue for the first monitor and a second waiting queue for the second monitor. In response to the first access interface receiving an access request for a resource from a thread, the first monitor queries whether the resource set has a resource satisfying the access request. In response to a positive query result, the thread obtains the resource and notifies the second monitor to wake a thread in the second waiting queue; in response to a negative query result, the first monitor puts the thread in the first waiting queue to wait. | 06-06-2013 |
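The two-monitor, two-waiting-queue structure above maps naturally onto condition variables. The sketch below is an assumed design, not the patented implementation: both monitors share one lock over the resource set, and a successful operation through one interface notifies a thread waiting on the other monitor's queue.

```python
import threading

class ResourceSet:
    """Two access interfaces over one resource set, each with its own
    monitor (Condition) and implicit waiting queue."""

    def __init__(self, size):
        self._lock = threading.Lock()
        self._acquire_cond = threading.Condition(self._lock)  # 1st interface's queue
        self._release_cond = threading.Condition(self._lock)  # 2nd interface's queue
        self._free = size

    def acquire(self):
        # First access interface: query the set; wait in the first queue if empty.
        with self._acquire_cond:
            while self._free == 0:
                self._acquire_cond.wait()
            self._free -= 1
            self._release_cond.notify()   # wake a waiter on the other interface

    def release(self):
        # Second access interface: return a resource; wake a first-queue waiter.
        with self._release_cond:
            self._free += 1
            self._acquire_cond.notify()

    def available(self):
        with self._lock:
            return self._free

rs = ResourceSet(2)
completed = []

def worker(i):
    rs.acquire()
    completed.append(i)
    rs.release()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because both `Condition` objects wrap the same lock, either interface may safely notify the other's queue while holding the shared lock.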
20130145375 | PARTITIONING PROCESSES ACROSS CLUSTERS BY PROCESS TYPE TO OPTIMIZE USE OF CLUSTER SPECIFIC CONFIGURATIONS - A system and method for virtualization and cloud security are disclosed. According to one embodiment, a system comprises a first multi-core processing cluster and a second multi-core processing cluster in communication with a network interface card and software instructions. When the software instructions are executed by the second multi-core processing cluster they cause the second multi-core processing cluster to receive a request for a service, create a new or invoke an existing virtual machine to service the request, and return a desired result indicative of successful completion of the service to the first multi-core processing cluster. | 06-06-2013 |
20130145376 | DATA STORAGE RESOURCE ALLOCATION BY EMPLOYING DYNAMIC METHODS AND BLACKLISTING RESOURCE REQUEST POOLS - A resource allocation system begins with an ordered plan for matching requests to resources that is sorted by priority. The resource allocation system optimizes the plan by determining those requests in the plan that will fail if performed. The resource allocation system removes or defers the determined requests. In addition, when a request that is performed fails, the resource allocation system may remove requests that require similar resources from the plan. Moreover, when resources are released by a request, the resource allocation system may place the resources in a temporary holding area until the resource allocation returns to the top of the ordered plan so that lower priority requests that are lower in the plan do not take resources that are needed by waiting higher priority requests higher in the plan. | 06-06-2013 |
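The plan-pruning step of the entry above can be sketched as a single pass over the priority-sorted plan. The details are assumptions: resource consumption is simulated against a free-capacity table, and once a resource pool produces a failure, later requests on that pool are skipped too (the "blacklisting" behavior), even if a smaller request might have fit.

```python
def prune_plan(plan, available):
    """plan: list of (name, pool, amount), highest priority first.
    available: dict mapping pool name to free capacity.
    Returns (runnable, deferred) request-name lists."""
    remaining = dict(available)
    blacklisted = set()
    runnable, deferred = [], []
    for name, pool, amount in plan:
        if pool in blacklisted or remaining.get(pool, 0) < amount:
            blacklisted.add(pool)     # similar requests will fail too
            deferred.append(name)
        else:
            remaining[pool] -= amount
            runnable.append(name)
    return runnable, deferred

plan = [("r1", "tape", 2), ("r2", "tape", 3), ("r3", "disk", 1)]
runnable, deferred = prune_plan(plan, {"tape": 4, "disk": 1})
```

Here `r2` would fail (only 2 tape units remain after `r1`), so it is deferred rather than attempted, while the unrelated `disk` request still proceeds.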
20130145377 | SYSTEM AND METHOD FOR COOPERATIVE VIRTUAL MACHINE MEMORY SCHEDULING - A resource scheduler for managing a distribution of host physical memory (HPM) among a plurality of virtual machines (VMs) monitors usage by each of the VMs of respective guest physical memories (GPM) to determine how much of the HPM should be allocated to each of the VMs. On determining that an amount of HPM allocated to a source VM should be reallocated to a target VM, the scheduler sends allocation parameters to a balloon application executing in the source VM causing it to reserve and write a value to a guest virtual memory (GVM) location in the source VM. The scheduler identifies the HPM location that corresponds to the reserved GVM and allocates it to the target VM by mapping a guest physical memory location of the target VM to the HPM location. | 06-06-2013 |
20130152101 | PREPARING PARALLEL TASKS TO USE A SYNCHRONIZATION REGISTER - A job may be divided into multiple tasks that may execute in parallel on one or more compute nodes. The tasks executing on the same compute node may be coordinated using barrier synchronization. However, to perform barrier synchronization, the tasks use (or attach) to a barrier synchronization register which establishes a common checkpoint for each of the tasks. A leader task may use a shared memory region to publish to follower tasks the location of the barrier synchronization register—i.e., a barrier synchronization register ID. The follower tasks may then monitor the shared memory to determine the barrier synchronization register ID. The leader task may also use a count to ensure all the tasks attach to the BSR. This advantageously avoids any task-to-task communication which may reduce overhead and improve performance. | 06-13-2013 |
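The leader/follower publication pattern above can be sketched with a shared dictionary standing in for the shared memory region. The BSR ID value and the use of an event for the "publish" step are illustrative assumptions; the essential point from the abstract survives: followers learn the register ID from shared memory rather than from task-to-task messages, and an attach count confirms that every task joined.

```python
import threading

shared = {}                    # stands in for the shared memory region
published = threading.Event()  # signals that the leader has published
attach_count = 0
count_lock = threading.Lock()
BSR_ID = 7                     # hypothetical barrier synchronization register ID

def attach(is_leader):
    """Leader publishes the BSR ID; followers wait for it, then attach."""
    global attach_count
    if is_leader:
        shared["bsr_id"] = BSR_ID
        published.set()
    else:
        published.wait()       # follower monitors shared memory for the ID
    with count_lock:
        attach_count += 1      # the leader can use this count to confirm
    return shared["bsr_id"]

results = []
threads = [threading.Thread(target=lambda l=(i == 0): results.append(attach(l)))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

All four tasks end up attached to the same register ID without any direct task-to-task communication, which is the overhead reduction the abstract claims.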
20130152102 | RUNTIME-AGNOSTIC MANAGEMENT OF APPLICATIONS - An application may be modeled as a collection of resource usage. The model allows the application to be elastic so that additional resource usage can be added when needed. Items may be added to and/or removed from applications at any time without regard to the state of the application. Existing items in the application may also be altered at any time regardless of the application state. A set of interfaces is used to manage the resources. The interfaces allow for the provisioning, configuration, deployment, monitoring and diagnostics of resources in a consistent way. | 06-13-2013 |
20130152103 | PREPARING PARALLEL TASKS TO USE A SYNCHRONIZATION REGISTER - A job may be divided into multiple tasks that may execute in parallel on one or more compute nodes. The tasks executing on the same compute node may be coordinated using barrier synchronization. However, to perform barrier synchronization, the tasks use (or attach) to a barrier synchronization register which establishes a common checkpoint for each of the tasks. A leader task may use a shared memory region to publish to follower tasks the location of the barrier synchronization register—i.e., a barrier synchronization register ID. The follower tasks may then monitor the shared memory to determine the barrier synchronization register ID. The leader task may also use a count to ensure all the tasks attach to the BSR. This advantageously avoids any task-to-task communication which may reduce overhead and improve performance. | 06-13-2013 |
20130160019 | Method for Resuming an APD Wavefront in Which a Subset of Elements Have Faulted - A method resumes an accelerated processing device (APD) wavefront in which a subset of elements have faulted. A restore command for a job including a wavefront is received. A list of context states for the wavefront is read from a memory associated with an APD. An empty shell wavefront is created for restoring the list of context states. A portion of not-acknowledged data is masked over a portion of acknowledged data within the restored wavefronts. | 06-20-2013 |
20130160020 | GENERATIONAL THREAD SCHEDULER - Disclosed herein is a generational thread scheduler. One embodiment may be used with processor multithreading logic to execute threads of executable instructions, and a shared resource to be allocated fairly among the threads of executable instructions contending for access to the shared resource. Generational thread scheduling logic may allocate the shared resource efficiently and fairly by granting a first requesting thread access to the shared resource, allocating a reservation for the shared resource to each other requesting thread of the executing threads, and then blocking the first thread from re-requesting the shared resource until every other thread that has been allocated a reservation has been granted access to the shared resource. Generation tracking state may be cleared when each requesting thread of the generation that was allocated a reservation has had their request satisfied. | 06-20-2013 |
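The generational fairness rule above can be sketched with two pieces of tracking state: a set of threads already served this generation and a FIFO of reservations. The data structures and the single-holder model are assumptions for illustration; the invariant from the abstract is kept: a served thread cannot be granted the resource again until every reservation of the current generation has been satisfied, at which point the tracking state clears.

```python
class GenerationalScheduler:
    def __init__(self):
        self.holder = None      # thread currently granted the resource
        self.reserved = []      # FIFO reservations in this generation
        self.served = set()     # threads already granted this generation

    def request(self, tid):
        """Grant immediately if free and tid not yet served; else reserve."""
        if self.holder is None and tid not in self.served:
            self.holder = tid
            self.served.add(tid)
            return True
        if tid not in self.reserved:
            self.reserved.append(tid)
        return False

    def release(self):
        """Pass the resource to the next unserved reservation, if any."""
        self.holder = None
        for tid in list(self.reserved):
            if tid not in self.served:
                self.reserved.remove(tid)
                self.holder = tid
                self.served.add(tid)
                break
        if self.holder is None:
            # Every reservation was already served: the generation is
            # complete, so clear tracking state and start the next one.
            self.served.clear()
            if self.reserved:
                self.holder = self.reserved.pop(0)
                self.served.add(self.holder)
        return self.holder

s = GenerationalScheduler()
first = s.request("A")     # granted
again = s.request("A")     # re-request blocked within the same generation
other = s.request("B")     # reservation
next1 = s.release()        # "B" is served before "A" can go again
next2 = s.release()        # generation clears; "A" re-granted
```

The effect is that a fast thread cannot starve slower threads by re-requesting in a tight loop: it must wait out the rest of its generation.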
20130160021 | SIGNALING, ORDERING, AND EXECUTION OF DYNAMICALLY GENERATED TASKS IN A PROCESSING SYSTEM - One embodiment of the present invention sets forth a technique for enabling the insertion of generated tasks into a scheduling pipeline of a multiple processor system, allowing a compute task that is being executed to dynamically generate a dynamic task and notify a scheduling unit of the multiple processor system without intervention by a CPU. A reflected notification signal is generated in response to a write request when data for the dynamic task is written to a queue. Additional reflected notification signals are generated for other events that occur during execution of a compute task, e.g., to invalidate cache entries storing data for the compute task and to enable scheduling of another compute task. | 06-20-2013 |
20130160022 | TRANSACTION MANAGER FOR NEGOTIATING LARGE TRANSACTIONS - A computer receives a transaction request that includes information identifying computer resource requirements for the transaction, a resource policy, and a transaction failure policy. The computer determines if sufficient computer resources are available to complete the transaction request based on the received information identifying resource requirements for the transaction. If there are not sufficient computer resources available to complete the transaction request, the computer applies the resource policy to the transaction request and processes the transaction request. If the processed transaction request fails to complete successfully, the computer applies the transaction failure policy to the processed transaction request. | 06-20-2013 |
20130160023 | SCHEDULER, MULTI-CORE PROCESSOR SYSTEM, AND SCHEDULING METHOD - In an embodiment, a scheduler coordinates the timings at which cores execute processes so that any two sequential processes can be executed consecutively. The processes are executed in the order scheduled by the scheduler by concentrating, on a specific core, processes that obstruct consecutive execution, such as external and internal interrupts. Rather than always executing processes of another application during standby time periods, the scheduler determines whether the length of a standby time period is shorter than a predetermined value and, if so, does not execute any process of the other application. | 06-20-2013 |
20130174174 | HIERARCHICAL SCHEDULING APPARATUS AND METHOD FOR CLOUD COMPUTING - A hierarchical scheduling apparatus for a cloud environment includes a schedule configuring unit configured to classify a plurality of tasks into one or more local tasks and one or more remote tasks; a schedule delegating unit configured to transmit, to another resource, a list of the remote tasks and a list of available resources to delegate scheduling authority for the remote tasks to the other resource; and a scheduling unit configured to schedule the local tasks. | 07-04-2013 |
20130174175 | RESOURCE ALLOCATION FOR A PLURALITY OF RESOURCES FOR A DUAL ACTIVITY SYSTEM - Exemplary method, system, and computer program product embodiments for resource allocation of a plurality of resources for a dual activity system by a processor device are provided. In one embodiment, by way of example only, each of the activities may be started at a static quota. The resource boundary may be increased for a resource request for at least one of the dual activities until a resource request for an alternative one of the at least one of the dual activities is rejected. In response to the rejection of the resource request for the alternative one of the at least one of the dual activities, a resource boundary for the at least one of the dual activities may be reduced, and a wait-after-decrease mode may be commenced until the current resource usage is less than or equal to the reduced resource boundary. | 07-04-2013 |
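The grow-until-rejection and wait-after-decrease cycle above can be sketched as follows. The static quota, the total capacity, and the amount by which the boundary is reduced are all illustrative assumptions; the abstract specifies only the overall policy shape.

```python
class Activity:
    """One of the two activities competing for a shared resource pool."""

    def __init__(self, static_quota):
        self.boundary = static_quota  # starts at the static quota
        self.usage = 0
        self.waiting = False          # "wait after decrease" mode

    def try_allocate(self, total_free):
        if self.waiting:
            if self.usage <= self.boundary:
                self.waiting = False  # usage fell to the reduced boundary
            else:
                return False
        if self.usage < self.boundary and total_free > 0:
            self.usage += 1
            self.boundary += 1        # grow until the peer sees a rejection
            return True
        return False

    def on_peer_rejected(self):
        self.boundary = max(1, self.usage - 1)   # reduce the boundary (assumed amount)
        self.waiting = self.usage > self.boundary

    def free_one(self):
        self.usage -= 1

TOTAL = 6
a, b = Activity(3), Activity(3)
free = lambda: TOTAL - a.usage - b.usage

grabs = [a.try_allocate(free()) for _ in range(6)]  # "a" grows past its quota
rejected = b.try_allocate(free())                   # no resources left for "b"
if not rejected:
    a.on_peer_rejected()                            # cut a's boundary back
blocked = a.try_allocate(free())                    # wait-after-decrease holds
a.free_one(); a.free_one()
resumed = a.try_allocate(free())                    # usage under boundary again
```

One activity may thus absorb idle capacity, but a single rejection from its peer pulls it back toward a fair share.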
20130179891 | SYSTEMS AND METHODS FOR USE IN PERFORMING ONE OR MORE TASKS - Systems and methods for performing a task are provided. One example method includes: if the task allocation metric indicates that load balancing associated with the processor is below a first threshold, determining whether the task is a reentrant task; if the task is a reentrant task, determining whether a stopping criterion is satisfied; re-entering the task into a queue of tasks if the stopping criterion is not satisfied and the task is a reentrant task; if the task allocation metric indicates that core affinity associated with the at least one processor is below a second threshold, determining whether the task is a main task; if the task is not a main task, determining whether a stopping criterion is satisfied; and if the stopping criterion is satisfied and the task is not a main task, pulling a parent task associated with the task into the thread. | 07-11-2013 |
20130179892 | PROVIDING LOGICAL PARTITIONS WITH HARDWARE-THREAD SPECIFIC INFORMATION REFLECTIVE OF EXCLUSIVE USE OF A PROCESSOR CORE - Techniques for simulating exclusive use of a processor core amongst multiple logical partitions (LPARs) include providing hardware thread-dependent status information in response to access requests by the LPARs that is reflective of exclusive use of the processor by the LPAR accessing the hardware thread-dependent information. The information returned in response to the access requests is transformed if the requestor is a program executing at a privilege level lower than the hypervisor privilege level, so that each logical partition views the processor as though it has exclusive use of the processor. The techniques may be implemented by a logical circuit block within the processor core that transforms the hardware thread-specific information to a logical representation of the hardware thread-specific information or the transformation may be performed by program instructions of an interrupt handler that traps access to the physical register containing the information. | 07-11-2013 |
20130179893 | Adaptation of Probing Frequency for Resource Consumption - Embodiments of the invention relate to dynamically assessing and managing probing of a system for resource availability. A predicted resource usage pattern is acquired, and critical points in the pattern pertaining to predicted changes in resource consumption are identified. Probing the system for resource availability is limited to the identified critical points, or to real-time changes in the resource usage pattern. | 07-11-2013 |
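The critical-point idea in the probing-frequency entry above can be sketched as follows. The tolerance threshold is an assumption; the abstract says only that probing is limited to points where predicted resource consumption changes.

```python
def critical_points(pattern, tolerance=0):
    """Indices of a predicted usage pattern where consumption changes
    by more than the tolerance; probe availability only at these points."""
    return [i for i in range(1, len(pattern))
            if abs(pattern[i] - pattern[i - 1]) > tolerance]

# Predicted resource-usage pattern over eight ticks (made-up values).
pattern = [10, 10, 10, 40, 40, 15, 15, 15]
probe_at = critical_points(pattern, tolerance=5)
```

Instead of probing at all eight ticks, the system probes only at the two ticks where predicted consumption actually shifts, which is the cost saving the abstract describes.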
20130179894 | PLATFORM AS A SERVICE JOB SCHEDULING - Systems and methods are presented for providing resources by way of a platform as a service in a distributed computing environment to perform a job. A user may submit a work item to the system that results in a job being processed on a pool of virtual machines. The pool may be automatically established by the system in response to the work item and other information associated with the work item, the user, and/or the account. Further, it is contemplated that resources associated with the pool, such as virtual machines, may be automatically allocated based, at least in part, on information associated with the work item, the user, the account, the pool, and/or the system. | 07-11-2013 |
20130179895 | PAAS HIERARCHICAL SCHEDULING AND AUTO-SCALING - In various embodiments, systems and methods are presented for providing resources by way of a platform as a service in a distributed computing environment to perform a job. The system may be comprised of a number of components, such as a task machine, a task location service machine, and a high-level location service machine, that in combination are usable to accomplish functions provided herein. It is contemplated that the system performs methods for providing resources by determining resources of the system, such as virtual machines, and applying auto-scaling rules to the system to scale those resources. Based on the determination of the auto-scaling rules, the resources may be allocated to achieve a desired result. | 07-11-2013 |
20130185729 | ACCELERATING RESOURCE ALLOCATION IN VIRTUALIZED ENVIRONMENTS USING WORKLOAD CLASSES AND/OR WORKLOAD SIGNATURES - Systems, methods, and apparatus for managing resources assigned to an application or service. A resource manager maintains a set of workload classes and classifies workloads using workload signatures. In specific embodiments, the resource manager minimizes or reduces resource management costs by identifying a relatively small set of workload classes during a learning phase, determining preferred resource allocations for each workload class, and then during a monitoring phase, classifying workloads and allocating resources based on the preferred resource allocation for the classified workload. In some embodiments, interference is accounted for by estimating and using an “interference index”. | 07-18-2013 |
20130185730 | MANAGING RESOURCES FOR MAINTENANCE TASKS IN COMPUTING SYSTEMS - Methods for managing resources for maintenance tasks in computing systems are provided. One system includes a controller and memory coupled to the controller, the memory configured to store a module. The controller, when executing the module, is configured to determine an amount of available resources for use by a plurality of maintenance tasks in a computing system and divide the available resources between the plurality of maintenance tasks based on the need for each maintenance task. One method includes determining, by a central controller, an amount of available resources for use by a plurality of maintenance tasks in a computing system and dividing the available resources between the plurality of maintenance tasks based on the need for each maintenance task. Computer storage mediums including a computer program product for managing resources for maintenance tasks in computing systems are also provided. | 07-18-2013 |
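The need-based division step above can be sketched as proportional allocation. The rounding policy (floor division, remainder to the neediest tasks first) is an assumption for illustration; the abstract specifies only division "based on a need for each maintenance task."

```python
def divide_resources(available, needs):
    """Split an integer resource budget among tasks in proportion to need."""
    total_need = sum(needs.values())
    if total_need == 0:
        return {name: 0 for name in needs}
    shares = {name: available * need // total_need
              for name, need in needs.items()}
    # Hand out any rounding remainder to the neediest tasks first.
    remainder = available - sum(shares.values())
    for name in sorted(needs, key=needs.get, reverse=True):
        if remainder == 0:
            break
        shares[name] += 1
        remainder -= 1
    return shares

shares = divide_resources(10, {"scrub": 5, "defrag": 3, "rebuild": 2})
```

The hypothetical task names ("scrub", "defrag", "rebuild") are placeholders for whatever maintenance tasks the system runs.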
20130191837 | FLEXIBLE TASK AND THREAD BINDING - A thread binding method includes generating a thread layout for processors in a computing system, allocating system resources for tasks of an application allocated to the processors, affinitizing the tasks and generating threads for the tasks. A thread count for each of the tasks is at least one and may be equal or unequal to that of any other of the tasks. | 07-25-2013 |
20130191838 | SYSTEM AND METHOD FOR SEPARATING MULTIPLE WORKLOADS PROCESSING IN A SINGLE COMPUTER OPERATING ENVIRONMENT - A computing system may use a persistent, unique identifier to authenticate the system, ensuring that software and configurations of systems are properly licensed while permitting hardware components to be replaced. The persistent, unique system identifier may be coupled to serial numbers or similar hardware identifiers of components within the computing system while permitting some of the hardware components to be deleted and changed. When components that are coupled to the persistent, unique identifier are removed or disabled, a predefined time period is provided to update the coupling of the persistent, unique identifier to an alternate hardware component in the system. | 07-25-2013 |
20130191839 | INFORMATION PROCESSING APPARATUS, CONTROL METHOD THEREFOR, AND COMPUTER-READABLE STORAGE MEDIUM - When a process starts using a resource, management information is stored. Management information includes, in association with one another, process identification information indicating the process, resource identification information indicating the resource to be used by the process, and processor identification information indicating a processor allocated to the process. When waking up a process, the processor that is associated with that process in the management information is allocated to the process to be woken up. | 07-25-2013 |
20130191840 | RESOURCE ALLOCATION BASED ON ANTICIPATED RESOURCE UNDERUTILIZATION IN A LOGICALLY PARTITIONED MULTI-PROCESSOR ENVIRONMENT - A method, apparatus and program product for allocating resources in a logically partitioned multiprocessor environment. Resource usage is monitored in a first logical partition in the logically partitioned multiprocessor environment to predict a future underutilization of a resource in the first logical partition. An application executing in a second logical partition in the logically partitioned multiprocessor environment is configured for execution in the second logical partition with an assumption made that at least a portion of the underutilized resource is allocated to the second logical partition during at least a portion of the predicted future underutilization of the resource. | 07-25-2013 |
20130191841 | Method and Apparatus For Fine Grain Performance Management of Computer Systems - A system and method to control the allocation of processor (or state machine) execution resources to individual tasks executing in computer systems is described. By controlling the allocation of execution resources to all tasks, each task may be provided with throughput and response time guarantees. This control is accomplished through workload metering and shaping, which delays the execution of tasks that have used their workload allocation until sufficient time has passed to accumulate credit for execution (credit accumulates over time to perform their allocated work), and workload prioritization, which gives preference to tasks based on configured priorities. | 07-25-2013 |
20130198753 | FULL EXPLOITATION OF PARALLEL PROCESSORS FOR DATA PROCESSING - For full exploitation of parallel processors for data processing, a set of parallel processors is partitioned into disjoint subsets according to indices of the set of the parallel processors. The size of each of the disjoint subsets corresponds to the number of processors assigned to the processing of the data chunks at one of the layers. Each of the processors is assigned to different layers in different data chunks such that each processor is busy and the data chunks are fully processed within a number of time steps equal to the number of layers. A transition function is devised from the indices of the set of the parallel processors at one time step to the indices of the set of the parallel processors at the following time step. | 08-01-2013 |
20130198754 | FULL EXPLOITATION OF PARALLEL PROCESSORS FOR DATA PROCESSING - Exemplary method, system, and computer program product embodiments for full exploitation of parallel processors for data processing are provided. In one embodiment, by way of example only, a set of parallel processors is partitioned into disjoint subsets according to indices of the set of the parallel processors. The size of each of the disjoint subsets corresponds to the number of processors assigned to the processing of the data chunks at one of the layers. Each of the processors is assigned to different layers in different data chunks such that every processor is busy and the data chunks are fully processed within a number of time steps equal to the number of layers. A transition function is devised from the indices of the set of the parallel processors at one time step to the indices of the set of the parallel processors at the following time step. | 08-01-2013 |
20130198755 | APPARATUS AND METHOD FOR MANAGING RESOURCES IN CLUSTER COMPUTING ENVIRONMENT - Disclosed herein are a resource manager node and a resource management method. The resource manager node includes a resource management unit, a resource policy management unit, a shared resource capability management unit, a shared resource status monitoring unit, and a shared resource allocation unit. The resource management unit performs an operation necessary for resource allocation when a resource allocation request is received. The resource policy management unit determines a resource allocation policy based on the characteristic of the task, and generates resource allocation information. The shared resource capability management unit manages the topology of nodes, information about the capabilities of resources, and resource association information. The shared resource status monitoring unit monitors and manages information about the status of each node and the use of allocated resources. The shared resource allocation unit sends a resource allocation request to at least one of the plurality of nodes. | 08-01-2013 |
20130198756 | TRANSFERRING A PARTIAL TASK IN A DISTRIBUTED COMPUTING SYSTEM - A method begins by a dispersed storage (DS) processing module determining that partial task processing resources of a first DST execution unit are projected to be available. The method continues with the DS processing module ascertaining that partial task processing resources of a second DST execution unit are projected to be overburdened. The method continues with the DS processing module receiving, from the second DST execution unit, a partial task assigned to the second DST execution unit in accordance with a partial task allocation transfer policy to produce an allocated partial task and executing the allocated partial task. | 08-01-2013 |
20130198757 | RESOURCE ALLOCATION METHOD AND APPARATUS OF GPU - A resource allocation method and apparatus utilize GPU resources efficiently by sorting tasks that use the General-Purpose GPU (GPGPU) into operations and combining the same operations into a request. The resource allocation method of a Graphic Processing Unit (GPU) according to the present disclosure includes receiving a task including at least one operation; storing the at least one operation in units of requests; merging data of the same operations per request; and allocating GPU resources according to an execution order of the requests. | 08-01-2013 |
20130198758 | TASK DISTRIBUTION METHOD AND APPARATUS FOR MULTI-CORE SYSTEM - The present invention relates generally to a task distribution method and apparatus for systems in a real-time Operating System (OS) environment using a multi-core Central Processing Unit (CPU). The present invention is configured to set the roles of the multiple cores included in the multi-core system by dividing the cores into real-time cores for executing real-time tasks and non-real-time cores for executing non-real-time tasks; to allocate, based on the set roles, real-time tasks to cores whose role has been set to real-time and non-real-time tasks to cores whose role has been set to non-real-time; to allow the respective cores to execute the tasks allocated to them while collecting information about the task execution procedure; and to change the set roles of the cores based on the collected information. | 08-01-2013 |
20130205300 | METHOD AND SYSTEM FOR MANAGING RESOURCE - The present invention discloses a method and system for managing resources, wherein the method comprises: a resource editor accepts that a user adds a resource and defines an ID of the resource (S | 08-08-2013 |
20130205301 | SYSTEMS AND METHODS FOR TASK GROUPING ON MULTI-PROCESSORS - Embodiments of the present invention provide improved systems and methods for grouping instruction entities. In one embodiment, a system comprises a processing cluster to execute software, the processing cluster comprising a plurality of processing units, wherein the processing cluster is configured to execute the software as a plurality of instruction entities. The processing cluster is further configured to execute the plurality of instruction entities in a plurality of execution groups, each execution group comprising one or more instruction entities, wherein the processing cluster executes the instruction entities within an execution group concurrently. Further, the execution groups are configured so that a plurality of schedule-before relationships are established, each schedule-before relationship being established among a respective set of instruction entities by executing the plurality of instruction entities in the plurality of execution groups. | 08-08-2013 |
20130205302 | INFORMATION PROCESSING TERMINAL AND RESOURCE RELEASE METHOD - In an information processing terminal, a second screen activation monitoring unit that has received a focus OFF notification sends a domain switch request notification to a domain control unit, and the domain control unit that has received the notification sends a domain switch notification to a first OS. Then, the first OS sends a focus ON notification to a first screen activation monitoring unit and further sends the focus OFF notification to a first application. A resource is thereby released by the first application that is implemented to release an acquired resource upon receiving the focus OFF notification. | 08-08-2013 |
20130212593 | Controlled Growth in Virtual Disks - A method, an apparatus and an article of manufacture for controlling growth in virtual disk size. The method includes limiting a guest virtual machine file in a hypervisor from allocating a new disk block as allocated space, wherein a virtual disk on a virtual machine is mapped to the guest virtual machine file, and facilitating the virtual disk to reuse a previously allocated and freed disk block for the allocated space to control growth in virtual disk size. | 08-15-2013 |
20130212594 | METHOD OF OPTIMIZING PERFORMANCE OF HIERARCHICAL MULTI-CORE PROCESSOR AND MULTI-CORE PROCESSOR SYSTEM FOR PERFORMING THE METHOD - Disclosed is a multi-core processor, and more particularly, a method of optimizing performance of a multi-core processor having a hierarchical structure and a multi-core processor system for performing the method. To this end, in a hierarchical multi-core processor that includes a plurality of kernel cores, each kernel core including a plurality of cores sharing a memory, the method includes calculating a correlation between a plurality of threads by a thread correlation managing module within a main processor; grouping the plurality of threads into two or more groups according to the calculated correlation information by the main processor; and allocating the threads of the same group to the cores of the same kernel core of the hierarchical multi-core processor by a scheduler of the main processor. | 08-15-2013 |
20130219402 | ROBUST SYSTEM CONTROL METHOD WITH SHORT EXECUTION DEADLINES - A method of controlling a system comprising the following steps: | 08-22-2013 |
20130219403 | METHOD AND SYSTEM FOR MANAGING RESOURCE CONNECTIONS - Methods and systems for managing resource connections are described. In one embodiment, an initial user request to access data stored at a resource is received. The initial user request is generated by an application of a plurality of applications having access to the resource. An existing connection from the application is utilized to provide the data to the application. A current user request to access data stored at the resource is received. Based on a determination that the existing connection is unavailable, the current user request is assigned to a waiter queue. A number of requests assigned to the waiter queue during a pre-defined time period is determined to exceed a threshold. A new connection from the application to the resource is created based on the availability of a further connection to the resource and the exceeding of the threshold. | 08-22-2013 |
20130219404 | Computer System and Working Method Thereof - A computer system and operating method thereof are provided. The computer system comprises a central processing unit ( | 08-22-2013 |
20130227583 | Method and System For Scheduling Requests In A Portable Computing Device - A method and system for managing requests among resources within a portable computing device include a scheduler receiving data from a client for scheduling a plurality of requests. Each request identifies at least one resource and a requested deadline. Next, data from the client is stored by the scheduler in a database. The scheduler then determines times and a sequence for processing the requests based on requested deadlines in the requests and based on current states of resources within the portable computing device. The scheduler then communicates the requests to the resources at the determined times and according to the determined sequence. The scheduler, at its discretion, may schedule a request after its requested deadline in response to receiving a new request command from a client. The scheduler may allow a sleep set corresponding to a sleep processor state to power off a processor. | 08-29-2013 |
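The entry above describes a scheduler that orders requests by their requested deadlines. The deadline-ordering part can be sketched with a binary heap in Python (names are hypothetical; the actual scheduler also weighs current resource states and may defer past-deadline requests, which this sketch omits):

```python
import heapq


class DeadlineScheduler:
    """Orders resource requests so the earliest requested deadline is served first."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker so equal deadlines pop in arrival order

    def submit(self, deadline, resource, payload):
        # Each request identifies at least one resource and a requested deadline.
        heapq.heappush(self._heap, (deadline, self._seq, resource, payload))
        self._seq += 1

    def next_request(self):
        # Pop the most urgent pending request: (resource, payload).
        deadline, _, resource, payload = heapq.heappop(self._heap)
        return resource, payload
```

Submitting requests with deadlines 5, 2, and 9 yields the deadline-2 request first, regardless of arrival order.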
20130227584 | LONG-TERM RESOURCE PROVISIONING WITH CASCADING ALLOCATIONS - One embodiment of the present invention provides a system for provisioning physical resources shared by a plurality of jobs. During operation, the system establishes resource-usage models for the jobs, ranks the jobs based on quality of service (QoS) requirements associated with the jobs, and provisions the jobs for a predetermined time interval in such a way that any unused reservations associated with a first subset of jobs having higher QoS rankings are distributed to other remaining jobs with preference given to a second subset of jobs having a highest QoS ranking among the other remaining jobs. Provisioning the jobs involves making reservations for the jobs based on the resource-usage model and corresponding QoS requirements associated with the jobs. | 08-29-2013 |
20130227585 | COMPUTER SYSTEM AND PROCESSING CONTROL METHOD - A processing control method whereby a management server: assigns work to and executes said work on a computer; sets the processing start time and the processing end time for the aforementioned work as task execution information; sets a first physical resource amount, which is the amount of the physical resources of the aforementioned computer needed for execution of the aforementioned processing; acquires a second physical resource amount, which is the amount of the physical resources of the aforementioned computer that are being used; updates the processing start time for the aforementioned work to a time close to the current time when the aforementioned computer has physical resources amounting to the sum of the aforementioned first physical resource amount and the aforementioned second physical resource amount; and instructs the aforementioned computer to begin the aforementioned processing when the current time reaches the aforementioned processing start time. | 08-29-2013 |
20130232497 | EXECUTION OF A DISTRIBUTED DEPLOYMENT PLAN FOR A MULTI-TIER APPLICATION IN A CLOUD INFRASTRUCTURE - A deployment system orchestrates execution of a deployment plan in coordination with nodes participating in deployment of a multi-tier application in a cloud infrastructure. The deployment system distributes local deployment plans to each node and maintains a centralized state of deployment-time dependencies between tasks in different local deployment plans. Prior to execution of each task, deployment agents executing on each node communicate with the centralized deployment system to check whether any deployment-time dependencies need to be resolved. Additionally, the deployment system utilizes a node task timer that triggers a heartbeat mechanism for monitoring failure of deployment agents. | 09-05-2013 |
20130232498 | SYSTEM TO GENERATE A DEPLOYMENT PLAN FOR A CLOUD INFRASTRUCTURE ACCORDING TO LOGICAL, MULTI-TIER APPLICATION BLUEPRINT - A deployment system enables a developer to generate a deployment plan according to a logical, multi-tier application blueprint defined by application architects. The deployment plan includes tasks to be executed for deploying application components on virtual computing resources provided in a cloud infrastructure. The deployment plan includes time dependencies that determine an execution order of the tasks according to dependencies between application components specified in the application blueprint. The deployment plan enables system administrators to view the application blueprint as an ordered workflow view that facilitates collaboration between system administrators and application architects. | 09-05-2013 |
20130232499 | COMPARE AND EXCHANGE OPERATION USING SLEEP-WAKEUP MECHANISM - A method, apparatus, and system are provided for performing compare and exchange operations using a sleep-wakeup mechanism. According to one embodiment, an instruction at a processor is executed to help acquire a lock on behalf of the processor. If the lock is unavailable to be acquired by the processor, the instruction is put to sleep until an event has occurred. | 09-05-2013 |
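The entry above pairs lock acquisition with a sleep-until-event mechanism rather than busy spinning on the compare-and-exchange. The patent works at the processor-instruction level (MONITOR/MWAIT-style hardware); the following user-space Python analogue only approximates the idea with a condition variable, and all names are hypothetical:

```python
import threading


class SleepWakeLock:
    """Lock acquisition that sleeps the waiter instead of spinning on compare-and-exchange."""

    def __init__(self):
        self._held = False
        self._cond = threading.Condition()

    def acquire(self):
        with self._cond:
            # Equivalent of retrying the compare-and-exchange, except the thread
            # sleeps between attempts and is woken only when the lock may be free.
            while self._held:
                self._cond.wait()
            self._held = True

    def release(self):
        with self._cond:
            self._held = False
            self._cond.notify()  # wake one sleeping waiter (the "event")
```

A contending thread blocks in `acquire()` without consuming CPU until the holder's `release()` wakes it.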
20130232500 | CACHE PERFORMANCE PREDICTION AND SCHEDULING ON COMMODITY PROCESSORS WITH SHARED CACHES - A method is described for scheduling in an intelligent manner a plurality of threads on a processor having a plurality of cores and a shared last level cache (LLC). In the method, a first and second scenario having a corresponding first and second combination of threads are identified. The cache occupancies of each of the threads for each of the scenarios are predicted. The predicted cache occupancies are a representation of the amount of the LLC that each of the threads would occupy when running with the other threads on the processor according to the particular scenario. One of the scenarios is identified that results in the least objectionable impacts on all threads, the least objectionable impacts taking into account the impact resulting from the predicted cache occupancies. Finally, a scheduling decision is made according to the one of the scenarios that results in the least objectionable impacts. | 09-05-2013 |
20130232501 | SYSTEM AND METHOD TO REDUCE MEMORY USAGE BY OPTIMALLY PLACING VMS IN A VIRTUALIZED DATA CENTER - Embodiments of the present invention provide a method, system and computer program product for collocating VMs based on memory sharing potential. In an embodiment of the invention, a VM co-location method is claimed. The method includes selecting a VM from amongst different VMs for server colocation. The method additionally includes computing an individual shared memory factor for each of a set of the VMs with respect to the selected VM. The method yet further includes determining a VM amongst the VMs in the set associated with the highest computed shared memory factor. Finally, the method includes co-locating the determined VM with the selected VM in a single server. | 09-05-2013 |
20130232502 | METHODOLOGY FOR SECURE APPLICATION PARTITIONING ENABLEMENT - A computer implemented method, data processing system, and computer program product for configuring a partition with needed system resources to enable an application to run and process in a secure environment. Upon receiving a command to create a short lived secure partition for a secure application, a short lived secure partition is created in the data processing system. This short lived secure partition is inaccessible by superusers or other applications. System resources comprising physical resources and virtual allocations of the physical resources are allocated to the short lived secure partition. Hardware and software components needed to run the secure application are loaded into the short lived secure partition. | 09-05-2013 |
20130232503 | AUTHORIZING DISTRIBUTED TASK PROCESSING IN A DISTRIBUTED STORAGE NETWORK - A method begins by a distributed storage (DS) processing module transmitting a set of requests to a set of DS units regarding a set of data elements and receiving a set of respective requests from the set of DS units. When the set of respective requests is in accordance with a current distributed task/data responsibility allocation period, the method continues with the DS processing module issuing a set of responses to the set of DS units. The method continues with the DS processing module receiving a set of respective responses from the set of DS units. When the set of received respective responses is in accordance with the current distributed task/data responsibility allocation period, the method continues with the DS processing module processing the set of received respective responses in accordance with the current distributed task/data responsibility allocation period to produce one of a set of results. | 09-05-2013 |
20130239114 | Fine Grained Adaptive Throttling of Background Processes - Approaches for throttling background processes to a high degree of precision. The utilization of a shared resource that is used by one or more background processes is monitored. The frequency at which the one or more background processes are executed is dynamically adjusted based on the current utilization of the shared resource, without adjusting the frequency at which one or more foreground processes are executed, to ensure that the utilization of the shared resource does not exceed a threshold value. The monitoring of the utilization of the shared resource may be performed more often than the adjustment of the frequency at which the background processes are executed, and may occur many times a second. Consequently, the utilization of the shared resource may be kept above a certain level (such as 65%) and below another level (such as 90%) when background processes are executing. | 09-12-2013 |
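One plausible shape for the utilization-driven adjustment the entry above describes is a simple high/low-water-mark controller on the delay between background runs. The 65%/90% levels echo the abstract; the doubling/halving policy and all names are assumptions:

```python
def adjust_background_interval(utilization, interval, low=0.65, high=0.90,
                               min_interval=0.01, max_interval=10.0):
    """Return a new delay between background-process runs.

    Above the high-water mark, background work is slowed down sharply;
    below the low-water mark, it is sped up; in between, it is left alone.
    Foreground processes are never touched by this adjustment.
    """
    if utilization >= high:
        interval = min(max_interval, interval * 2)   # back off hard
    elif utilization < low:
        interval = max(min_interval, interval / 2)   # ramp back up
    return interval
```

Calling this many times a second (as the abstract suggests for monitoring) keeps utilization oscillating inside the target band.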
20130239115 | PROCESSING SYSTEM - A processing system includes a process request queue that corresponds to a process group and additionally stores an arriving process request addressed to the process group, at least one processor that belongs to the process group, and that, upon being enabled to receive a new process request, retrieves a process request from the process request queue, and processes the retrieved process request, and a monitoring unit that monitors a process load of the process group, and that, upon determining through monitoring that the process load of the process group becomes lower than a predetermined contraction threshold value, issues a group contraction instruction to the process group. | 09-12-2013 |
20130239116 | SYSTEMS AND METHODS FOR SPILLOVER IN A MULTI-CORE SYSTEM - The present invention is directed towards systems and methods for spillover threshold management in a multi-core system. A pool manager divides the spillover threshold limit of connections for vServers into an exclusive quota pool and a shared quota pool. Each vServer operating on a core is allocated an exclusive number of connections from the exclusive quota pool. If a vServer wishes to create connections beyond its exclusive number, the vServer can borrow from the shared quota pool. When the vServers are using at least a first predetermined threshold of their exclusive number of connections and the number of available connections in the shared quota pool has reached a second predetermined threshold, the multi-core system establishes a backup vServer. | 09-12-2013 |
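The exclusive/shared quota split in the entry above can be sketched as a pair of counters per vServer plus one shared pool. Per-core locking, returning borrowed connections, and the backup-vServer trigger are omitted; all names are hypothetical:

```python
class QuotaPool:
    """Splits a spillover threshold into per-vServer exclusive quotas
    plus a shared pool that any vServer may borrow from."""

    def __init__(self, total_limit, num_vservers, exclusive_per_vserver):
        self.exclusive = [exclusive_per_vserver] * num_vservers
        self.shared = total_limit - exclusive_per_vserver * num_vservers
        self.used = [0] * num_vservers
        self.borrowed = [0] * num_vservers

    def open_connection(self, vserver):
        if self.used[vserver] < self.exclusive[vserver]:
            self.used[vserver] += 1       # still within the exclusive quota
            return True
        if self.shared > 0:
            self.shared -= 1              # borrow from the shared quota pool
            self.borrowed[vserver] += 1
            return True
        return False                      # spillover threshold reached
```

With a total limit of 10 split as 3 exclusive connections for each of 2 vServers, the shared pool holds the remaining 4, and a busy vServer can consume 3 + 4 = 7 connections before spillover.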
20130239117 | MANAGING OPERATION REQUESTS USING DIFFERENT RESOURCES - Provided is a method for managing operation requests using different resources. In one embodiment, a first queue is provided for operations which utilize a first resource of a first and second resource. A second queue is provided for operations which utilize the second resource. An operation is queued on the first queue until the first resource is acquired. The first resource is released if the second resource is not also acquired. The operation is queued on the second queue when the first resource is acquired but the second resource is not. In addition, the first resource is released until the operation acquires both the first resource and the second resource. | 09-12-2013 |
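The release-if-second-unavailable rule in the entry above is the classic back-off answer to hold-and-wait deadlock: never keep the first resource while blocking indefinitely on the second. A minimal sketch (the function name and timeout are assumptions; the patent's queueing structures are reduced to lock waits):

```python
import threading  # works with any lock-like objects, e.g. threading.Lock


def acquire_both(first, second, timeout=0.01):
    """Acquire two resources without deadlock: hold the first only if the
    second can also be taken promptly, otherwise release it and retry."""
    while True:
        first.acquire()                        # wait (queue) on the first resource
        if second.acquire(timeout=timeout):
            return                             # both resources now held
        first.release()                        # release rather than hold-and-wait
```

Because neither caller ever holds one resource while waiting forever on the other, two operations acquiring the pair in opposite orders cannot deadlock (though a livelock-avoidance backoff would be added in practice).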
20130239118 | METHOD AND SYSTEM FOR AN ATOMIZING FUNCTION OF A MOBILE DEVICE - Systems, apparatuses and methods are disclosed for apportioning tasks among devices. One such method is performed in a handheld wireless communication device (HWCD). The method includes discovering available resources in a network and dynamically assessing cost functions for performing a task on the HWCD and on each of the discovered resources. Each of the respective cost functions is based on performance factors associated with the HWCD or with one of the discovered resources. Based on changes in the cost functions, the task is apportioned for local execution by the HWCD or remote execution by the available resources. | 09-12-2013 |
20130247059 | CALCULATING AND COMMUNICATING LEVEL OF CARBON OFFSETTING REQUIRED TO COMPENSATE FOR PERFORMING A COMPUTING TASK - During performance of a specified computing task, data concerning the resource consumption of that task is gathered and stored. Upon completion of the task, the amount of carbon offset required to compensate for the resource consumption associated with its performance is calculated based upon stored or known resource consumption data. The calculated amount of carbon offset may be transmitted to a carbon offset function provider, which implements the specified amount of carbon offset based upon the calculated amounts communicated for the completed task. | 09-19-2013 |
20130247060 | APPARATUS AND METHOD FOR PROCESSING THREADS REQUIRING RESOURCES - A data processing apparatus has processing circuitry for processing threads using resources accessible to the processing circuitry. Thread handling circuitry handles pending threads which are waiting for resources required for processing. When a request is made for a resource which is not available, a lock is set to ensure that once the resource becomes available, the resource remains available until the lock is removed. This prevents other threads reallocating the resource. When a subsequent pending thread requests access to the same locked unavailable resource, the lock is transferred to that subsequent thread so that the latest thread accessing that resource is considered the lock owning thread. The lock is removed once the lock owning thread is ready for processing. | 09-19-2013 |
20130247061 | METHOD AND APPARATUS FOR THE SCHEDULING OF COMPUTING TASKS - Described herein are methods and related apparatus for the allocation of computing resources to perform computing tasks. The methods described herein may be used to allocate computing tasks to many different types of computing resources, such as processor cores, individual computers, and virtual machines. Characteristics of the available computing resources, as well as other aspects of the computing environment, are modeled in a multidimensional coordinate system. Each coordinate point in the coordinate system corresponds to a unique combination of attributes of the computing resources/computing environment, and each coordinate point is associated with a weight that indicates the relative desirability of the coordinate point. To allocate a computing resource to execute a task, the weights of the coordinate points, as well as other related factors, are analyzed. | 09-19-2013 |
20130247062 | VERIFYING SYNCHRONIZATION COVERAGE IN LOGIC CODE - A computer implemented system and method for measuring synchronization coverage for one or more concurrently executed threads is provided. The method comprises updating an identifier of a first thread to comprise an operation identifier associated with a first operation, in response to determining that the first thread has performed the first operation; associating the identifier of the first thread with one or more resources accessed by the first thread; and generating a synchronization coverage model by generating a relational data structure of said one or more resources, wherein a resource is associated with at least the identifier of the first thread and an identifier of a second thread, such that the second thread waits for the first thread before accessing said resource. | 09-19-2013 |
20130247063 | COMPUTING DEVICE AND METHOD FOR MANAGING MEMORY OF VIRTUAL MACHINES - In a method for managing memory of virtual machines in a computing device, a user request for allocating a specified amount of memory of the computing device to a virtual machine is received. If the available memory of the computing device is less than the specified amount of memory, total idle memory of all the virtual machines in the computing device is calculated. If the total idle memory is less than the specified amount of memory, an average release memory of the virtual machines in the computing device is calculated. The idle memory of the virtual machines is released according to the average release memory. | 09-19-2013 |
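The averaging step in the entry above can be illustrated with simple arithmetic. The abstract leaves the exact branch conditions and averaging rule open; this sketch takes the common-sense variant in which the shortfall is shared equally across the VMs, capped by each VM's idle memory, and fails when even the total idle memory cannot cover the request (all names are assumptions):

```python
def plan_memory_release(available, requested, idle_per_vm):
    """Decide how much idle memory each VM must release to satisfy a request.

    Returns a per-VM list of release amounts, or None if the total idle
    memory of all VMs cannot cover the shortfall.
    """
    shortfall = requested - available
    if shortfall <= 0:
        return [0] * len(idle_per_vm)   # request fits without any release
    total_idle = sum(idle_per_vm)
    if total_idle < shortfall:
        return None                     # cannot satisfy the request
    average = shortfall / len(idle_per_vm)
    # Each VM gives up its idle memory, up to the average share.
    return [min(idle, average) for idle in idle_per_vm]
```

For example, a 10-unit request against 4 units of free host memory and three VMs each holding 4 idle units yields an average release of 2 units per VM.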
20130247064 | SYSTEM AND METHOD OF CO-ALLOCATING A RESERVATION SPANNING DIFFERENT COMPUTE RESOURCES TYPES - Co-allocating resources within a compute environment includes: receiving a request for a reservation for a first type of resource; analyzing constraints and guarantees associated with the first type of resource; identifying a first group of resources that meet the request for the first type of resource and storing it in a first list; receiving a request for a reservation for a second type of resource; analyzing constraints and guarantees associated with the second type of resource; identifying a second group of resources that meet the request for the second type of resource and storing it in a second list; calculating a co-allocation parameter between the first group of resources and the second group of resources; and reserving resources according to the calculated co-allocation parameter of the first group of resources and the second group of resources. The request may also request exclusivity of the reservation. | 09-19-2013 |
20130247065 | APPARATUS AND METHOD FOR EXECUTING MULTI-OPERATING SYSTEMS - An apparatus and method for executing multi-operating systems (OS) are provided. Resources allocated to the respective multi-OSs are managed by management applications of the multi-OSs. A processor executes a plurality of multi-OSs. Each of the plurality of multi-OSs executes the management application. Each of the plurality of multi-OSs regards a resource held by another multi-OS among the plurality of multi-OSs as used by the corresponding management application, thereby preventing the resource from being allocated to another application included in the multi-OS. | 09-19-2013 |
20130247066 | Process Scheduler Employing Adaptive Partitioning of Process Threads - A system includes a processor and memory storage units storing software code. The software code comprises code for a scheduling system and for generating a plurality of adaptive partitions that are each associated with one or more process threads and that each have a corresponding processor budget. The code also is executable to, when the system is under a normal load, allocate the processor to one of the threads that is in a ready state and has the highest priority among the process threads that are in a ready state. The code is also executable to, when the system is in overload, allocate the processor to one of the process threads that is in a ready state and has the highest priority among the process threads that are in a ready state and for which the adaptive partition that the process thread is associated with has available guaranteed processor budget. | 09-19-2013 |
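The two selection rules in the entry above (normal load vs. overload) fit in a few lines: highest-priority ready thread wins, but under overload only partitions with remaining guaranteed budget are eligible. A sketch with assumed data shapes (tuples and a budget dict, not the patent's actual structures):

```python
def pick_thread(threads, budgets, overloaded):
    """Select the next thread to run under adaptive partitioning.

    threads:  list of (name, priority, partition, ready) tuples.
    budgets:  dict mapping partition -> remaining guaranteed processor budget.
    Returns the name of the chosen thread, or None if nothing is runnable.
    """
    eligible = [t for t in threads if t[3]]                    # ready threads only
    if overloaded:
        # In overload, restrict to partitions that still have budget left.
        eligible = [t for t in eligible if budgets[t[2]] > 0]
    if not eligible:
        return None
    return max(eligible, key=lambda t: t[1])[0]                # highest priority wins
```

With thread `b` (priority 9) in an exhausted partition, normal load still runs `b`, while overload falls back to the best thread in a partition with budget.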
20130254776 | METHOD TO REDUCE QUEUE SYNCHRONIZATION OF MULTIPLE WORK ITEMS IN A SYSTEM WITH HIGH MEMORY LATENCY BETWEEN PROCESSING NODES - A method efficiently dispatches/completes a work element within a multi-node, data processing system that has a global command queue (GCQ) and at least one high latency node. The method comprises: at the high latency processor node, work scheduling logic establishing a local command/work queue (LCQ) in which multiple work items for execution by local processing units can be staged prior to execution; a first local processing unit retrieving via a work request a larger chunk size of work than can be completed in a normal work completion/execution cycle by the local processing unit; storing the larger chunk size of work retrieved in a local command/work queue (LCQ); enabling the first local processing unit to locally schedule and complete portions of the work stored within the LCQ; and transmitting a next work request to the GCQ only when all the work within the LCQ has been dispatched by the local processing units. | 09-26-2013 |
20130254777 | DYNAMIC RUN TIME ALLOCATION OF DISTRIBUTED JOBS WITH APPLICATION SPECIFIC METRICS - A job optimizer dynamically changes the allocation of processing units on a multi-nodal computer system. A distributed application is organized as a set of connected processing units. The arrangement of the processing units is dynamically changed at run time to optimize system resources and interprocess communication. A collector collects application specific metrics determined by application plug-ins. A job optimizer analyzes the collected metrics and determines how to dynamically arrange the processing units within the jobs. The job optimizer may determine to combine multiple processing units into a job on a single node when there is an overutilization of an interprocess communication between processing units. Alternatively, the job optimizer may determine to split a job's processing units into multiple jobs on different nodes where one or more of the processing units are over utilizing the resources on the node. | 09-26-2013 |
20130263148 | MANAGING A SET OF RESOURCES - In one example, a controller manages a set of resources. A first structure has a first entry statically associated with one of the resources. A second structure has a second entry dynamically associated with one of the resources. A resource sharing mechanism borrows, for the second structure, an idle resource associated with the first structure. | 10-03-2013 |
20130263149 | Dynamically Adjusting Global Heap Allocation in Multi-Thread Environment - Global heap allocation technologies in a multi-thread environment, and particularly a method and system for dynamically adjusting global heap allocation in the multi-thread environment by monitoring conflict parameters of the global heap allocation method. The present invention provides a method of dynamically adjusting global heap allocation in a multi-thread environment, comprising: identifying a global heap allocation method in an application program; judging whether the global heap allocation method is a multi-thread conflict hot point; and using a local stack to allocate the memory space requested by the global heap allocation method in response to a positive judgment. The method dynamically adjusts the program's intrinsic global heap allocation according to the real-time running state, reducing lock contention on the global heap and effectively improving resource allocation efficiency and resource utilization. | 10-03-2013 |
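The hot-point switch in the entry above, from a shared locked allocation path to thread-local allocation once conflicts cross a threshold, can be modeled in miniature. Python cannot actually replace its memory allocator, so this toy only captures the decision logic; the threshold and all names are assumptions:

```python
import threading


class AdaptiveAllocator:
    """Switches an allocation site from a shared (locked) pool to
    per-thread pools once lock conflicts cross a hot-point threshold."""

    def __init__(self, conflict_threshold=3):
        self._lock = threading.Lock()
        self._conflicts = 0                 # monitored conflict parameter
        self._threshold = conflict_threshold
        self._local = threading.local()
        self.use_local = False

    def allocate(self, size):
        if self.use_local:
            # Thread-local path: no lock, hence no contention at all.
            pool = getattr(self._local, "pool", None)
            if pool is None:
                pool = self._local.pool = []
            block = bytearray(size)
            pool.append(block)
            return block
        # Shared path: count a failed non-blocking acquire as one conflict.
        if not self._lock.acquire(blocking=False):
            self._conflicts += 1
            self._lock.acquire()
        try:
            if self._conflicts >= self._threshold:
                self.use_local = True       # this site is now a conflict hot point
            return bytearray(size)
        finally:
            self._lock.release()
```

Once enough contention is observed, subsequent allocations bypass the global lock entirely, which is the efficiency gain the abstract claims.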
20130263150 | AUTOMATED ALLOCATION OF RESOURCES TO FUNCTIONAL AREAS OF AN ENTERPRISE ACTIVITY ENVIRONMENT - A computer implemented method, system and/or computer program product automatically allocates resources to functional areas of an enterprise activity environment. A skill level of a resource is determined for multiple functional areas. An affinity index is created and associated with each of the multiple functional areas, wherein the affinity index is based on a level of productivity drop of other resources in a specific functional area if the resource is assigned to another functional area. Expected resource and skill level requirements of a project are identified. The resource is automatically allocated to one or more functional areas based on the affinity index associated with a particular functional area in view of the expected resource and skill level requirements. | 10-03-2013 |
20130268940 | AUTOMATING WORKLOAD VIRTUALIZATION - A system, and a corresponding method enabled by and implemented on that system, automatically calculates and compares costs for hosting workloads in virtualized or non-virtualized platforms. The system allows a service user (i.e., a customer) to decide how best to have workloads hosted by apportioning costs that are least sensitive to workload placement decisions and by providing robust and repeatable cost estimates. The system compares the costs of hosting a workload in virtualized and non-virtualized environments; separates workloads into categories including those that should be virtualized and those that should not, and determines the amount of physical resources to cost-effectively host a set of workloads. | 10-10-2013 |
20130268941 | DETERMINING AN ALLOCATION OF RESOURCES TO ASSIGN TO JOBS OF A PROGRAM - A performance model is used to calculate a performance parameter based on characteristics of a collection of jobs that make up a program, a number of map tasks in the jobs, a number of reduce tasks in the jobs, and an allocation of resources, where the jobs include the map tasks and the reduce tasks, the map tasks producing intermediate results based on segments of input data, and the reduce tasks producing an output based on the intermediate results. Using a value of the performance parameter calculated by the performance model, a particular allocation of resources is determined to assign to the jobs of the program to meet a performance goal of the program. | 10-10-2013 |
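A simple bounds-based performance model of the kind 20130268941 describes can be sketched as follows. This uses the standard makespan bounds for n tasks greedily scheduled on k slots; the function names and the choice of this particular formula are illustrative assumptions, not taken from the claims.

```python
def estimate_completion(num_tasks, num_slots, avg_dur, max_dur):
    """Analytic makespan bounds for num_tasks tasks greedily assigned
    to num_slots slots: lower bound n*avg/k, upper bound (n-1)*avg/k + max."""
    low = num_tasks * avg_dur / num_slots
    up = (num_tasks - 1) * avg_dur / num_slots + max_dur
    return low, up

def job_estimate(n_map, n_red, map_slots, red_slots,
                 map_avg, map_max, red_avg, red_max):
    """Bound a two-phase (map then reduce) job by summing phase bounds."""
    ml, mu = estimate_completion(n_map, map_slots, map_avg, map_max)
    rl, ru = estimate_completion(n_red, red_slots, red_avg, red_max)
    return ml + rl, mu + ru
```

An allocator can then search over candidate slot counts until the upper bound meets the job's performance goal.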
20130268942 | METHODS AND APPARATUS FOR AUTO-THROTTLING ENCAPSULATED COMPUTE TASKS - Systems and methods for auto-throttling encapsulated compute tasks. A device driver may configure a parallel processor to execute compute tasks in a number of discrete throttled modes. The device driver may also allocate memory to a plurality of different processing units in a non-throttled mode. The device driver may also allocate memory to a subset of the plurality of processing units in each of the throttled modes. Data structures defined for each task include a flag that instructs the processing unit whether the task may be executed in the non-throttled mode or in a throttled mode. A work distribution unit monitors each of the tasks scheduled to run on the plurality of processing units and determines whether the processor should be configured to run in a throttled mode or in the non-throttled mode. | 10-10-2013 |
20130268943 | BALANCED PROCESSING USING HETEROGENEOUS CORES - Technologies are generally described for a multi-processor core and a method for transferring threads in a multi-processor core. In an example, a multi-core processor may include a first group including a first core and a second core. A first sum of the operating frequencies of the cores in the first group corresponds to a first total operating frequency. The multi-core processor may further include a second group including a third core. A second sum of the operating frequencies of the cores in the second group may correspond to a second total operating frequency that is substantially the same as the first total operating frequency. A hardware controller may be configured in communication with the first, second and third core. A memory may be configured in communication with the hardware controller and may include an indication of at least the first group and the second group. | 10-10-2013 |
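Forming core groups whose total operating frequencies are substantially the same, as in 20130268943, can be approximated with a greedy largest-first partition. This is an illustrative heuristic under assumed names, not the patented grouping procedure.

```python
def group_cores(freqs, num_groups=2):
    """Greedily partition cores (given by their operating frequencies)
    into num_groups groups with approximately equal frequency sums:
    place each core, largest frequency first, into the lightest group."""
    groups = [[] for _ in range(num_groups)]
    totals = [0.0] * num_groups
    for core, f in sorted(enumerate(freqs), key=lambda x: -x[1]):
        i = totals.index(min(totals))   # lightest group so far
        groups[i].append(core)
        totals[i] += f
    return groups, totals
```

For example, cores at 3.0, 2.0, 1.0, and 2.0 GHz split into two groups of 4.0 GHz each, so a thread can be moved between groups without changing the total frequency it sees.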
20130268944 | Dynamically Building Application Environments in a Computational Grid - Computing environments within a grid computing system are dynamically built in response to specific job resource requirements from a grid resource allocator, including activating needed hardware, provisioning operating systems, application programs, and software drivers. Optimally, prior to building a computing environment for a particular job, cost/revenue analysis is performed, and if operational objectives would not be met by building the environment and executing the job, a job sell-off process is initiated. | 10-10-2013 |
20130275989 | CONTROLLER FOR MANAGING A RESET OF A SUBSET OF THREADS IN A MULTI-THREAD SYSTEM - An integrated circuit device includes a processor core and a controller. The processor core issues a first command intended for a first thread of a plurality of threads. The controller de-allocates hardware resources of the controller that are allocated to the first thread during a thread reset process for the first thread, returns a specified value to the processor core in response to the first command during the thread reset process, drops responses intended for the first thread from other devices during the thread reset process, completes the thread reset process in response to a determination that all expected responses intended for the first thread have been either received or dropped, and, during the thread reset process, continues to issue requests to other devices in response to commands from other threads of the plurality of threads and to process the corresponding responses. | 10-17-2013 |
20130275990 | ALLOCATING OPTIMIZED RESOURCES FOR COMPONENTS BASED ON DERIVED COMPONENT PROFILES - Systems, methods and techniques relating to publishing mobile applications are described. A described technique includes identifying, at a second component container contained in a first component container, a first component container profile associated with the first component container, translating at least a portion of the first component container profile to a second component container profile associated with the second component container, and initializing the second component container based, at least in part, on the second component container profile. | 10-17-2013 |
20130275991 | APPARATUS AND METHOD FOR ALLOCATING TASKS IN A NODE OF A TELECOMMUNICATION NETWORK - A method of allocating tasks in a node of a telecommunication network, wherein the node comprises a main processing unit which is configured to process tasks in association with one or more of a plurality of peripheral processing units, the peripheral processing units arranged in a hierarchical tree topology comprising one or more branches at one or more hierarchical levels. The method comprises the steps of: receiving a request to process a task; determining a temperature status of branches in the hierarchical tree topology, wherein the temperature status of a branch is related to the temperature of a processing unit coupled to the branch; and allocating the task to one or more processing units, based on the temperature status of the branches in the hierarchical tree topology. | 10-17-2013 |
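The temperature-status allocation of 20130275991 reduces, in its simplest form, to choosing the branch whose processing units are coolest. A hypothetical sketch, assuming a branch's temperature status is the temperature of its hottest processing unit:

```python
def allocate_task(branches):
    """branches: dict mapping branch name -> list of processing-unit
    temperatures (degrees). Return the branch whose hottest unit is
    coolest, i.e. the branch with the most thermal headroom."""
    return min(branches, key=lambda b: max(branches[b]))
```

A fuller implementation would recurse down the hierarchical tree, re-evaluating the temperature status at each level before descending.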
20130275992 | DISTRIBUTED PROCESSING SYSTEM, DISTRIBUTED PROCESSING METHOD, AND DISTRIBUTED PROCESSING PROGRAM - The present invention includes application execution units ( | 10-17-2013 |
20130275993 | SYSTEM AND METHOD FOR DYNAMIC RESCHEDULING OF MULTIPLE VARYING RESOURCES WITH USER SOCIAL MAPPING - A system and method for scheduling resources includes a memory storage device having a resource data structure stored therein which is configured to store a collection of available resources, time slots for employing the resources, dependencies between the available resources and social map information. A processing system is configured to set up a communication channel between users, between a resource owner and a user or between resource owners to schedule users in the time slots for the available resources. The processing system employs social mapping information of the users or owners to assist in filtering the users and owners and initiating negotiations for the available resources. | 10-17-2013 |
20130283286 | APPARATUS AND METHOD FOR RESOURCE ALLOCATION IN CLUSTERED COMPUTING ENVIRONMENT - An apparatus for resource allocation in a clustered computing environment includes: a node search unit configured to search for a node corresponding to necessary resources required for running a job requested by a user, within an available resource group of the clustered computing environment; a node existence determination unit configured to determine whether or not there exists a node having the necessary resources available; and a resource changing unit configured to change at least one of the necessary resources to alternative resources based on a preset priority and then allocate the alternative resource, when it is determined that there is no node having the necessary resources available. | 10-24-2013 |
20130283287 | GENERATING MONOTONE HASH PREFERENCES - Selecting a resource to fulfill a resource requirement is disclosed. For each resource requirement, a resource-specific affinity value is computed with respect to each of a plurality of resources. A bias is applied to each of at least a subset of the resource-specific affinity values. The biased, as applicable, resource-specific affinity values are sorted into a resource preference list. The sorted preference list is used to select a resource to fulfill the resource requirement. | 10-24-2013 |
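The biased affinity-sorting in 20130283287 resembles rendezvous (highest-random-weight) hashing. Below is a hedged Python sketch; `affinity`, `preference_list`, and `select` are hypothetical names, not from the claims.

```python
import hashlib

def affinity(requirement, resource):
    """Resource-specific affinity: a stable hash of the (requirement,
    resource) pair, so each requirement gets its own preference order."""
    digest = hashlib.sha256(f"{requirement}|{resource}".encode()).hexdigest()
    return int(digest, 16)

def preference_list(requirement, resources, bias=None):
    """Apply an optional per-resource bias to each affinity value, then
    sort resources by biased affinity, highest first."""
    bias = bias or {}
    scored = [(affinity(requirement, r) * bias.get(r, 1.0), r)
              for r in resources]
    return [r for _, r in sorted(scored, reverse=True)]

def select(requirement, resources, bias=None):
    """Pick the top of the sorted preference list to fulfill the requirement."""
    return preference_list(requirement, resources, bias)[0]
```

The monotone property falls out of per-pair hashing: removing a resource other than the selected one never changes which resource is selected, since the surviving scores are unchanged.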
20130283288 | SYSTEM RESOURCE CONSERVING METHOD AND OPERATING SYSTEM THEREOF - A system resource conserving method for managing an application process executing on an electronic device, wherein the electronic device has a combination of system resources, the method comprising: (A) executing a central management process for managing utilization of the system resource by the application processes; (B) receiving by the central management process a task completion message from one of the application processes; and (C) selectively transmitting by the central management process a terminate message or a suspend message to the application process according to the task completion message in order to terminate or suspend the execution of the application process such that the application process stops using the system resources. | 10-24-2013 |
20130290976 | SCHEDULING MAPREDUCE JOB SETS - Determining a schedule of a batch workload of MapReduce jobs is disclosed. A set of multi-stage jobs for processing in a MapReduce framework is received, for example, in a master node. Each multi-stage job includes a duration attribute, and each duration attribute includes a stage duration and a stage type. The MapReduce framework is separated into a plurality of resource pools. The multi-stage jobs are separated into a plurality of subgroups corresponding with the plurality of pools. Each subgroup is configured for concurrent processing in the MapReduce framework. The multi-stage jobs in each of the plurality of subgroups are placed in an order according to increasing stage duration. For each pool, the multi-stage jobs in increasing order of stage duration are sequentially assigned from either a front of the schedule or a tail of the schedule by stage type. | 10-31-2013 |
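One plausible reading of the front/tail assignment in 20130290976, sketched in Python under the assumption that each job is summarized by a dominant stage duration and a 'map' or 'reduce' stage type (an illustrative heuristic, not the claimed algorithm):

```python
def order_jobs(jobs):
    """jobs: list of (name, stage_duration, stage_type) tuples, with
    stage_type either 'map' or 'reduce'. Walk jobs in increasing stage
    duration; map-dominant jobs fill the schedule from the front,
    reduce-dominant jobs fill it from the tail (Johnson-rule-like)."""
    front, tail = [], []
    for name, duration, stage_type in sorted(jobs, key=lambda j: j[1]):
        if stage_type == 'map':
            front.append(name)        # shortest map stages first
        else:
            tail.insert(0, name)      # shortest reduce stages last
    return front + tail
```

The effect is that short map stages start the pipeline quickly and short reduce stages finish it quickly, shrinking idle time at both ends of the batch.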
20130290977 | Computational Resource Allocation System And A Method For Allocating Computational Resources For Executing A Scene Graph Based Application At Run Time - A computational resource allocation system for allocating computational resources to various modules of a scene graph based application while the application is being executed may include a module mapper for receiving a set of modules to be used in the scene graph based application from a module repository and a set of computational resources available to process the modules from a computational resource repository, and mapping the set of modules onto the set of computational resources to generate a mapping, and an allocation manager configured to allocate the modules to the set of computational resources based on the mapping. | 10-31-2013 |
20130290978 | System Partitioning To Present Software As Platform Level Functionality - Embodiments of apparatuses, methods for partitioning systems, and partitionable and partitioned systems are disclosed. In one embodiment, a system includes processors and a partition manager. The partition manager is to allocate a subset of the processors to a first partition and another subset of the processors to a second partition. The first partition is to execute first operating system level software and the second partition is to execute second operating system level software. The first operating system level software is to manage the processors in the first partition as resources individually accessible to the first operating system level software, and the second operating system level software is to manage the processors in the second partition as resources individually accessible to the second operating system level software. The partition manager is also to present the second partition, including the second operating system level software, to the first operating system level software as platform level functionality embedded in the system. | 10-31-2013 |
20130290979 | DATA TRANSFER CONTROL METHOD OF PARALLEL DISTRIBUTED PROCESSING SYSTEM, PARALLEL DISTRIBUTED PROCESSING SYSTEM, AND RECORDING MEDIUM - A parallel distributed processing system includes multiple parallel distributed processing execution servers, which store pre-divided data blocks in a storage device and execute tasks processing the data blocks in parallel, and a management computer controlling the multiple parallel distributed processing execution servers. The management computer collects resource use amounts of the multiple parallel distributed processing execution servers, acquires states of data blocks and tasks held by the multiple parallel distributed processing execution servers, selects a second parallel distributed processing execution server to transfer a data block to a first parallel distributed processing execution server, based on the processing progress of the data blocks held by the multiple parallel distributed processing execution servers and their resource use amounts, and transmits to the selected second parallel distributed processing execution server a command to transfer the data block to the first parallel distributed processing execution server. | 10-31-2013 |
20130298133 | TECHNIQUE FOR COMPUTATIONAL NESTED PARALLELISM - One embodiment of the present invention sets forth a technique for performing nested kernel execution within a parallel processing subsystem. The technique involves enabling a parent thread to launch a nested child grid on the parallel processing subsystem, and enabling the parent thread to perform a thread synchronization barrier on the child grid for proper execution semantics between the parent thread and the child grid. This technique advantageously enables the parallel processing subsystem to perform a richer set of programming constructs, such as conditionally executed and nested operations and externally defined library functions without the additional complexity of CPU involvement. | 11-07-2013 |
20130298134 | System and Method for a Self-Optimizing Reservation in Time of Compute Resources - A system and method of dynamically controlling a reservation of resources within a cluster environment to improve a response time are disclosed. The method embodiment of the invention comprises receiving from a requestor a request for a reservation of resources in the cluster environment, reserving a first group of resources, evaluating resources within the cluster environment to determine if the response time can be improved, and if the response time can be improved, canceling the reservation for the first group of resources and reserving a second group of resources to process the request at the improved response time. | 11-07-2013 |
20130298135 | Dynamically Allocating Multitier Applications Based Upon Application Requirements and Performance Reliability of Resources - The present disclosure relates to dynamically allocating multitier applications based upon performance and reliability of resources. A controller analyzes resources and applications hosted by the resources, and collects operational data relating to the applications and resources. The controller is configured to determine an allocation scheme for allocating or reallocating the applications upon failure of a resource and/or upon rollout or distribution of a new application. The controller generates configuration data that describes steps for implementing the allocation scheme. The resources are monitored, in some embodiments, by monitoring devices. The monitoring devices collect and report the operational information and generate alarms if resources fail. | 11-07-2013 |
20130298136 | MULTIPROCESSOR SYSTEM - A multiprocessor system includes plural processing parts configured to execute a program stored in a program memory; a common resource shared by the processing parts; a resource status table in which an occupation status of the common resource is written; a resource access table in which address areas are associated with occupation manners of the common resource on a function basis of the program stored in the program memory; and a controlling part configured to determine whether to permit execution of a function which involves occupation of the common resource by one of the processing parts using the resource status table and the resource access table. | 11-07-2013 |
20130305256 | Systems And Methods To Allocate Application Tasks To A Pool Of Processing Machines - Systems and methods are provided to allocate application tasks to a pool of processing machines. According to some embodiments, a requestor generates a scope request including an indication of a number of compute units to be reserved. The requestor also provides an application request associated with the scope. A subset of available processing machines may then be allocated to the scope, and the application request is divided into a number of different tasks. Each task may then be assigned to a processing machine that has been allocated to the application request. According to some embodiments, each task is associated with a deadline. Moreover, according to some embodiments an overall cost is determined and then allocated to the requestor based on the number of compute units that were reserved for the scope. | 11-14-2013 |
20130305257 | SCHEDULING METHOD AND SCHEDULING SYSTEM - A scheduling method is executed by a given CPU among multiple CPUs. The scheduling method includes subtracting, for each of the CPUs, the number of processes assigned to the CPU from the maximum number of speculative processes that can be assigned to each of the CPUs; summing the results yielded at the subtracting to yield a total number of speculative processes; and assigning to the CPUs a number of speculative processes that is less than or equal to the total number of speculative processes. | 11-14-2013 |
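The arithmetic in 20130305257 is direct: per-CPU headroom is the maximum number of speculative processes minus those already assigned, and the budget is the sum of the headrooms. A minimal sketch (names are illustrative):

```python
def total_speculative(max_speculative, assigned):
    """Per-CPU headroom is max minus currently assigned processes;
    the speculative budget is the sum of headrooms over all CPUs."""
    return sum(m - a for m, a in zip(max_speculative, assigned))
```

For three CPUs each capped at 4 speculative processes and currently running 1, 2, and 3 processes, the scheduler may assign at most 3 + 2 + 1 = 6 speculative processes in total.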
20130311999 | RESOURCE MANAGEMENT SUBSYSTEM THAT MAINTAINS FAIRNESS AND ORDER - One embodiment of the present disclosure sets forth an effective way to maintain fairness and order in the scheduling of common resource access requests related to replay operations. Specifically, a streaming multiprocessor (SM) includes a total order queue (TOQ) configured to schedule the access requests over one or more execution cycles. Access requests are allowed to make forward progress when needed common resources have been allocated to the request. Where multiple access requests require the same common resource, priority is given to the older access request. Access requests may be placed in a sleep state pending availability of certain common resources. Deadlock may be avoided by allowing an older access request to steal resources from a younger resource request. One advantage of the disclosed technique is that older common resource access requests are not repeatedly blocked from making forward progress by newer access requests. | 11-21-2013 |
20130312000 | ORCHESTRATING COMPETING ACTIVITIES FOR SCHEDULING ACTIONS OF MULTIPLE NODES IN A DISTRIBUTED ENVIRONMENT - Automatic programming, scheduling, and control of planned activities at “worker nodes” in a distributed environment are provided by a “real-time self tuner” (RTST). The RTST provides self-tuning of controlled interoperation among an interconnected set of distributed components (i.e., worker nodes) including, for example, home appliances, security systems, lighting, sensor networks, medical electronic devices, wearable computers, robotics, industrial controls, wireless communication systems, audio nets, distributed computers, toys, games, etc. The RTST acts as a centralized “planner” that is either one of the nodes or a dedicated computing device. A set of protocols allow applications to communicate with the nodes, and allow one or more nodes to communicate with each other. Self-tuning of the interoperation and scheduling of tasks to be performed at each node uses an on-line sampling driven statistical model and predefined node “behavior patterns” to predict and manage resource requirements needed by each node for completing assigned tasks. | 11-21-2013 |
20130312001 | TASK ALLOCATION OPTIMIZATION SYSTEM, TASK ALLOCATION OPTIMIZATION METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING TASK ALLOCATION OPTIMIZATION PROGRAM - A state evaluation function value generation unit | 11-21-2013 |
20130312002 | SCHEDULING METHOD AND SCHEDULING SYSTEM - A scheduling method executed by a scheduler that manages multiple processors includes detecting, based on an application information table when a first application is started up, a processor that executes a second application that is not executed concurrently with the first application; and assigning the first application to the processor. | 11-21-2013 |
20130312003 | METHOD AND SYSTEM FOR DYNAMICALLY PARALLELIZING APPLICATION PROGRAM - Provided is a method and system for dynamically parallelizing an application program. Specifically, provided is a method and system having multi-core control that may verify a number of available threads according to an application program and dynamically parallelize data based on the verified number of available threads. The method and system for dynamically parallelizing the application program may divide a data block to be processed according to the application program based on a relevant data characteristic and dynamically map the threads to division blocks, and thereby enhance a system performance. | 11-21-2013 |
20130312004 | DISTRIBUTED SYSTEM, DEVICE, METHOD, AND PROGRAM - A distributed system includes: a plurality of ordinary nodes provided with reduced-power states having different times of recovery to a normal operating state; and a management node for assigning a job to an ordinary node for carrying out the job. The management node has: node select means for selecting an ordinary node from ordinary nodes each put in one of the reduced-power states, assigning a job to the selected ordinary node and driving the selected ordinary node to carry out the assigned job; and node control means for executing control to restore an ordinary node selected by the node select means to the normal operating state. The node select means selects an ordinary node from the ordinary nodes each put in one of the reduced-power states having different times of recovery to the normal operating state in accordance with an ordinary-node order starting with an ordinary node existing in a reduced-power state and having a short time of recovery to the normal operating state. | 11-21-2013 |
20130318534 | METHOD AND SYSTEM FOR LEVERAGING PERFORMANCE OF RESOURCE AGGRESSIVE APPLICATIONS - A simultaneous multithreading computing system obtains process information for the simultaneous multithreading computing system. The process information comprises a plurality of processes associated with the simultaneous multithreading computing system. The simultaneous multithreading computing system obtains resource information for the simultaneous multithreading computing system. The resource information comprises a plurality of available resources in the simultaneous multithreading system. The simultaneous multithreading computing system determines that a process from the plurality of processes is unscalable on the simultaneous multithreading computing system. Upon determining that the process is unscalable, the simultaneous multithreading computing system selects a resource to execute the unscalable process based on the resource information. Upon determining that a sibling resource is associated with the selected resource, the simultaneous multithreading computing system disconnects the sibling resource. | 11-28-2013 |
20130318535 | PRIMARY-BACKUP BASED FAULT TOLERANT METHOD FOR MULTIPROCESSOR SYSTEMS - A method of fault tolerance in a multiprocessor system based on primary-backup scheme includes: receiving a task to be allocated to a processor in a multiprocessor system; allocating a primary version of the task according to a normal real-time scheduling algorithm; checking validity of the allocation of the primary version of the task; allocating a backup version of the task with overloading; and checking validity of the allocation of the backup version of the task. | 11-28-2013 |
20130318536 | DYNAMIC SCHEDULING OF TASKS FOR COLLECTING AND PROCESSING DATA FROM EXTERNAL SOURCES - A scheduler manages execution of a plurality of data-collection jobs, assigns individual jobs to specific forwarders in a set of forwarders, and generates and transmits tokens (e.g., pairs of data-collection tasks and target sources) to assigned forwarders. The forwarder uses the tokens, along with stored information applicable across jobs, to collect data from the target source and forward it onto an indexer for processing. For example, the indexer can then break a data stream into discrete events, extract a timestamp from each event and index (e.g., store) the event based on the timestamp. The scheduler can monitor forwarders' job performance, such that it can use the performance to influence subsequent job assignments. Thus, data-collection jobs can be efficiently assigned to and executed by a group of forwarders, where the group can potentially be diverse and dynamic in size. | 11-28-2013 |
20130318537 | PREVENTING UNNECESSARY CONTEXT SWITCHING BY EMPLOYING AN INDICATOR ASSOCIATED WITH A LOCK ON A RESOURCE - A method of avoiding unnecessary context switching in a multithreaded environment. A thread of execution of a process waiting on a lock protecting access to a shared resource may wait for the lock to be released by executing in a loop, or “spin”. The waiting thread may continuously check, in a user mode of an operating system, an indicator of whether the lock has been released. After a certain time period, the thread may stop spinning and enter a kernel mode of the operating system. Subsequently, before going to sleep which entails costly context switching, the thread may perform an additional check of the indicator to determine whether the lock has been released. If this is the case, the thread returns to user mode and the unnecessary context switching is avoided. | 11-28-2013 |
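The spin-then-check-before-sleep pattern in 20130318537 can be sketched with a condition variable standing in for the kernel-mode sleep. This is illustrative only: the patent concerns OS-level user/kernel transitions and context switches, which Python cannot reproduce, and the class name and spin limit are assumptions.

```python
import threading

class SpinThenSleepLock:
    """Illustrative adaptive lock: spin for a bounded number of cheap
    checks ("user mode"); before blocking ("going to sleep"), re-check
    the lock once more so a just-released lock is taken without the
    cost of a context switch."""

    def __init__(self, spin_limit=100):
        self._cond = threading.Condition()
        self._held = False
        self._spin_limit = spin_limit

    def _try_acquire(self):
        with self._cond:
            if not self._held:
                self._held = True
                return True
        return False

    def acquire(self):
        # Phase 1: bounded spinning, no blocking wait.
        for _ in range(self._spin_limit):
            if self._try_acquire():
                return
        # Phase 2: final check, then sleep only if still held.
        with self._cond:
            while self._held:
                self._cond.wait()   # the avoided-if-possible "sleep"
            self._held = True

    def release(self):
        with self._cond:
            self._held = False
            self._cond.notify()
```

Phase 2's `while self._held` re-check is the patent's key observation: if the holder released between the last spin and entering the blocking path, the waiter proceeds without ever sleeping.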
20130318538 | ESTIMATING A PERFORMANCE CHARACTERISTIC OF A JOB USING A PERFORMANCE MODEL - A job profile is received ( | 11-28-2013 |
20130326531 | AUTOMATICALLY IDENTIFYING CRITICAL RESOURCES OF AN ORGANIZATION - A method and associated systems for automatically identifying critical resources in an organization. An organization creates a model of the dependencies between pairs of resource instances, wherein that model describes how the organization's projects and services are affected when a resource instance becomes unavailable. This model may be represented as a system of directed graphs. This model may be used to automatically identify a resource instance as “critical” when excessive cost is required to resume all projects and services rendered infeasible by the disruption of that resource instance. This model may also be used to automatically identify a resource instance as “critical for a resource type” when disruption of the resource instance forces the capacity of the resource type available to the entire organization to fall below a threshold value. | 12-05-2013 |
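The dependency-graph criticality test of 20130326531 can be sketched as reachability over a directed graph of dependents plus a resume-cost threshold. The data layout and function names here are assumptions for illustration, not the claimed model.

```python
def impacted(dependents, start):
    """dependents: node -> list of nodes that depend on it directly.
    Return every node transitively affected when `start` is disrupted."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for d in dependents.get(node, []):
            if d not in seen:
                seen.add(d)
                stack.append(d)
    return seen

def is_critical(dependents, resume_cost, resource, threshold):
    """A resource is 'critical' when the total cost of resuming all
    projects and services it renders infeasible exceeds the threshold."""
    total = sum(resume_cost.get(n, 0) for n in impacted(dependents, resource))
    return total > threshold
```

For example, if two services and a downstream project all depend (directly or transitively) on a database instance, the instance is flagged critical as soon as their combined resume cost exceeds the configured threshold.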
20130326532 | PARALLEL ALLOCATION OPTIMIZATION DEVICE, PARALLEL ALLOCATION OPTIMIZATION METHOD, AND COMPUTER-READABLE RECORDING MEDIUM - A parallel allocation calculating unit calculates a parallel allocation candidate, which is an element candidate in target data allocated per processing performed in parallel. A parallel calculation amount estimation processing unit estimates the calculation amount required for parallel processing when a parallel allocation candidate is allocated, based on a nonzero element count in the target data. An optimality decision processing unit decides whether or not the parallel allocation candidate is optimal based on the estimated calculation amount, and allocates the optimal element per processing performed in parallel. | 12-05-2013 |
20130326533 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - An information processing apparatus executes an application program including an application resource and a runtime. The information processing apparatus includes a memory, and a processor that executes a procedure in the memory. The procedure includes generating a process space in the memory to invoke the application program, loading the runtime into the process space, loading the application resource into the process space into which the runtime is loaded, generating a process of the application program based on the application resource and the runtime which are loaded into the process space, and executing the process of the application program. | 12-05-2013 |
20130326534 | SYSTEM AND METHOD FOR SHARED EXECUTION OF MIXED DATA FLOWS - A method, computer program product, and computer system for shared execution of mixed data flows, performed by one or more computing devices, comprises identifying one or more resource sharing opportunities across a plurality of parallel tasks. The plurality of parallel tasks includes zero or more relational operations and at least one non-relational operation. The plurality of parallel tasks relative to the relational operations and the at least one non-relational operation are executed. In response to executing the plurality of parallel tasks, one or more resources of the identified resource sharing opportunities is shared across the relational operations and the at least one non-relational operation. | 12-05-2013 |
20130326535 | STORAGE MEDIUM, INFORMATION PROCESSING DEVICE, AND INFORMATION PROCESSING METHOD - A non-transitory computer-readable recording medium storing a program causing a processor to execute a process, the process includes detecting input of data into a memory to which data is inputted in sequence, the data being a processing object of first processing; allocating the first processing, of which a processing object is the data, with respect to any node in a communicable node group; determining whether or not the data is provided with tail information, the tail information indicating tail data of a series of data that are processing objects of the first processing, when detecting input of the data; and allocating second processing, of which a processing object is a processing result of the first processing that is executed with respect to each piece of data of the series of data, to any node of the node group when determining that the data is provided with the tail information. | 12-05-2013 |
20130332935 | SYSTEM AND METHOD FOR COMPUTING - A method for analyzing data is disclosed that includes receiving an analysis request to analyze selected data corresponding to one or more monitored assets, wherein the analysis request includes one or more parameters corresponding to performance categories of computing resources for processing the analysis request; determining a computing resource allocation plan for processing the analysis request based on the one or more parameters; and processing the analysis request using the determined computing resource allocation plan to provide analysis results. Also disclosed is an analytic router that includes a mapper, an estimator, an optimizer, and a resource provisioner. | 12-12-2013 |
20130332936 | Resource Management with Dynamic Resource Budgeting - A method for resource management of a data processing system is described. According to one embodiment, a request is received via a programming interface from a program to modify a resource budget assigned to the program, where the resource budget specifies an amount of resources of the data processing system the program can utilize during an execution of the program. It is determined whether the program is entitled to modify the resource budget based on entitlement associated with the program. The resource budget for the program is modified if it is determined the program is entitled to modify the resource budget and the modified resource budget is enforced against the program during the execution of the program. | 12-12-2013 |
20130332937 | Heterogeneous Parallel Primitives Programming Model - With the success of programming models such as OpenCL and CUDA, heterogeneous computing platforms are becoming mainstream. However, these heterogeneous systems are low-level, not composable, and their behavior is often implementation defined even for standardized programming models. In contrast, the method and system embodiments for the heterogeneous parallel primitives (HPP) programming model disclosed herein provide a flexible and composable programming platform that guarantees behavior even in the case of developing high-performance code. | 12-12-2013 |
20130339971 | System and Method for Improved Job Processing to Reduce Contention for Shared Resources - A method of processing a job is presented. A packet selector determines a candidate job list including an ordered listing of candidate jobs. Each candidate job in the ordered listing belongs to a communication stream. One or more shared resources required for execution of a first job in the candidate job list are identified. Whether the first job is eligible for execution is determined by determining an availability of the one or more shared resources required for the first job, and, when the one or more shared resources required for the first job are unavailable and no jobs executing within the data processor are from the same communication stream as the first job, determining that the first job is not eligible for execution. | 12-19-2013 |
20130339972 | DETERMINING AN ALLOCATION OF RESOURCES TO A PROGRAM HAVING CONCURRENT JOBS - A performance model for a collection of jobs that make up a program is used to calculate a performance parameter based on a number of map tasks in the jobs, a number of reduce tasks in the jobs, and an allocation of resources, where the jobs include the map tasks and the reduce tasks, the map tasks producing intermediate results based on segments of input data, and the reduce tasks producing an output based on the intermediate results. The performance model considers overlap of concurrent jobs. Using a value of the performance parameter calculated by the performance model, a particular allocation of resources is determined to assign to the jobs of the program to meet a performance goal of the program. | 12-19-2013 |
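The wave-based reasoning in entry 20130339972 — completion time driven by the number of map and reduce tasks relative to the allocated resources — can be sketched as follows. The function names, the symmetric slot search, and the timing parameters are illustrative assumptions, not the model claimed in the application:

```python
import math

def estimate_job_time(num_map, num_reduce, map_slots, reduce_slots,
                      avg_map_time, avg_reduce_time):
    # Tasks run in waves: ceil(tasks / slots) waves of average duration.
    map_waves = math.ceil(num_map / map_slots)
    reduce_waves = math.ceil(num_reduce / reduce_slots)
    return map_waves * avg_map_time + reduce_waves * avg_reduce_time

def find_min_slots(num_map, num_reduce, avg_map_time, avg_reduce_time,
                   deadline, max_slots=64):
    # Smallest symmetric slot allocation meeting the performance goal.
    for slots in range(1, max_slots + 1):
        if estimate_job_time(num_map, num_reduce, slots, slots,
                             avg_map_time, avg_reduce_time) <= deadline:
            return slots
    return None
```

A real model of this kind would also account for the overlap of concurrent jobs, which this sketch deliberately omits.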
20130339973 | FINDING RESOURCE BOTTLENECKS WITH LOW-FREQUENCY SAMPLED DATA - A computer program product for automatically gauging a benefit of a tuning action. The computer program product including a computer readable storage medium having computer readable program code embodied therewith. The computer readable program code including computer readable program code configured to collect a plurality of observations of a running state of a plurality of threads in a computer system. Computer readable program code configured to identify a plurality of resources of the computer system and a capacity of each resource of the plurality of resources. Computer readable program code configured to map an observation of the running state of each thread of the plurality of threads to a resource that the observation of each thread uses, respectively, and computer readable program code configured to apply the tuning action to a first resource of the plurality of resources to determine an impact on the performance of the computer system. | 12-19-2013 |
20130339974 | FINDING RESOURCE BOTTLENECKS WITH LOW-FREQUENCY SAMPLED DATA - A method for automatically gauging a benefit of a tuning action. The method including collecting a plurality of observations of a running state of a plurality of threads in a computer system, as executed by a processor in the computer system. Identifying a plurality of resources of the computer system and a capacity of each resource of the plurality of resources. Mapping an observation of the running state of each thread of the plurality of threads to a resource that the observation of each thread uses, respectively. Applying the tuning action to a first resource of the plurality of resources to determine an impact on the performance of the computer system. | 12-19-2013 |
20130339975 | MANAGEMENT OF SHARED TRANSACTIONAL RESOURCES - Embodiments relate to management of shared transactional resources. A system includes a transactional facility configured to support transactions that effectively delay committing stores to memory or results to an architectural state until transaction completion. The system includes a processor configured to perform an allocation or arbitration of processing resources to instructions of a transaction within a thread. The processor detects that the transaction has exceeded a manageable capacity of a resource or a potential collision of a transactional instruction storage access has occurred, resulting in a transaction abort. A transaction abort reason and a current configuration are examined to determine whether the transaction abort was based on an initiating program exceeding a restricted limit on the manageable capacity of the resource or an allocation. A processor state is updated to increase a likelihood of success upon retrying the transaction. | 12-19-2013 |
20130339976 | Resource Management System for Automation Installations - A method for managing resources of a processor device configured to control an automation installation includes using at least one first operating system and at least one second operating system, which preferably differs from the first operating system, to operate the processor device. The processor device includes at least two processor cores configured to operate the operating systems. The method further includes using at least one processor core to operate each operating system and freely selecting a number of processor cores used to operate the first operating system and a number of processor cores used to operate the second operating system. | 12-19-2013 |
20130346994 | JOB DISTRIBUTION WITHIN A GRID ENVIRONMENT - According to one aspect of the present disclosure, a method and technique for job distribution within a grid environment is disclosed. The method includes: receiving jobs at a submission cluster for distribution of the jobs to at least one of a plurality of execution clusters, each execution cluster comprising one or more execution hosts; determining resource capacity corresponding to each execution cluster; determining resource requirements for the jobs; dynamically determining a pending job queue length for each execution cluster based on the resource capacity of the respective execution clusters and the resource requirements of the jobs; and forwarding jobs to the respective execution clusters according to the determined pending job queue length for the respective execution cluster. | 12-26-2013 |
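One plausible reading of the queue-length rule in entry 20130346994 is to size each execution cluster's pending queue in proportion to how many jobs its resource capacity can absorb at once. The functions, the backlog factor, and the headroom-based forwarding rule below are assumptions for illustration only:

```python
import math

def pending_queue_length(cluster_capacity, avg_job_requirement, factor=2.0):
    # Jobs the cluster can run concurrently, scaled by a tunable backlog factor.
    concurrent = cluster_capacity // max(avg_job_requirement, 1)
    return int(math.ceil(concurrent * factor))

def distribute(jobs, clusters):
    # clusters: {name: (capacity, avg_job_requirement)}.
    queues = {name: [] for name in clusters}
    limits = {name: pending_queue_length(cap, req)
              for name, (cap, req) in clusters.items()}
    for job in jobs:
        # Forward each job to the cluster with the most remaining headroom.
        name = max(queues, key=lambda n: limits[n] - len(queues[n]))
        if len(queues[name]) < limits[name]:
            queues[name].append(job)
    return queues
```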
20130346995 | System and Method for Enforcing Future Policies in a Compute Environment - A disclosed system receives a request for resources, generates a credential map for each credential associated with the request, the credential map including a first type of resource mapping and a second type of resource mapping. The system generates a resource availability map, generates a first composite intersecting map that intersects the resource availability map with a first type of resource mapping of all the generated credential maps and generates a second composite intersecting map that intersects the resource availability map and a second type of resource mapping of all the generated credential maps. With the first and second composite intersecting maps, the system can allocate resources within the compute environment for the request based on at least one of the first composite intersecting map and the second composite intersecting map. | 12-26-2013 |
20130346996 | PROBABILISTIC OPTIMIZATION OF RESOURCE DISCOVERY, RESERVATION AND ASSIGNMENT - A processor-implemented method, system and/or computer program product allocates multiple resources from multiple organizations. A series of requests for multiple resources from multiple organizations is received. The multiple resources are required to accomplish a specific task, and each of the multiple resources is assigned a probability of consumption. Probabilities of availability of the multiple resources are then determined and transmitted to the organizations. | 12-26-2013 |
20130346997 | MECHANISM OF SUPPORTING SUB-COMMUNICATOR COLLECTIVES WITH O(64) COUNTERS AS OPPOSED TO ONE COUNTER FOR EACH SUB-COMMUNICATOR - A system and method for enhancing barrier collective synchronization on a computer system comprises a computer system including a data storage device. The computer system includes a program stored in the data storage device and steps of the program being executed by a processor. The system includes providing a plurality of communicators for storing state information for a barrier algorithm. Each communicator designates a master core in a multi-processor environment of the computer system. The system allocates or designates one counter for each of a plurality of threads. The system configures a table with a number of entries equal to the maximum number of threads. The system sets a table entry with an ID associated with a communicator when a process thread initiates a collective. The system determines an allocated or designated counter by searching entries in the table. | 12-26-2013 |
20140007124 | Computing Processor Resources for Logical Partition Migration | 01-02-2014 |
20140007125 | Auto Detecting Shared Libraries and Creating A Virtual Scope Repository | 01-02-2014 |
20140007126 | METHOD AND DEVICE FOR ALLOCATING BROWSER PROCESS | 01-02-2014 |
20140007127 | PROJECT MANAGEMENT SYSTEM AND METHOD | 01-02-2014 |
20140007128 | PERFORMING A TASK IN A SYSTEM HAVING DIFFERENT TYPES OF HARDWARE RESOURCES | 01-02-2014 |
20140007129 | METHOD, APPARATUS AND SYSTEM FOR RESOURCE MIGRATION | 01-02-2014 |
20140007130 | DETERMINING AN OPTIMAL COMPUTING ENVIRONMENT FOR RUNNING AN IMAGE | 01-02-2014 |
20140007131 | SCHEDULING METHOD AND SCHEDULING SYSTEM | 01-02-2014 |
20140013332 | METHOD AND APPARATUS FOR CONFIGURING RESOURCE - Embodiments of the present invention disclose a method and an apparatus for configuring a resource. The method includes: allocating a system resource to a currently active application sub-scenario in an application according to recorded system resource occupation information of the application sub-scenario of the application, where the system resource occupation information of the application sub-scenario of the application includes the system resource occupation information recorded when the application sub-scenario works in a process of testing the application after the application sub-scenario of the application is defined. With the present invention, the system resource is configured for the application sub-scenario in a single attempt. Therefore, enough system resources are ensured to meet the requirements for running the currently active application sub-scenario of the application, the running performance is ensured, and the adjustment time and the power consumption are saved. | 01-09-2014 |
20140019988 | SUPPORT OF NON-TRIVIAL SCHEDULING POLICIES ALONG WITH TOPOLOGICAL PROPERTIES - A system comprises a scheduling unit for scheduling jobs to resources, and a library unit comprising a machine map of the system and a global status map of interconnections of resources. A monitoring unit generates status information signals for the resources. The library unit receives the signals and determines a free map of resources to execute the job to be scheduled, the free map indicating the interconnection of resources to which the job in a current scheduling cycle can be scheduled, and is determined by removing from the machine map resources which fall within the global status map and re-introducing those resources in the global status map to which the scheduling unit has indicated the job being scheduled can be assigned. The monitoring unit dispatches a job to the resources in the free map which match the resource mapping requirements of the job. | 01-16-2014 |
20140026140 | METHOD AND APPARATUS FOR OPTIMIZING DOWNLOAD OPERATIONS - A method and apparatus for optimizing downloading operations is disclosed. The method comprises determining a condition for a download speed for a plurality of threads for a file to a computer, wherein each thread is used to download a portion of the file; evaluating a plurality of environmental factors on the computer, wherein evaluating is only performed when the download speed meets a given condition; and performing one of increasing, decreasing, and not changing a number of threads used to perform the download depending on the evaluated plurality of environmental factors. | 01-23-2014 |
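The decision logic in entry 20140026140 — evaluate environmental factors only when the download speed meets a given condition, then increase, decrease, or keep the thread count — might look like this minimal sketch. The specific thresholds and the two environmental factors chosen here (CPU load and free memory) are hypothetical:

```python
def adjust_thread_count(threads, speed, threshold, cpu_load, free_memory_mb):
    # Evaluate environmental factors only when the speed condition is met
    # (in this sketch: speed has fallen below the target threshold).
    if speed >= threshold:
        return threads  # condition not met: thread count unchanged
    # Hypothetical environmental checks: spare CPU and memory permit growth.
    if cpu_load < 0.7 and free_memory_mb > 512:
        return threads + 1
    if cpu_load > 0.9:
        return max(1, threads - 1)
    return threads
```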
20140026141 | RESOURCE MANAGEMENT IN A MULTICORE ARCHITECTURE - A resource management and task allocation controller for installation in a multicore processor having a plurality of interconnected processor elements providing resources for processing executable transactions, at least one of said elements being a master processing unit, the controller being adapted to communicate, when installed, with each of the processor elements including the master processing unit, and comprising control logic for allocating executable transactions within the multicore processor to particular processor elements in accordance with pre-defined allocation parameters. | 01-23-2014 |
20140026142 | Process Scheduling to Maximize Input Throughput - A schedule graph may be used to identify executable elements that consume data from a network interface or other input/output interface. The schedule graph may be traversed to identify a sequence or pipeline of executable elements that may be triggered from data received on the interface, then a process scheduler may cause those executable elements to be executed on available processors. A queue manager and a load manager may optimize the resources allocated to the executable elements to maximize the throughput for the input/output interface. Such a system may optimize processing for input or output of network connections, storage devices, or other input/output devices. | 01-23-2014 |
20140026143 | EXCLUSIVE ACCESS CONTROL METHOD AND COMPUTER PRODUCT - An exclusive access control method is executed by a computer having an operating system that when an excluded thread accesses a shared resource, executes a first exclusive access control process of prohibiting the excluded thread from attempting to access the shared resource until exclusive access control is released, the exclusive access control process being executed according to a number of attempts, by the excluded thread, to access the shared resources. The exclusive access control method includes counting by at least one second thread, including the excluded thread and different from a first thread, the number of attempts to access the shared resource, when the first thread executes a second exclusive access control process of allowing the excluded thread to attempt to access the shared resource until the excluded thread is permitted access; and storing to a memory area by the second thread, the counted number of attempts. | 01-23-2014 |
20140033218 | JOB PLACEMENT BASED ON MODELING OF JOB SLOTS - A collection of job slots correspond to placement of observed jobs associated with a plurality of job categories in a data processing environment. An incoming job is received, and based on a job category of the incoming job, the incoming job is assigned to a particular one of the job slots to perform placement of the incoming job on physical resources. | 01-30-2014 |
20140033219 | METHOD, APPARATUS AND COMPUTER FOR LOADING RESOURCE FILE FOR GAME ENGINE - A method for loading a resource file for a game engine is provided. The method includes: activating a thread to preload a predetermined resource file, wherein the predetermined resource file includes a texture resource file, and one or both of a structure resource file and a model resource file; and accessing and loading one or both of the structure resource file and the model resource file through memory mapping. The provided method increases a loading speed while loading a game resource file and fully utilizes computer resources. | 01-30-2014 |
20140033220 | PROCESS GROUPING FOR IMPROVED CACHE AND MEMORY AFFINITY - Embodiments include determining a set of two or more processes that share at least one of a plurality of resources in a multi-node system in which the processes are running, wherein each of the set of two or more processes is running on a different node of the multi-node system. For each combination of the set of processes and the resources, a value is calculated based, at least in part, on a weight of the resource and frequency of access of the resource by each process of the set of processes. The pair of processes having a greatest sum of calculated values by resource is determined. A first process of the pair of processes is allocated from a first node in the multi-node system to a second node in the multi-node system that hosts a second process of the pair of processes. | 01-30-2014 |
20140033221 | PROCESSOR SCHEDULING METHOD AND SYSTEM USING DOMAINS - Aspects of the present invention concern a method and system for scheduling a request for execution on multiple processors. This scheduler divides processes from the request into a set of domains. Processes in the same domain are capable of executing the instructions associated with the request in a serial manner on a processor without conflicts. A relative processor utilization for each domain in the set of the domains is based upon a workload corresponding to an execution of the request. If there are processors available then the present invention provisions a subset of available processors to fulfill an aggregate processor utilization. The aggregate processor utilization is created from a combination of the relative processor utilization associated with each domain in the set of domains. If processors are not needed then some processors may be shut down. Shutting down processors in accordance with the schedule saves energy without sacrificing performance. | 01-30-2014 |
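The provisioning step in entry 20140033221 combines per-domain relative utilizations into an aggregate and provisions just enough processors to cover it, shutting the rest down. A minimal sketch, assuming utilizations are expressed in units of one processor's capacity:

```python
import math

def provision_processors(domain_utilizations, available_processors):
    # Aggregate utilization: sum of per-domain relative utilizations.
    aggregate = sum(domain_utilizations)
    # Provision enough processors to cover it; the remainder may be shut down.
    needed = min(math.ceil(aggregate), available_processors)
    return needed, available_processors - needed
```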
20140040907 | RESOURCE ASSIGNMENT IN A HYBRID SYSTEM - A system processing an application in a hybrid system includes a database comprising a plurality of libraries, each library comprising sub-program components, wherein two or more of the components are combined by an end user into a stream flow defining an application. The system also includes a plurality of resources configured to process the stream flow, architecture of at least one of the plurality of resources being different from architecture of another of the plurality of resources. The system also includes a compiler configured to generate a resource assignment assigning the plurality of resources to the two or more of the components in the stream flow, at least two of the two or more of the components in the stream flow sharing at least one of the plurality of resources according to the resource assignment. | 02-06-2014 |
20140040908 | RESOURCE ASSIGNMENT IN A HYBRID SYSTEM - A system processing an application in a hybrid system includes a database comprising a plurality of libraries, each library comprising sub-program components, wherein two or more of the components are combined by an end user into a stream flow defining an application. The system also includes a plurality of resources configured to process the stream flow, architecture of at least one of the plurality of resources being different from architecture of another of the plurality of resources. The system also includes a compiler configured to generate a resource assignment assigning the plurality of resources to the two or more of the components in the stream flow, at least two of the two or more of the components in the stream flow sharing at least one of the plurality of resources according to the resource assignment. | 02-06-2014 |
20140040909 | DATA PROCESSING SYSTEMS - A data processing system is described in which a plurality of data processing units | 02-06-2014 |
20140040910 | INFORMATION PROCESSING APPARATUS AND CONTROL METHOD THEREOF - Each of a plurality of circuit blocks includes a plurality of arithmetic elements. A power supply controller individually controls power supply to the plurality of circuit blocks. A resource management unit acquires first information regarding an arithmetic element necessary for an arithmetic process, and second information regarding an arithmetic element included in a circuit block which is supplied with power. Based on the first information and the second information, the resource management unit preferentially assigns, to the arithmetic element included in the circuit block which is supplied with power, a process for implementing the arithmetic process. | 02-06-2014 |
20140040911 | DYNAMIC JOB PROCESSING BASED ON ESTIMATED COMPLETION TIME AND SPECIFIED TOLERANCE TIME - The invention provides a system and method for managing clusters of parallel processors for use by groups and individuals requiring supercomputer level computational power. A Beowulf cluster provides supercomputer level processing power. Unlike a traditional Beowulf cluster, however, cluster size is not singular or static. As jobs are received from users/customers, a Resource Management System (RMS) dynamically configures and reconfigures the available nodes in the system into clusters of the appropriate sizes to process the jobs. Depending on the overall size of the system, many users may have simultaneous access to supercomputer level computational processing. Users are preferably billed based on the time for completion with faster times demanding higher fees. | 02-06-2014 |
20140040912 | SYSTEM AND METHOD FOR TOPOLOGY-AWARE JOB SCHEDULING AND BACKFILLING IN AN HPC ENVIRONMENT - A method for job management in an HPC environment includes determining an unallocated subset from a plurality of HPC nodes, with each of the unallocated HPC nodes comprising an integrated fabric. An HPC job is selected from a job queue and executed using at least a portion of the unallocated subset of nodes. | 02-06-2014 |
20140040913 | JOB PLAN VERIFICATION - A job plan verification system ( | 02-06-2014 |
20140047450 | Utilizing A Kernel Administration Hardware Thread Of A Multi-Threaded, Multi-Core Compute Node Of A Parallel Computer - Methods, apparatuses, and computer program products for utilizing a kernel administration hardware thread of a multi-threaded, multi-core compute node of a parallel computer are provided. Embodiments include a kernel assigning a memory space of a hardware thread of an application processing core to a kernel administration hardware thread of a kernel processing core. A kernel administration hardware thread is configured to advance the hardware thread to a next memory space associated with the hardware thread in response to the assignment of the kernel administration hardware thread to the memory space of the hardware thread. Embodiments also include the kernel administration hardware thread executing an instruction within the assigned memory space. | 02-13-2014 |
20140047451 | Optimizing Collective Communications Within A Parallel Computer - Methods, apparatuses, and computer program products for optimizing collective communications within a parallel computer comprising a plurality of hardware threads for executing software threads of a parallel application are provided. Embodiments include a processor of a parallel computer determining for each software thread, an affinity of the software thread to a particular hardware thread. Each affinity indicates an assignment of a software thread to a particular hardware thread. The processor also generates one or more affinity domains based on the affinities of the software threads. Embodiments also include a processor generating, for each affinity domain, a topology of the affinity domain based on the affinities of the software threads to the hardware threads. According to embodiments of the present application, a processor also performs, based on the generated topologies of the affinity domains, a collective operation on one or more software threads. | 02-13-2014 |
20140047452 | Methods and Systems for Scalable Computing on Commodity Hardware for Irregular Applications - A computing system for scalable computing on commodity hardware is provided. The computing system includes a first computing device communicatively connected to a second computing device. The first computing device includes a processor, a physical computer-readable medium, and program instructions stored on the physical computer-readable medium and executable by the processor to perform functions. The functions include determining a first task associated with the second computing device and a second task associated with the second computing device are to be executed, assigning execution of the first task and the second task to the processor of the first computing device, generating an aggregated message that includes (i) a first message including an indication corresponding to the execution of the first task and (ii) a second message including an indication corresponding to the execution of the second task, and sending the aggregated message to the second computing device. | 02-13-2014 |
20140047453 | METHOD OF PROCESSING DATA IN AN SAP SYSTEM - A method of processing data in an SAP system comprising dividing data to be processed following a request from a user endpoint into a number of intervals, providing the intervals consecutively to one or more data processors selected to service the request and storing the output of a data processor when it has processed the interval. | 02-13-2014 |
20140047454 | LOAD BALANCING IN AN SAP SYSTEM - A method of load balancing is provided in an SAP system where a central processor within the SAP system monitors the total number of processors within the system and the total number of available processors within the system. From these numbers the central processor can allocate processors to fulfil a request for data processing from an endpoint connected to the SAP system and reallocate processors in response to a new request for resources, or an alteration in the total number of processors. | 02-13-2014 |
20140059559 | INTELLIGENT TIERING - A method and system for intelligent tiering is provided. The method includes receiving a request for enabling a tiering process with respect to data. The computer processor retrieves a migration list indicating migration engines associated with the data. Additionally, an entity list of migration entities is retrieved and each migration entity is compared to associated policy conditions. In response, it is determined if matches exist between the migration entities and the associated policy conditions and a consolidated entity list is generated. | 02-27-2014 |
20140059560 | RESOURCE ALLOCATION IN MULTI-CORE ARCHITECTURES - Technologies are generally described for a method, device and architecture effective to allocate resources. In an example, the method may include associating first and second resources with first and second resource identifiers and mapping the first and second resource identifiers to first and second sets of addresses in a memory, respectively. The method may include identifying that the first resource is at least partially unavailable. The method may include mapping the second resource identifier to at least one address of the first set of addresses in the memory when the first resource is identified as at least partially unavailable. The method may include receiving a request for the first resource, wherein the request identifies a particular address of the addresses in the first set of addresses. The method may include analyzing the particular address to identify a particular resource and allocating the request to the particular resource. | 02-27-2014 |
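The address-remapping scheme in entry 20140059560 can be illustrated with a small mapper that routes requests by address and rebinds an unavailable resource's addresses to another resource. Class and method names are invented for this sketch:

```python
class ResourceMapper:
    # Each resource id owns a set of addresses; a request names an address
    # and is allocated to whichever resource currently backs that address.
    def __init__(self):
        self.addr_to_resource = {}
        self.resource_addrs = {}

    def register(self, resource_id, addresses):
        self.resource_addrs[resource_id] = list(addresses)
        for addr in addresses:
            self.addr_to_resource[addr] = resource_id

    def mark_unavailable(self, failed_id, backup_id):
        # Remap the unavailable resource's addresses onto a backup resource.
        for addr in self.resource_addrs.get(failed_id, []):
            self.addr_to_resource[addr] = backup_id

    def allocate(self, address):
        # Analyze the requested address to identify the backing resource.
        return self.addr_to_resource[address]
```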
20140059561 | REALLOCATING JOBS FOR CHECKING DATA QUALITY - The invention provides for checking data quality of data of an application program by a data quality management system. At least one of a plurality of jobs are executed for evaluating the data for compliance with one or more quality criteria. The runtime behavior of the at least one executed job is monitored to determine a current runtime behavior of the executed job. The monitored job is reclassified by reallocating the job to a job set representing the determined current runtime behavior. | 02-27-2014 |
20140068624 | QUOTA-BASED RESOURCE MANAGEMENT - Innovations for quota-based resource management are described herein. For example, quota-based resource management is implemented as part of an application layer framework and/or operating system of a computing device. With the quota-based resource management, a budget is established at design time for the resources of the computing device. Each type of workload primarily draws from resources dedicated to that type of workload in the budget, as enforced by the operating system. This can help provide acceptable performance for those workloads that are permitted to run, while preventing resources of the mobile computing device from becoming spread too thin among workloads. It can also help maintain a good overall balance among different types of workloads. | 03-06-2014 |
20140068625 | DATA PROCESSING SYSTEMS - A data processing system is described in which a hardware unit is added to a cluster of processors for explicitly handling assignment of available tasks and sub-tasks to available processors. | 03-06-2014 |
20140068626 | Direct Ring 3 Submission of Processing Jobs to Adjunct Processors - Transitions to ring 0, each time an application wants to use an adjunct processor, are avoided, saving central processor operating cycles and improving efficiency. Instead, initially each application is registered and setup to use adjunct processor resources in ring 3. | 03-06-2014 |
20140075445 | MECHANISM FOR PROVIDING A ROUTING FRAMEWORK FOR FACILITATING DYNAMIC WORKLOAD SCHEDULING AND ROUTING OF MESSAGE QUEUES FOR FAIR MANAGEMENT OF RESOURCES FOR APPLICATION SERVERS IN AN ON-DEMAND SERVICES ENVIRONMENT - In accordance with embodiments, there are provided mechanisms and methods for facilitating dynamic workload scheduling and routing of message queues for fair management of the resources for application servers in an on-demand services environment. In one embodiment and by way of example, a method includes detecting an organization of a plurality of organizations that is starving for resources. The organization may be seeking performance of a job request at a computing system within a multi-tenant database system. The method may further include consulting, based on a routing policy, a routing table for a plurality of queues available for processing the job request, selecting a queue of the plurality of queues for the organization based on a fair usage analysis obtained from the routing policy, and routing the job request to the selected queue. | 03-13-2014 |
20140075446 | MECHANISM FOR FACILITATING SLIDING WINDOW RESOURCE TRACKING IN MESSAGE QUEUES FOR FAIR MANAGEMENT OF RESOURCES FOR APPLICATION SERVERS IN AN ON-DEMAND SERVICES ENVIRONMENT - In accordance with embodiments, there are provided mechanisms and methods for facilitating sliding window resource tracking in message queues for fair management of resources for application servers in an on-demand services environment. In one embodiment and by way of example, a method includes monitoring, in real-time, in-flight jobs in message queues for incoming jobs from organizations in a distributed environment having application servers in communication over a network, applying local sliding windows to the message queues to estimate wait time associated with each incoming job in a message queue. A local sliding window may include a segment of time being monitored in each message queue for estimating the wait time. The method may further include allocating, in real-time, based on the estimated wait time, thread resources to one or more of the incoming jobs associated with one or more of the organizations. | 03-13-2014 |
20140075447 | PROGRAMMATIC LOAD-BASED MANAGEMENT OF PROCESSOR POPULATION - One or more measurements of processor utilization are taken. A utilization ceiling is calculated. One or more processing units (PUs) are added automatically if it is determined that the utilization ceiling is greater than an available PU capacity. One or more PUs are removed automatically responsive to determining that the utilization ceiling is at least one PU less than the available PU capacity. | 03-13-2014 |
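The add/remove logic in the entry above can be sketched as follows. All identifiers, the headroom factor, and the ceiling calculation are illustrative assumptions; the patent abstract does not specify how the utilization ceiling is derived.

```python
def scale_processing_units(samples, active_pus, pu_capacity, headroom=1.2):
    """Load-based PU scaling sketch: derive a utilization ceiling from
    recent utilization measurements, then add PUs if the ceiling exceeds
    available capacity, or remove PUs if there is at least one whole PU
    of slack.  Policy details here are assumptions, not patent text."""
    ceiling = max(samples) * headroom          # assumed ceiling formula
    available = active_pus * pu_capacity       # current available PU capacity
    if ceiling > available:
        # add just enough PUs to cover the ceiling
        while active_pus * pu_capacity < ceiling:
            active_pus += 1
    elif available - ceiling >= pu_capacity:
        # ceiling is at least one PU below capacity: remove surplus PUs
        while active_pus > 1 and (active_pus - 1) * pu_capacity >= ceiling:
            active_pus -= 1
    return active_pus
```

For example, with a single 100-unit PU and recent utilization peaking at 90, the assumed 1.2 headroom yields a ceiling of 108, so a second PU would be added.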
20140075448 | ENERGY-AWARE JOB SCHEDULING FOR CLUSTER ENVIRONMENTS - A job scheduler can select a processor core operating frequency for a node in a cluster to perform a job based on energy usage and performance data. After a job request is received, an energy aware job scheduler accesses data that specifies energy usage and job performance metrics that correspond to the requested job and a plurality of processor core operating frequencies. A first of the plurality of processor core operating frequencies is selected that satisfies an energy usage criterion for performing the job based, at least in part, on the data that specifies energy usage and job performance metrics that correspond to the job. The job is assigned to be performed by a node in the cluster at the selected first of the plurality of processor core operating frequencies. | 03-13-2014 |
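The frequency-selection step in the entry above might look like the following sketch. The concrete criterion (fastest runtime whose energy fits a budget) and all names are assumptions; the patent only requires that an energy usage criterion be satisfied.

```python
def pick_frequency(profiles, energy_budget_j):
    """Select a core operating frequency for a job from profiled
    (frequency_hz, energy_joules, runtime_s) tuples.  Among frequencies
    whose profiled energy fits the budget, the fastest is chosen; if
    none fits, fall back to the most energy-frugal frequency."""
    feasible = [p for p in profiles if p[1] <= energy_budget_j]
    if not feasible:
        return min(profiles, key=lambda p: p[1])[0]
    return min(feasible, key=lambda p: p[2])[0]
```

A node in the cluster would then be assigned the job at the returned frequency.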
20140082625 | MANAGEMENT OF RESOURCES WITHIN A COMPUTING ENVIRONMENT - Resources in a computing environment are managed, for example, by a hardware controller controlling dispatching of resources from one or more pools of resources to be used in execution of threads. The controlling includes conditionally dispatching resources from the pool(s) to one or more low-priority threads of the computing environment based on current usage of resources in the pool(s) relative to an associated resource usage threshold. The management further includes monitoring resource dispatching from the pool(s) to one or more high-priority threads of the computing environment, and based on the monitoring, dynamically adjusting the resource usage threshold used in the conditionally dispatching of resources from the pool(s) to the low-priority thread(s). | 03-20-2014 |
20140082626 | MANAGEMENT OF RESOURCES WITHIN A COMPUTING ENVIRONMENT - Resources in a computing environment are managed, for example, by a hardware controller controlling dispatching of resources from one or more pools of resources to be used in execution of threads. The controlling includes conditionally dispatching resources from the pool(s) to one or more low-priority threads of the computing environment based on current usage of resources in the pool(s) relative to an associated resource usage threshold. The management further includes monitoring resource dispatching from the pool(s) to one or more high-priority threads of the computing environment, and based on the monitoring, dynamically adjusting the resource usage threshold used in the conditionally dispatching of resources from the pool(s) to the low-priority thread(s). | 03-20-2014 |
20140082627 | PARALLEL COMPUTE FRAMEWORK - A computerized system, method and program product for executing tasks in parallel, including but not limited to executing tasks in combination on multiple processors of multiple computers and/or multiple cores of a processor on a single computer and/or combinations thereof. The framework utilizes parallel computing design principles, but hides the complexities of multi-threading and multi-core programming from the programmer. | 03-20-2014 |
20140082628 | SHARED VERSIONED WORKLOAD PARTITIONS - According to one aspect of the present disclosure, a method and technique for shared versioned workload partitions is disclosed. The method includes: creating, in a host machine running an instance of a first version of an operating system, a first workload partition associated with a second version of the operating system, the second version of the operating system comprising a different version of the operating system than the first version of the operating system; creating, in the logical partition, a second workload partition associated with the second version of the operating system; and hierarchically linking the second workload partition to the first workload partition to enable sharing of resources of the first workload partition by the second workload partition. | 03-20-2014 |
20140089932 | CONCURRENCY IDENTIFICATION FOR PROCESSING OF MULTISTAGE WORKFLOWS - A system and method may be utilized to identify concurrency levels of processing stages in a distributed system, identify common resources and bottlenecks in the distributed system using the identified concurrency levels, and allocate resources in the distributed system using the identified concurrency levels. | 03-27-2014 |
20140089933 | SYSTEMS AND METHODS TO COORDINATE RESOURCE USAGE IN TIGHTLY SANDBOXED ENVIRONMENTS - Systems and methods are disclosed for coordinating resource usage between applications in a tightly sandboxed environment. A scheduling indicator can be left in a system file that multiple applications can use to align their requests for a system resource. Alternatively, IP loopback can be used to pass a scheduling indicator between applications that are otherwise sandboxed. If either of these approaches is not possible, then applications can schedule system resource requests using a common algorithm that selects a start time and optionally a period of subsequent system resource requests based on a common piece of information such as a system clock signal or IP address. In these ways the total amount of time during which the system resource is being utilized by various applications can be reduced, thus reducing power consumption and network activity. | 03-27-2014 |
20140089934 | CONCURRENCY IDENTIFICATION FOR PROCESSING OF MULTISTAGE WORKFLOWS - A system and method may be utilized to identify concurrency levels of processing stages in a distributed system, identify common resources and bottlenecks in the distributed system using the identified concurrency levels, and allocate resources in the distributed system using the identified concurrency levels. | 03-27-2014 |
20140089935 | PARALLEL PROCESSING DEVICE, PARALLEL PROCESSING METHOD, OPTIMIZATION DEVICE, OPTIMIZATION METHOD AND COMPUTER PROGRAM - [Problem] To provide a parallel processing device for improving the operation rate of each core in a computation device having a plurality of processor cores in a process in which there are a large number of tasks that can be processed in parallel even though parallelism within the tasks is low. | 03-27-2014 |
20140096142 | EFFICIENT ROLLBACK AND RETRY OF CONFLICTED SPECULATIVE THREADS WITH HARDWARE SUPPORT - A method for rolling back speculative threads in symmetric-multiprocessing (SMP) environments is disclosed. In one embodiment, such a method includes detecting an aborted thread at runtime and determining whether the aborted thread is an oldest aborted thread. In the event the aborted thread is the oldest aborted thread, the method sets a high-priority request for allocation to an absolute thread number associated with the oldest aborted thread. The method further detects that the high-priority request is set and, in response, clears the high-priority request and sets an allocation token to the absolute thread number associated with the oldest aborted thread, thereby allowing the oldest aborted thread to retry a work unit associated with the absolute thread number. A corresponding apparatus and computer program product are also disclosed. | 04-03-2014 |
20140096143 | FLEXIBLE TASK AND THREAD BINDING - A thread binding method includes generating a thread layout for processors in a computing system, allocating system resources for tasks of an application allocated to the processors, affinitizing the tasks and generating threads for the tasks. A thread count for each of the tasks is at least one and equal or unequal to that of any other of the tasks. | 04-03-2014 |
20140101664 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM - An information processing apparatus includes an application program information acquisition unit that acquires a resource amount to be used in each of a plurality of operation modes by an application program in operation or an application program desired to be operated, and an operation determination unit that determines, in accordance with the resource amount acquired by the application information acquisition unit, whether the application program desired to be operated is installable and/or startable. The application program desired to be operated is installed and/or started up in response to a result provided by the operation determination unit. | 04-10-2014 |
20140101665 | OPERATION CONTROL FOR DEPLOYING AND MANAGING SOFTWARE SERVICE IN A VIRTUAL ENVIRONMENT - A system and method can deploy and manage software services in virtualized and non-virtualized environments. The system provides an enterprise application virtualization solution that allows for centralized governance and control over software and Java applications. Operations teams can define policies, based on application-level service level agreements (SLA), that govern the allocation of hardware and software resources to ensure that quality of service (QoS) goals are met across virtual and non-virtualized platforms. The system uses a rules engine that can compare administrator-defined constraints with runtime metrics and generate events when a constraint is violated by a metric of the runtime metrics. | 04-10-2014 |
20140101666 | SYSTEM AND METHOD OF PERFORMING A PRE-RESERVATION ANALYSIS TO YIELD AN IMPROVED FIT OF WORKLOAD WITH THE COMPUTE ENVIRONMENT - A system and method are disclosed for receiving a request for resources in a compute environment to process workload, the request including a specification of a quality of fit. The system generates a substantial maximum potential quality of fit based on the compute environment with an assumption of no competing workload to yield an analysis. The system evaluates a first resource allocation and a second resource allocation against the analysis to yield a first fit and a respective second fit. The system selects one of the first resource allocation and the second resource allocation based on a comparison of the first fit to the second fit as well as a cost associated with any delays. | 04-10-2014 |
20140101667 | AUTHENTICATING A PROCESSING SYSTEM ACCESSING A RESOURCE - Provided are a method, system, and article of manufacture for authenticating a processing system accessing a resource. An association of processing system identifiers with resources, including first and second resources, is maintained. A request from a requesting processing system in a host is received for use of a first resource that provides access to a second resource, wherein the request is generated by processing system software and wherein the request further includes a submitted processing system identifier included in the request by host hardware in the host. A determination is made as to whether the submitted processing system identifier is one of the processing system identifiers associated with the first and second resources. The requesting processing system is provided access to the first resource that the processing system uses to access the second resource. | 04-10-2014 |
20140109103 | DISTRIBUTING TRANSCODING TASKS ACROSS A DYNAMIC SET OF RESOURCES USING A QUEUE RESPONSIVE TO RESTRICTION-INCLUSIVE QUERIES - A method and system for performing processing tasks is disclosed. At a resource, a detection is made as to when the resource is available to perform a processing task. Usage of the resource for performing processing tasks associated with each client of a set of clients is monitored. A restriction limiting which processing task is to be assigned to the resource is identified. The restriction identifies a hierarchy amongst at least two clients of the set of clients. The hierarchy is based on the monitored usage. A query identifying the restriction is generated. The query is transmitted to a remote queue in communication with a plurality of independent resources. The plurality of independent resources includes the resource. A response is received from the queue. The response identifies a processing task. | 04-17-2014 |
20140109104 | JOB SCHEDULING METHOD - A method for scheduling a single subset of jobs of a set of jobs satisfying a range constraint on the number of jobs, wherein the jobs of the set of jobs share resources in a computing system, each job being assigned a weight, w, indicative of the memory usage of the job in case of its execution in the computer system, the method including: for each number of jobs, x, satisfying the range constraint, determining from the set of jobs a first subset of jobs using a knapsack problem, wherein the knapsack problem is adapted to select by using the weights the first subset of jobs having the number of jobs and having a maximal total memory usage below the current available memory of the computer system, and selecting the single subset from the first subset. | 04-17-2014 |
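The cardinality-constrained knapsack in the entry above can be sketched with an exhaustive search. The brute-force solver is purely illustrative (the patent leaves the knapsack solver unspecified), and all identifiers are assumptions.

```python
from itertools import combinations

def best_job_subset(weights, free_memory, min_jobs, max_jobs):
    """For each admissible job count x in [min_jobs, max_jobs], consider
    every subset of x jobs and keep the one whose total memory weight is
    maximal while still fitting in free_memory.  Returns a tuple of job
    indices, or None if no subset fits."""
    best, best_weight = None, -1
    for x in range(min_jobs, max_jobs + 1):
        for subset in combinations(range(len(weights)), x):
            total = sum(weights[i] for i in subset)
            if total <= free_memory and total > best_weight:
                best, best_weight = subset, total
    return best
```

For real job sets a dynamic-programming knapsack would replace the exponential enumeration; the selection criterion stays the same.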
20140115596 | CODELETSET REPRESENTATION, MANIPULATOIN, AND EXECUTION - METHOD, SYSTEM AND APPARATUS - Codeletset methods and/or apparatus may be used to enable resource-efficient computing. Such methods may involve decomposing a program into sets of codelets that may be allocated among multiple computing elements, which may enable parallelism and efficient use of the multiple computing elements. Allocation may be based, for example, on efficiencies with respect to data dependencies and/or communications among codelets. | 04-24-2014 |
20140115597 | MEDIA HARDWARE RESOURCE ALLOCATION - Apparatus, computer readable medium, and method of allocating media resources, the method including determining a media resources allocation table based on one or more media hardware resources and predetermined benchmarks of media hardware resources for performing media operations; in response to receiving a request for media resources from a first application, comparing the requested media resources with the media resources allocation table; and if the comparison indicates that the requested media resources are available, then allocating the requested media resources to the first application in the media resources allocation table, and sending a response to the request for media resources to the first application indicating the requested media resources are allocated to the application. If the comparison indicates that the requested media resources are not available, then sending a response indicating to the first application that the requested media resources are not allocated to the first application. | 04-24-2014 |
20140115598 | SYSTEM AND METHOD FOR CONTROLLED SHARING OF CONSUMABLE RESOURCES IN A COMPUTER CLUSTER - In one embodiment, a method includes empirically analyzing, by a computer cluster comprising a plurality of computers, a set of active reservations and a current set of consumable resources belonging to a class of consumable resources. Each active reservation is of a managed task type and comprises a group of one or more tasks requiring access to a consumable resource of the class. The method further includes, based on the empirically analyzing, clocking the set of active reservations each clocking cycle. The method also includes, responsive to the clocking, sorting, by the computer cluster, a priority queue of the set of active reservations. | 04-24-2014 |
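The clock-then-sort loop in the entry above can be sketched as follows. The credit-accrual policy (each reservation accrues credit proportional to its weight per cycle) is an assumption; the patent only requires that reservations be clocked and the priority queue resorted.

```python
import heapq

def clock_cycle(reservations):
    """One clocking cycle: each active reservation accrues wait credit,
    then a priority queue is rebuilt so the reservation with the most
    accrued credit is served first.  Returns reservation names in
    service order.  Weight-based accrual is an assumed policy."""
    for r in reservations:
        r["credit"] += r["weight"]
    heap = [(-r["credit"], r["name"]) for r in reservations]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]
```

Repeated cycles let long-waiting low-weight reservations eventually overtake newer high-weight ones, which is one way to read the "controlled sharing" goal.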
20140115599 | SUBMITTING OPERATIONS TO A SHARED RESOURCE BASED ON BUSY-TO-SUCCESS RATIOS - In an embodiment, an average busy-to-success ratio is calculated for partitions that submitted operations to a shared resource during a first time period. A first busy-to-success ratio for a first partition during the first time period is calculated. If the first busy-to-success ratio is greater than the average busy-to-success ratio and a difference between the first busy-to-success ratio and the average busy-to-success ratio is greater than a threshold amount, a throttle amount for the first partition is increased. A first operation from the first partition during a first time subdivision of a second time period is received. If a number of operations received from the first partition during the first time subdivision of the second time period is greater than the throttle amount for the first partition, a busy indication is returned to the first partition and the first operation is not submitted to the shared resource. | 04-24-2014 |
20140115600 | SUBMITTING OPERATIONS TO A SHARED RESOURCE BASED ON BUSY-TO-SUCCESS RATIOS - In an embodiment, an average busy-to-success ratio is calculated for partitions that submitted operations to a shared resource during a first time period. A first busy-to-success ratio for a first partition during the first time period is calculated. If the first busy-to-success ratio is greater than the average busy-to-success ratio and a difference between the first busy-to-success ratio and the average busy-to-success ratio is greater than a threshold amount, a throttle amount for the first partition is increased. A first operation from the first partition during a first time subdivision of a second time period is received. If a number of operations received from the first partition during the first time subdivision of the second time period is greater than the throttle amount for the first partition, a busy indication is returned to the first partition and the first operation is not submitted to the shared resource. | 04-24-2014 |
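The throttling rule described in the two entries above can be sketched as follows. The threshold, step size, and all identifiers are illustrative assumptions.

```python
def update_throttle(ratios, partition, throttles, threshold=0.5, step=1):
    """If a partition's busy-to-success ratio exceeds the average across
    partitions, and the excess is greater than a threshold, increase
    that partition's throttle amount.  Returns the updated throttle."""
    avg = sum(ratios.values()) / len(ratios)
    r = ratios[partition]
    if r > avg and (r - avg) > threshold:
        throttles[partition] += step
    return throttles[partition]

def submit(count_this_subdivision, throttle):
    """During a time subdivision, return a busy indication instead of
    forwarding the operation once the partition's operation count for
    that subdivision exceeds its throttle amount."""
    return "busy" if count_this_subdivision > throttle else "submitted"
```

Note the asymmetry in the scheme: only partitions noticeably busier than average get throttled, so well-behaved partitions are unaffected.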
20140115601 | DATA PROCESSING METHOD AND DATA PROCESSING SYSTEM - A data processing method that is executed by a processor includes determining based on a size of an available area of a first memory whether first data of a first thread executed by a first data processing apparatus among a plurality of data processing apparatuses is transferable to the first memory; transferring second data that is of a second thread and stored in the first memory to a second memory, when at the determining, the first data is determined to not be transferable; and transferring the first data to the first memory. | 04-24-2014 |
20140123154 | DATA PROCESSING METHOD AND DATA PROCESSING SYSTEM - A data processing method that is executed by a data processing system includes determining whether an application whose startup is requested by a first data processing apparatus among a plurality of data processing apparatuses, belongs to a predetermined group; determining whether a second data processing apparatus among the data processing apparatuses has started up the application, when the application belongs to the predetermined group; and aborting startup of the application by the first data processing apparatus, when the second data processing apparatus has started up the application. | 05-01-2014 |
20140123155 | METHODS AND SYSTEMS FOR COORDINATED TRANSACTIONS IN DISTRIBUTED AND PARALLEL ENVIRONMENTS - Automated techniques are disclosed for minimizing communication between nodes in a system comprising multiple nodes for executing requests in which a request type is associated with a particular node. For example, a technique comprises the following steps. Information is maintained about frequencies of compound requests received and individual requests comprising the compound requests. For a plurality of request types which frequently occur in a compound request, the plurality of request types is associated to a same node. As another example, a technique for minimizing communication between nodes, in a system comprising multiple nodes for executing a plurality of applications, comprises the steps of maintaining information about an amount of communication between said applications, and using said information to place said applications on said nodes to minimize communication among said nodes. | 05-01-2014 |
20140130054 | SYSTEM AND METHOD FOR CLUSTER MANAGEMENT - A system and method of managing a cluster of distributed machines is described. A cluster manager receives status updates regarding tasks running on each machine in the cluster from a task tracker running on the machine. The cluster manager receives resource requests from a job tracker created by a client wishing to run a job in the cluster. The cluster manager is responsible for implementing push-based fair scheduling of resources to the job trackers. The job tracker is responsible for running tasks for one job in the resource identified by the cluster manager. In one embodiment, the job tracker can run in the client for small jobs and in the cluster for larger jobs. The cluster manager can also be restarted, for example, for software updates without restarting the cluster. | 05-08-2014 |
20140130055 | SYSTEMS AND METHODS FOR PROVISIONING OF STORAGE FOR VIRTUALIZED APPLICATIONS - Methods and systems described herein implement an SLA-based dynamic provisioning of storage for virtualized applications or virtual machines (VMs) on shared storage. The shared storage can be located behind a storage area network (SAN) or on a virtual distributed storage system that aggregates storage across direct attached storage in the server or host, or behind the SAN or a WAN. | 05-08-2014 |
20140130056 | Parallel Execution Framework - An improved method for dividing and distributing the work of an arbitrary algorithm, having a predetermined stopping condition, for processing by multiple computer systems. A scheduler computer system accesses a representation of a plurality of work units, structured as a directed graph of dependent tasks, then transforms that graph into a weighted graph in which the weights indicate a preferred path or order of traversal of the graph, in turn indicating a preferred order for work units to be executed to reduce the impact of inter-work unit dependencies. The scheduler computer system then assigns work units to one or more worker computer systems, taking into account the preferred order. | 05-08-2014 |
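The weighted-graph traversal in the entry above can be sketched with a priority-driven topological sort. Treating the weight as a plain dispatch priority (highest first) is an illustrative simplification of the patent's "preferred path or order of traversal", and all identifiers are assumptions.

```python
import heapq
from collections import defaultdict

def weighted_order(tasks, deps, weight):
    """Among work units whose dependencies are satisfied, always
    dispatch the one with the highest weight first, so high-weight
    units unblock their dependents early.  `deps` is a list of
    (before, after) edges; returns tasks in dispatch order."""
    indeg = {t: 0 for t in tasks}
    out = defaultdict(list)
    for before, after in deps:          # `before` must finish first
        out[before].append(after)
        indeg[after] += 1
    ready = [(-weight[t], t) for t in tasks if indeg[t] == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, t = heapq.heappop(ready)
        order.append(t)
        for nxt in out[t]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                heapq.heappush(ready, (-weight[nxt], nxt))
    return order
```

A scheduler computer system would then hand the ordered work units to worker systems, honoring this preferred order whenever several units are runnable at once.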
20140130057 | SCHEDULING JOBS IN A CLUSTER - There is provided a method and system for scheduling a job in a cluster, the cluster comprises multiple computing nodes, and the method comprises: defining rules for constructing virtual sub-clusters of the multiple computing nodes; constructing the multiple nodes in the cluster into multiple virtual sub-clusters based on the rules, wherein one computing node can only be included in one virtual sub-cluster; dispatching a received job to a selected virtual sub-cluster; and scheduling at least one computing node for the dispatched job in the selected virtual sub-cluster. Further, the job is dispatched to the selected virtual sub-cluster based on characteristics of the job and/or characteristics of virtual sub-clusters. | 05-08-2014 |
20140137131 | FRAMEWORK FOR JAVA BASED APPLICATION MEMORY MANAGEMENT - A memory management system is implemented at an application server. The management system includes a configuration file including configuration settings for the application server and applications. The configuration settings include multiple memory management rules. The management system also includes a memory management framework configured to manage settings of resources allocated to the applications based on the memory management rules. The applications request the resources through one or more independently operable request threads. The management system also includes multiple application programming interfaces (APIs) configured to facilitate communication between the applications and the memory management framework. The management system further includes a monitoring engine configured to monitor an execution of the request threads and perform actions based upon the configuration settings. The actions include notifying the applications about memory related issues and taking at least one preventive action to avoid the memory related issues. | 05-15-2014 |
20140137132 | METHOD AND SYSTEM FOR MANAGING APPLICATIONS ON HOME USER EQUIPMENT - A system and method for managing an application on a home user equipment, preferably a set-top-box of a television, the method includes the steps of: | 05-15-2014 |
20140137133 | Maximizing Throughput of Multi-user Parallel Data Processing Systems - The invention provides systems and methods for maximizing revenue generating throughput of a multi-user parallel data processing platform across a set of users of the service provided with the platform. The invented techniques, for any given user contract among the contracts supported by the platform, and on any given billing assessment period, determine a level of a demand for the capacity of the platform associated with the given contract that is met by a level of access to the capacity of the platform allocated to the given contract, and assess billables for the given contract at least in part based on such met demand and a level of assured access to the capacity of the platform associated with the given contract, as well as billing rates, applicable for the given billing assessment period, for the met demand and the level of assured access associated with the given contract. | 05-15-2014 |
20140143785 | Delegating Processing from Wearable Electronic Device - In one embodiment, an apparatus includes a wearable computing device including one or more processors and a memory. The memory is coupled to the processors and includes instructions executable by the processors. When executing the instructions, the processors analyze a task of an application; analyze one or more characteristics of the wearable computing device; determine to delegate the task based on the analysis of the task and the analysis of the characteristics; delegate the task to be processed by one or more computing devices separate from the wearable computing device; and receive from the computing devices results from processing the delegated task. | 05-22-2014 |
20140143786 | MANAGEMENT OF COPY SERVICES RELATIONSHIPS VIA POLICIES SPECIFIED ON RESOURCE GROUPS - Storage resources are organized into resource groups that are each uniquely identified by a resource group label, and each of the storage resources has at least one resource group attribute associating a storage resource object with the resource groups and associating at least one policy via one of the resource group attributes in the resource groups with the storage resources. A resource group attribute is defined to specify a policy prescribing the copy services relationships between the storage resources associated with the plurality of resource groups. A resource group label attribute of the resource group is utilized, by a policy prescribing the copy services relationships, to identify at least one of the resource groups within a storage subsystem. The resource group label attribute is used in conjunction with one of the resource group attributes in one of the resource groups and in one of a multiplicity of user ID accounts. | 05-22-2014 |
20140143787 | METHODS AND APPARATUS FOR RESOURCE MANAGEMENT IN CLUSTER COMPUTING - Embodiments of an event-driven resource management technique may enable the management of cluster resources at a sub-computer level (e.g., at the thread level) and the decomposition of jobs at an atomic (task) level. A job queue may request a resource for a job from a resource manager, which may locate a resource in a resource list and grant the resource to the job queue. After the resource is granted, the job queue sends the job to the resource, on which the job may be partitioned into tasks and from which additional resources may be requested from the resource manager. The resource manager may locate additional resources in the list and grant the resources to the resource. The resource sends the tasks to the granted resources for execution. As resources complete their tasks, the resource manager is informed so that the status of the resources in the list can be updated. | 05-22-2014 |
20140143788 | ASSIGNMENT METHOD AND MULTI-CORE PROCESSOR SYSTEM - An assignment method executed by a given core of a multi-core processor includes identifying for each core, the number of storage areas to be used by a given thread and the number of storage areas used by threads already assigned; detecting for each core, a highest value from the number of storage areas used by the threads already assigned; determining whether a sum of a greater value of the detected highest value of a core selected as a candidate assignment destination and the number of storage areas to be used by the given thread, and the detected highest value of the cores excluding the selected core, is at most the number of storage areas of the shared resource; and assigning the given thread to the selected core, when the sum is at most the number of storage areas of the shared resource. | 05-22-2014 |
20140149991 | SCHEDULING SYSTEM, DATA PROCESSING SYSTEM, AND SCHEDULING METHOD - A scheduling system includes a processor that is configured to assign a process to at least one data processing system among plural data processing systems, based on an execution request for the process; estimate time consumed for completion of a first process, when the process is the first process; and append specific information to the first process, based on the estimated time. | 05-29-2014 |
20140149992 | SYSTEM AND METHOD FOR SUPPORTING METERED CLIENTS WITH MANYCORE - In some embodiments, the invention involves partitioning resources of a manycore platform for simultaneous use by multiple clients, or adding/reducing capacity to a single client. Cores and resources are activated and assigned to a client environment by reprogramming the cores' route tables and source address decoders. Memory and I/O devices are partitioned and securely assigned to a core and/or a client environment. Instructions regarding allocation or reallocation of resources are received by an out-of-band processor having privileges to reprogram the chipsets and cores. Other embodiments are described and claimed. | 05-29-2014 |
20140157281 | PRIORITY-BASED MANAGEMENT OF SYSTEM LOAD LEVEL - Systems, methods, and computer program products are described herein for managing computer system resources. A plurality of modules (e.g., virtual machines or other applications) may be allocated across multiple computer system resources (e.g., processors, servers, etc.). Each module is assigned a priority level. Furthermore, a designated utilization level is assigned to each resource of the computer system. Each resource supports one or more of the modules, and prioritizes operation of the supported modules according to the corresponding assigned priority levels. Furthermore, each resource maintains operation of the supported modules at the designated utilization level. | 06-05-2014 |
20140157282 | method for operating a real-time critical application on a control unit - A method for operating a real-time critical application, having at least two operating modes, on a control unit of a motor vehicle having at least two parallel processor cores or microprocessors, includes: reading out configuration data assigned to a selected operating mode from a memory assigned to the control unit, the configuration data having information concerning (i) the ability of task lists assigned to the selected operating mode to be executed on each of the at least two parallel processor cores or microprocessors, and/or (ii) the processor cores to be used for the operating mode selected; and determining, on the basis of the read configuration data, which of the at least two parallel processor cores or microprocessors is necessary to operate the real-time critical application and which is able to be switched off or operated in an energy-saving mode. | 06-05-2014 |
20140157283 | ATTRIBUTING CAUSALITY TO PROGRAM EXECUTION CAPACITY MODIFICATIONS - Techniques are described for managing program execution capacity, such as for a group of computing nodes that are provided for executing one or more programs for a user. In some situations, dynamic program execution capacity modifications for a computing node group that is in use may be performed periodically or otherwise in a recurrent manner, such as to aggregate multiple modifications that are requested or otherwise determined to be made during a period of time. In addition, various operations may be performed to attribute causality information or other responsibility for particular program execution capacity modifications that are performed, including by attributing a single event as causing one capacity modification, and a combination of multiple events as possible causes for another capacity modification. The techniques may in some situations be used in conjunction with a fee-based program execution service that executes multiple programs on behalf of multiple users of the service. | 06-05-2014 |
20140165071 | METHOD AND SYSTEM FOR MANAGING ALLOCATION OF TASKS TO BE CROWDSOURCED - A method and system for managing allocation of tasks to a plurality of crowdsourcing arms is disclosed. The method includes distributing a set of tasks to the plurality of crowdsourcing arms based on a predefined condition. In response to the distributing, verification data corresponding to the plurality of crowdsourcing arms is received after a predefined interval. The predefined condition is then updated based on the verification data received. Further, the set of tasks is redistributed among the plurality of crowdsourcing arms based on the updated predefined condition. | 06-12-2014 |
20140165072 | TECHNIQUE FOR SAVING AND RESTORING THREAD GROUP OPERATING STATE - A streaming multiprocessor (SM) included within a parallel processing unit (PPU) is configured to suspend a thread group executing on the SM and to save the operating state of the suspended thread group. A load-store unit (LSU) within the SM re-maps local memory associated with the thread group to a location in global memory. Subsequently, the SM may re-launch the suspended thread group. The LSU may then perform local memory access operations on behalf of the re-launched thread group with the re-mapped local memory that resides in global memory. | 06-12-2014 |
20140165073 | Method and System for Hardware Assisted Semaphores - A method includes receiving a request to access a resource; determining a presence of a memory buffer in a hardware-assisted memory pool; and determining a response to the request to access the resource based on the presence of the memory buffer. A system includes a plurality of processors, a resource, and a hardware-assisted memory pool including a memory buffer; one of the plurality of processors receives a request to access the resource, determines a presence of the memory buffer, and determines a response to the request to access the resource based on the presence of the memory buffer. | 06-12-2014 |
20140165074 | SOFT CO-PROCESSORS TO PROVIDE A SOFTWARE SERVICE FUNCTION OFF-LOAD ARCHITECTURE IN A MULTI-CORE ARCHITECTURE - A method of distributing functions among a plurality of cores in a multi-core processing environment can include organizing cores of the multi-core processing environment into a plurality of different service pools. Each of the plurality of service pools can be associated with at least one function and have at least one core executing at least one soft co-processor that performs the associated function. The method further can include, responsive to a request from a primary processor to offload a selected function, selecting an available soft co-processor from a service pool associated with the selected function and assigning the selected function to the selected soft co-processor. The method also can include marking the selected soft co-processor as busy and, responsive to receiving an indication from the soft co-processor that processing of the selected function has completed, marking the selected soft co-processor as available. | 06-12-2014 |
20140173611 | SYSTEM AND METHOD FOR LAUNCHING DATA PARALLEL AND TASK PARALLEL APPLICATION THREADS AND GRAPHICS PROCESSING UNIT INCORPORATING THE SAME - A system and method for launching data parallel and task parallel application threads. In one embodiment, the system includes: (1) a global thread launcher operable to retrieve a launch request from a queue and track buffer resources associated with the launch request and allocate output buffers therefor and (2) a local thread launcher associated with a streaming multiprocessor and operable to receive the launch request from the global thread launcher, set a program counter and resource pointers of pipelines of the streaming multiprocessor and receive reports from pipelines thereof as threads complete execution. | 06-19-2014 |
20140173612 | Energy Conservation and Hardware Usage Management for Data Centers - A power management and data center resource monitoring mechanism is provided for selecting new processing elements in a data center. When a condition is detected for selecting new processing elements, one or more processing elements are selected as the new processing elements based on at least a temperature parameter and a usage history parameter of at least some of the processing elements in the data center. Workload is consolidated onto the new processing elements to conserve energy. | 06-19-2014 |
20140173613 | MANAGING RESOURCE POOLS FOR DEADLOCK AVOIDANCE - In an illustrative embodiment of a method for managing a resource pool for deadlock avoidance, a computer receives a request from a thread for a connection from the resource pool, and determines whether the thread currently has at least one connection from the resource pool. Responsive to a determination that the thread currently has at least one connection from the resource pool, a new concurrent connection from one of a reserved partition of the resource pool is allocated and the connection is returned to the thread. | 06-19-2014 |
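The reserved-partition idea in the entry above can be sketched as a small pool class; the class name, method names, and sizes are assumptions for illustration, not the patent's own terminology.

```python
class ConnectionPool:
    """Pool split into a main partition and a reserved partition;
    a thread that already holds a connection draws additional (nested)
    connections from the reserve, so nested requests cannot deadlock
    against first-time requests that exhaust the main partition."""

    def __init__(self, main_size, reserved_size):
        self.main_free = main_size
        self.reserved_free = reserved_size
        self.held = {}  # thread id -> number of connections held

    def acquire(self, thread_id):
        if self.held.get(thread_id, 0) > 0:
            # Nested request: satisfy it from the reserved partition.
            if self.reserved_free == 0:
                return False
            self.reserved_free -= 1
        else:
            if self.main_free == 0:
                return False
            self.main_free -= 1
        self.held[thread_id] = self.held.get(thread_id, 0) + 1
        return True
```

With one main and one reserved connection, a second thread is refused when the main partition is exhausted, yet the first thread's nested request still succeeds from the reserve.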
20140173614 | SENDING TASKS BETWEEN VIRTUAL MACHINES BASED ON EXPIRATION TIMES - In an embodiment, if an estimated time to perform a task by a first virtual machine is less than or equal to an expiration time of the first virtual machine minus the current time, the task is performed by the first virtual machine. If the estimated time to perform the task by the first virtual machine is greater than the expiration time of the first virtual machine minus the current time, a selected virtual machine is selected from among a plurality of virtual machines with a smallest estimated time to perform the task and a request to perform the task is sent to the selected virtual machine. | 06-19-2014 |
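The dispatch rule in the entry above reduces to a short conditional; the function name and dictionary layout are assumed for illustration.

```python
def dispatch_task(est_times, expirations, first_vm, now):
    """Run the task on `first_vm` if its estimated time fits before that
    VM's expiration; otherwise select the VM with the smallest estimated
    time to perform the task and send the request there."""
    if est_times[first_vm] <= expirations[first_vm] - now:
        return first_vm
    return min(est_times, key=est_times.get)
```

For instance, a task estimated at 10 units stays on vm1 while 20 units remain before expiration, but is redirected to the fastest VM when only 5 units remain.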
20140173615 | CONDITIONALLY UPDATING SHARED VARIABLE DIRECTORY (SVD) INFORMATION IN A PARALLEL COMPUTER - Methods, parallel computers, and computer program products for conditionally updating shared variable directory (SVD) information in a parallel computer are provided. Embodiments include a runtime optimizer receiving a broadcast reduction operation header. The broadcast reduction operation header includes an SVD key and a first SVD address. The first SVD address is associated with the SVD key in a first SVD associated with a first task. Embodiments also include the runtime optimizer retrieving from a remote address cache associated with a second task, a second SVD address indicating a location within a memory partition associated with the first SVD, in response to receiving the broadcast reduction operation header. Embodiments also include the runtime optimizer determining that the first SVD address does not match the second SVD address and updating the remote address cache with the first SVD address. | 06-19-2014 |
20140173616 | ADAPTIVE RESOURCE USAGE LIMITS FOR WORKLOAD MANAGEMENT - According to an embodiment of the present invention, a system assigns at least one workload a hard share quantity and at least one other workload a soft share quantity or a hard share quantity. The system allocates a resource to the workloads based on the hard share quantity and the soft share quantity of active workloads in a predefined interval. A hard share quantity indicates a maximum resource allocation and a soft share quantity enables allocation of additional available processor time. Embodiments of the present invention further include a method and computer program product for allocating a resource to workloads in substantially the same manner as described above. | 06-19-2014 |
20140173617 | DYNAMIC TASK COMPLETION SCALING OF SYSTEM RESOURCES FOR A BATTERY OPERATED DEVICE - Methods, apparatuses, and computer program products for dynamic task completion scaling of system resources for a battery operated device are provided. Embodiments include determining, by a task completion controller, availability of system resources; retrieving, by the task completion controller, historical user-specific task performance data corresponding to a user; and performing, by the task completion controller, a system action based on the determined availability of system resources and the retrieved historical user-specific task performance data. | 06-19-2014 |
20140173618 | SYSTEM AND METHOD FOR MANAGEMENT OF BIG DATA SETS - A system and method for predicting the amount of time and/or resources required to execute a job on a big data set, and/or a system and method for automatically providing one or more suitable commands to a user for constructing a job for manipulating a big data set. The system and method are optionally and preferably implemented with regard to Hadoop. | 06-19-2014 |
20140173619 | INFORMATION PROCESSING DEVICE AND METHOD FOR CONTROLLING INFORMATION PROCESSING DEVICE - The present invention includes a plurality of computing units executing a plurality of threads including a communication control thread to which a receiving process by polling is assigned. In a CPU core, a computing unit executing the communication control thread performs polling in a memory region indicating notification of arrival of data and waits for execution of the receiving process until arrival of data, and when a computing unit executing an application thread executes a process assigned to the application thread, the computing unit executing the communication control thread moves to a resource-saving mode in which the use of physical resources is suppressed. | 06-19-2014 |
20140173620 | RESOURCE ALLOCATION METHOD AND RESOURCE MANAGEMENT PLATFORM - The present disclosure is applicable to the technical field of computer technology. Disclosed are a resource allocation method and a resource management platform. The method includes: receiving a resource request sent by a resource requester, where the resource request includes the resource demand and application feature of a resource; determining a host machine for allocating resources to the resource requester according to the resource request and a resource application feature allocation policy; and controlling the host machine to allocate resources to the resource requester and return resource allocation information to the resource requester. This solves the prior-art problem of overall host-machine performance degradation caused by hardware resource competition among a plurality of applications sharing hardware resources, improves the utilization efficiency of server resources, reduces hardware loss, and improves the user experience. | 06-19-2014 |
20140173621 | CONSERVING POWER THROUGH WORK LOAD ESTIMATION FOR A PORTABLE COMPUTING DEVICE USING SCHEDULED RESOURCE SET TRANSITIONS - A start time to begin transitioning resources to states indicated in the second resource state set is scheduled based upon an estimated amount of processing time to complete transitioning the resources. At a scheduled start time, a process starts in which the states of one or more resources are switched from states indicated by the first resource state set to states indicated by the second resource state set. Scheduling the process of transitioning resource states to begin at a time that allows the process to be completed just in time for the resource states to be immediately available to the processor upon entering the second application state helps minimize adverse effects of resource latency. This calculation for the time that the process should be completed just in time may be enhanced when system states and transitions between states are measured accurately and stored in memory of the portable computing device. | 06-19-2014 |
20140173622 | DATA ANALYSIS SYSTEM - The present invention provides a method of data analysis in which data subscriptions are defined and data for each subscription can be collected for analytical purposes. Supplemental queries based on newly received information can be generated automatically, and old queries can be eliminated automatically on the basis that they are rendered obsolete, in terms of not providing novel information in comparison to other queries, and that their results are not being used. | 06-19-2014 |
20140181829 | PSEUDO-RANDOM HARDWARE RESOURCE ALLOCATION - Methods and apparatus for pseudo-random hardware resource allocation through a plurality of hardware elements. In an embodiment, resource list entries are configured to each identify one hardware element of the plurality of hardware elements. Index list entries are configured to each identify one resource list entry. An index list pointer is set to identify a first index list entry of the plurality of index list entries, and hardware resources are requested from a first hardware element of the plurality of hardware elements by identifying, using the index list pointer, the first index list entry; identifying, using the first index list entry, a first resource list entry; selecting the hardware element identified by the first resource list entry as the first hardware element; and sending a request for hardware resources to the first hardware element. | 06-26-2014 |
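The two-level indirection in the entry above (index list entries naming resource list entries, which name hardware elements) can be sketched as below; the class and attribute names are assumptions for illustration.

```python
class PseudoRandomAllocator:
    """Resource-list entries each identify a hardware element; index-list
    entries each identify a resource-list entry; a pointer walks the
    index list, so a shuffled index order spreads requests over the
    hardware elements in a pseudo-random sequence."""

    def __init__(self, elements, index_order):
        self.resource_list = list(elements)    # entry -> hardware element
        self.index_list = list(index_order)    # entry -> resource-list slot
        self.pointer = 0                       # index list pointer

    def next_element(self):
        slot = self.index_list[self.pointer]
        self.pointer = (self.pointer + 1) % len(self.index_list)
        return self.resource_list[slot]
```

With elements ['A', 'B', 'C'] and index order [2, 0, 1], successive requests are routed to C, A, B, then wrap back to C.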
20140181830 | THREAD MIGRATION SUPPORT FOR ARCHITECTUALLY DIFFERENT CORES - According to one embodiment, a processor includes a plurality of processor cores for executing a plurality of threads, a shared storage communicatively coupled to the plurality of processor cores, a power control unit (PCU) communicatively coupled to the plurality of processors to determine, without any software (SW) intervention, if a thread being performed by a first processor core should be migrated to a second processor core, and a migration unit, in response to receiving an instruction from the PCU to migrate the thread, to store at least a portion of architectural state of the first processor core in the shared storage and to migrate the thread to the second processor core, without any SW intervention, such that the second processor core can continue executing the thread based on the architectural state from the shared storage without knowledge of the SW. | 06-26-2014 |
20140181831 | DEVICE AND METHOD FOR OPTIMIZATION OF DATA PROCESSING IN A MapReduce FRAMEWORK - A MapReduce framework for large-scale data processing is optimized by the method of the invention, which can be implemented by a master node. The method comprises reception of data from worker nodes on read pointer locations pointing to the input data of tasks executed by these worker nodes, and stealing of work from these tasks, the stolen work being applied to input data that has not yet been processed by the task from which the work is stolen. | 06-26-2014 |
20140181832 | RESOURCE ALLOCATION FOR A PLURALITY OF RESOURCES FOR A DUAL ACTIVITY SYSTEM - For resource allocation in a dual activity system, each of the dual activities may be started at a static quota and allocated its respective static quota of resources, and a determination is made as to which of the dual activities is the demanding dual activity. The resource boundary may be increased for a resource request for at least one of the dual activities until a resource request for an alternative one of the dual activities is rejected. A reduced actual resource boundary for the demanding dual activity is calculated based on a multiplicative decrease of that activity's actual resource boundary, the resource boundary for the at least one of the dual activities may be reduced, and a wait-after-decrease mode may be commenced until the current resource usage is less than or equal to the reduced resource boundary. | 06-26-2014 |
20140189700 | RESOURCE MANAGEMENT FOR NORTHBRIDGE USING TOKENS - A processor uses a token scheme to govern the maximum number of memory access requests each of a set of processor cores can have pending at a northbridge of the processor. To implement the scheme, the northbridge issues a minimum number of tokens to each of the processor cores and keeps a number of tokens in reserve. In response to determining that a given processor core is generating a high level of memory access activity the northbridge issues some of the reserve tokens to the processor core. The processor core returns the reserve tokens to the northbridge in response to determining that it is not likely to continue to generate the high number of memory access requests, so that the reserve tokens are available to issue to another processor core. | 07-03-2014 |
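The token scheme in the entry above can be sketched as a small bookkeeping class; the class and method names, and the lend/return interface, are assumptions for illustration.

```python
class TokenManager:
    """Each core starts with a minimum token grant, capping its pending
    memory access requests; a shared reserve is lent to a core that is
    generating heavy memory traffic and returned when the burst ends,
    making the reserve available to other cores."""

    def __init__(self, cores, min_tokens, reserve):
        self.tokens = {core: min_tokens for core in cores}
        self.reserve = reserve

    def lend_reserve(self, core, wanted):
        """Lend up to `wanted` reserve tokens to a busy core."""
        granted = min(wanted, self.reserve)
        self.reserve -= granted
        self.tokens[core] += granted
        return granted

    def return_reserve(self, core, count):
        """Core hands back reserve tokens it no longer needs."""
        count = min(count, self.tokens[core])
        self.tokens[core] -= count
        self.reserve += count
```

A core asking for more than the reserve holds simply receives whatever is left, so the per-core cap is always the minimum grant plus whatever reserve it currently holds.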
20140189701 | METHODS, SYSTEMS AND APPARATUSES FOR PROCESSOR SELECTION IN MULTI-PROCESSOR SYSTEMS - Methods, systems and apparatuses for processor selection in multi-processor systems are disclosed. An example method includes, for each of a plurality of processors, retrieving a list of interrupt instances for a plurality of interrupt types; calculating an interrupt instance count value for each of the plurality of interrupt types; multiplying a corresponding weighting factor by the interrupt instance count value for each one of the plurality of interrupt types to generate a plurality of weighted interrupt values; calculating an overall weighted vector value based on the sum of the plurality of weighted interrupt values; and designating one of the plurality of processors as a selected processor based on the lowest overall weighted vector value. | 07-03-2014 |
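The weighted-vector selection in the entry above maps directly onto a few lines of code; the function name and dictionary layout are assumed for illustration.

```python
def select_processor(interrupt_counts, weights):
    """interrupt_counts maps processor -> {interrupt type -> instance
    count}; each count is multiplied by its type's weighting factor,
    the weighted values are summed into an overall weighted vector
    value, and the processor with the lowest value is designated."""
    def weighted_vector(proc):
        return sum(weights[t] * n for t, n in interrupt_counts[proc].items())
    return min(interrupt_counts, key=weighted_vector)
```

For example, with weights {'net': 2, 'disk': 1}, a processor with 3 network and 1 disk interrupt scores 7, one with 1 network and 2 disk interrupts scores 4, and the latter is selected.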
20140189702 | SYSTEM AND METHOD FOR AUTOMATIC MODEL IDENTIFICATION AND CREATION WITH HIGH SCALABILITY - A system includes a library of algorithms, and a request module configured to receive an execution request. The system also includes a job scheduler/optimizer module configured to select algorithms from the library and to create at least one execution job based on the algorithms and the execution request. The system further includes a resource module configured to determine execution computing resources from multiple computing sources, including internal computing resources and external computing resources. The system also includes an executor module configured to transmit an execution job to the computing resources. | 07-03-2014 |
20140189703 | SYSTEM AND METHOD FOR DISTRIBUTED COMPUTING USING AUTOMATED PROVISONING OF HETEROGENEOUS COMPUTING RESOURCES - A system for distributed computing includes a job scheduler module configured to identify a job request including request requirements and comprising one or more individual jobs. The system also includes a resource module configured to determine an execution set of computing resources from a pool of computing resources based on the request requirements. Each computing resource of the pool of computing resources has an application programming interface. The pool of computing resources comprises public cloud computing resources and internal computing resources. The system further includes a plurality of interface modules, where each interface module is configured to facilitate communication with the computing resources using the associated application programming interface. The system also includes an executor module configured to identify the appropriate interface module based on facilitating communication with the execution computing resource and transmit jobs for execution to the execution computing resource using the interface modules. | 07-03-2014 |
20140189704 | HETERGENEOUS PROCESSOR APPARATUS AND METHOD - A heterogeneous processor architecture is described. For example, a processor according to one embodiment of the invention comprises: a first set of one or more physical processor cores having first processing characteristics; a second set of one or more physical processor cores having second processing characteristics different from the first processing characteristics; virtual-to-physical (V-P) mapping logic to expose a plurality of virtual processors to software, the plurality of virtual processors to appear to the software as a plurality of homogeneous processor cores, the software to allocate threads to the virtual processors as if the virtual processors were homogeneous processor cores; wherein the V-P mapping logic is to map each virtual processor to a physical processor within the first set of physical processor cores or the second set of physical processor cores such that a thread allocated to a first virtual processor by software is executed by a physical processor mapped to the first virtual processor from the first set or the second set of physical processors. | 07-03-2014 |
20140189705 | JOB HOMING - A method executed by a controller of a plurality of processing elements to reduce processing time of a data packet in a network element. The processing elements are arranged in a matrix. Each processing element has a point to point connection with each adjacent processing element, known as a hop. Each processing element also includes a separate processing element storage. The data packet includes data and a descriptor, the data being transmitted to a first processing element for storage before the descriptor is received by the controller, and the data being processed after the descriptor is received. The method includes receiving the descriptor at the controller, determining that the first processing element does not have an available resource for processing the data, determining a second processing element based on a least number of hops to the first processing element, and transmitting the descriptor to the second processing element. | 07-03-2014 |
20140189706 | RELIABILITY-AWARE APPLICATION SCHEDULING - Reliability-aware scheduling of processing jobs on one or more processing entities is based on reliability scores assigned to processing entities and minimum acceptable reliability scores of processing jobs. The reliability scores of processing entities are based on independently derived statistical reliability models as applied to reliability data already available from modern computing hardware. Reliability scores of processing entities are continually updated based upon real-time reliability data, as well as prior reliability scores, which are weighted in accordance with the statistical reliability models being utilized. Individual processing jobs specify reliability requirements from which the minimum acceptable reliability score is determined. Such jobs are scheduled on processing entities whose reliability score is greater than or equal to the minimum acceptable reliability score for such jobs. Already scheduled jobs can be rescheduled on other processing entities if reliability scores change. Additionally, a hierarchical scheduling approach can be utilized. | 07-03-2014 |
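The two core operations in the entry above, threshold-based placement and weighted score updating, can be sketched as below; the function names, the preference for the highest-scoring qualifying entity, and the blending weight are assumptions for illustration.

```python
def schedule_job(min_score, entity_scores):
    """Place a job on a processing entity whose reliability score is
    greater than or equal to the job's minimum acceptable score; here
    the highest-scoring qualifying entity is preferred. Returns None
    when no entity qualifies."""
    eligible = {e: s for e, s in entity_scores.items() if s >= min_score}
    if not eligible:
        return None
    return max(eligible, key=eligible.get)

def update_score(prior, observed, prior_weight=0.8):
    """Continually update an entity's reliability score from real-time
    reliability data, weighting the prior score as the entry describes
    (an exponential blend is one common choice)."""
    return prior_weight * prior + (1 - prior_weight) * observed
```

If an entity's updated score drops below a scheduled job's minimum, the rescheduling step described above would move the job to another qualifying entity.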
20140189707 | Virtual Machine Placement in a Cloud-Based Network - Methods and apparatuses for real-time adaptive placement of a virtual machine are provided. In an embodiment, a virtual machine is received at a routing component, the routing component having a processor in communication with a memory. By the processor in communication with the memory, a target data center is determined from a plurality of data centers based on a data center index, and the virtual machine is routed to the target data center. A physical machine is chosen within the target data center for placing the virtual machine. | 07-03-2014 |
20140189708 | TERMINAL AND METHOD FOR EXECUTING APPLICATION IN SAME - The present invention relates to a terminal and a method for executing an application in the same, including the steps of: confirming the weight of the application when a code of the application to be executed is inputted; calculating an allocation index using the confirmed weight; selecting a processing device for executing the application, between a central processing unit and a graphics processing unit, through the calculated allocation index; and executing the application through the selected processing device. Accordingly, the present invention determines whether execution of the application is assigned to the central processing unit or the graphics processing unit according to the weight designated by the user. Thus, owing to the increased degree of freedom in workload distribution, the present invention can prevent the workload from tipping entirely toward one processing unit. | 07-03-2014 |
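The weight-to-device selection in the entry above can be sketched with a simple threshold rule; the index formula, threshold, and function name are illustrative assumptions, since the patent does not specify how the allocation index is computed.

```python
def select_device(weight, base_cost, threshold):
    """Compute an allocation index from the user-designated weight and a
    base workload cost, then select the CPU or GPU for execution."""
    allocation_index = weight * base_cost
    return "GPU" if allocation_index >= threshold else "CPU"
```

A high user-assigned weight thus steers a heavy workload to the GPU, while lightly weighted work stays on the CPU.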
20140196048 | IDENTIFYING AND THROTTLING TASKS BASED ON TASK INTERACTIVITY - The described implementations relate to processing of electronic data. One implementation is manifest as a system that can include logic and at least one processing device configured to execute the logic. The logic can be configured to receive a first task request to execute a first task that uses a resource when performed. The first task can have an associated first level of interactivity. The logic can also be configured to receive a second task request to execute a second task that also uses the resource when performed. The second task can have an associated second level of interactivity. The logic can also be configured to selectively throttle the first task and the second task based upon the first level of interactivity and the second level of interactivity. | 07-10-2014 |
20140196049 | SYSTEM AND METHOD FOR IMPROVING MEMORY USAGE IN VIRTUAL MACHINES - A method (and system) for managing memory among virtual machines in a system having a plurality of virtual machines, includes providing at least one memory optimization mechanism which can reduce memory usage of a virtual machine at a cost of increasing CPU usage. Information on memory usage and CPU usage of each virtual machine is periodically collected. In response to detecting that a first virtual machine exhibits a high level of memory use, at least one second virtual machine with extra CPU capacity is identified. The at least one memory optimization mechanism is applied to the at least one second virtual machine, to reduce memory used by the at least one second virtual machine, thereby providing a portion of freed memory. The portion of freed memory is then allocated to the first virtual machine. | 07-10-2014 |
20140196050 | PROCESSING SYSTEM INCLUDING A PLURALITY OF CORES AND METHOD OF OPERATING THE SAME - A system and method of allocating resources among cores in a multi-core system is disclosed. The system and method determine cores that are able to process tasks to be performed, and use history of usage information to select a core to process the tasks. The system may be a heterogeneous multi-core processing system, and may include a system on chip (SoC). | 07-10-2014 |
20140196051 | RESOURCE MANAGEMENT USING ENVIRONMENTS - Apparatus, systems, and methods may operate to receive time-based reservation requests for predefined resource environments comprising resource types that include hardware, software, and data, among others. Additional activities may include detecting a conflict between at least one of the resource types in a first one of the predefined resource environments and at least one of the resource types in a second one of the predefined resource environments, and resolving the conflict in favor of the first one of the predefined resource environments by reserving additional resource elements in a cloud computing architecture and/or reserving a less capable version of the second one of the predefined resource environments. Additional apparatus, systems, and methods are disclosed. | 07-10-2014 |
20140196052 | COMPUTER SYSTEM - In the present invention, a management apparatus includes a unit configured to store management information including a throughput of each of a plurality of computers, a unit configured to acquire a request value which includes a throughput that is required for executing a program from a program execution computer to which execution of a program has been assigned among a plurality of computers, a selecting unit configured to select a computer of a throughput compliant with the request value from among a plurality of computers, and a switchover control unit configured to allocate the program allocated to the program execution computer to the selected computer. | 07-10-2014 |
20140196053 | THREAD-AGILE EXECUTION OF DYNAMIC PROGRAMMING LANGUAGE PROGRAMS - Methods, systems, and products are provided for thread-agile dynamic programming language (‘DPL’) program execution. Thread-agile DPL program execution may be carried out by receiving, in a message queue, a message for an instance of a DPL program and determining whether the host application has a stored state object for the instance of the DPL program identified by the message. If the host application has a stored state object for the DPL program, thread-agile DPL program execution may also be carried out by retrieving the state object; preparing a thread available from a thread pool for execution of the instance of the DPL program in dependence upon the state object and an execution context for the instance of the DPL program; providing, to an execution engine for executing the DPL program, the state object and the prepared thread; and passing the message to the execution engine. | 07-10-2014 |
20140201752 | MULTI-TENANT LICENSE ENFORCEMENT ACROSS JOB REQUESTS - Scheduling job requests submitted by multiple tenants in a manner that honors multiple software license agreements for the multiple tenants. A queue persistently stores job requests that await scheduling. A job state tracking component persistently tracks a state of the job requests, and perhaps provides job requests into the queue. A software license agreement enforcer reviews the job requests in the queue, selects one or more job requests that should be scheduled next based on the license agreements, and provides the selected job requests to a resource manager. A subscriber/publisher pool may be used by the various components to communicate. This decouples the communication from being a simple one-to-one correspondence, and instead allows communication from a component of one type to a component of the other type, whichever instance of those components happens to be operating. | 07-17-2014 |
20140201753 | SCHEDULING MAPREDUCE JOBS IN A CLUSTER OF DYNAMICALLY AVAILABLE SERVERS - There is provided a method, a system and a computer program product for improving performance and fairness in sharing a cluster of dynamically available computing resources among multiple jobs. The system collects at least one parameter associated with availability of a plurality of computing resources. The system calculates, based on the collected parameter, an effective processing time each computing resource can provide to each job. The system allocates, based on the calculated effective processing time, the computing resources to the multiple jobs, whereby the multiple jobs are completed at a same time or an approximate time. | 07-17-2014 |
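The allocation idea in the abstract above — scale each node's raw speed by its expected availability to get an effective processing rate, then divide capacity among jobs so that they all finish at approximately the same time — can be sketched as follows. This is an illustrative simplification only; the function names and the proportional-split model are assumptions, not the patented method.

```python
def effective_rates(raw_rates, availability):
    """Effective processing a dynamically available node can really deliver:
    its raw speed scaled by the fraction of time it is expected to be up."""
    return {n: raw_rates[n] * availability[n] for n in raw_rates}

def proportional_allocation(jobs_work, total_rate):
    """Give each job a rate share proportional to its remaining work, so
    all jobs complete at approximately the same time."""
    total_work = sum(jobs_work.values())
    return {j: total_rate * w / total_work for j, w in jobs_work.items()}
```

With two equal nodes, one only 50% available, a job with twice the remaining work receives twice the effective rate, so both jobs finish together.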
20140201754 | WIRELESS COMMUNICATION BASE STATION AND WIRELESS COMMUNICATION METHOD - To prevent the switching time from influencing power-saving performance and packet-loss prevention performance, there is provided a wireless communication base station for communicating with a terminal, comprising: a plurality of baseband signal processing units for performing baseband signal processing; a baseband allocation unit for allocating the baseband signal processing to the plurality of baseband signal processing units; and a linear processing unit for composing signals processed by the plurality of baseband signal processing units. The baseband allocation unit selects, for each data block, a baseband signal processing unit to which the baseband signal processing for that data block is to be allocated out of the plurality of baseband signal processing units. Each of the plurality of baseband signal processing units performs the allocated baseband signal processing. The linear processing unit composes, by means of linear calculation, the signals processed by the plurality of baseband signal processing units. | 07-17-2014 |
20140201755 | DATA PARALLEL COMPUTING ON MULTIPLE PROCESSORS - A method and an apparatus that allocate one or more physical compute devices such as CPUs or GPUs attached to a host processing unit running an application for executing one or more threads of the application are described. The allocation may be based on data representing a processing capability requirement from the application for executing an executable in the one or more threads. A compute device identifier may be associated with the allocated physical compute devices to schedule and execute the executable in the one or more threads concurrently in one or more of the allocated physical compute devices. | 07-17-2014 |
20140201756 | ADAPTIVE RESOURCE USAGE LIMITS FOR WORKLOAD MANAGEMENT - According to an embodiment of the present invention, a system assigns at least one workload a hard share quantity and at least one other workload a soft share quantity or a hard share quantity. The system allocates a resource to the workloads based on the hard share quantity and the soft share quantity of active workloads in a predefined interval. A hard share quantity indicates a maximum resource allocation and a soft share quantity enables allocation of additional available processor time. Embodiments of the present invention further include a method and computer program product for allocating a resource to workloads in substantially the same manner as described above. | 07-17-2014 |
20140201757 | PROCESSOR PROVISIONING BY A MIDDLEWARE PROCESSING SYSTEM FOR A PLURALITY OF LOGICAL PROCESSOR PARTITIONS - A middleware processor provisioning process provisions a plurality of processors in a multi-processor environment. The processing capability of the multiprocessor environment is subdivided and multiple instances of service applications start protected processes to service a plurality of user processing requests, where the number of protected processes may exceed the number of processors. A single processing queue is created for each processor. User processing requests are portioned and dispatched across the plurality of processing queues and are serviced by protected processes from corresponding service applications, thereby efficiently using available processing resources while servicing the user processing requests in a desired manner. | 07-17-2014 |
20140208329 | LIVE VIRTUAL MACHINE MIGRATION QUALITY OF SERVICE - A system and method for providing quality of service during live migration includes determining one or more quality of service (QoS) specifications for one or more virtual machines (VMs) to be live migrated. Based on the one or more QoS specifications, a QoS is applied to a live migration of the one or more VMs by controlling resources including at least one of live migration network characteristics and VM execution parameters. | 07-24-2014 |
20140208330 | METHOD AND APPARATUS FOR EFFICIENT SCHEDULING OF MULTITHREADED PROGRAMS - In general, the invention relates to a non-transitory computer readable medium comprising instructions, which when executed by a processor perform a method. The method includes obtaining lock overhead times for a plurality of threads, generating a set of thread groups, wherein each of the plurality of threads is assigned to one of the plurality of thread groups based on the lock overhead times, allocating at least one core of a multi-core system to each of the plurality of thread groups, and assigning a time-quantum for each of the plurality of thread groups, wherein the time-quantum for each of the plurality of thread groups corresponds to an amount of time that threads in each of the plurality of thread groups can execute on the at least one allocated core. | 07-24-2014 |
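The scheduling approach in the abstract above — group threads by their measured lock overhead, allocate cores to groups, and give each group a time-quantum — can be sketched as below. The two-group partition, the threshold, and the doubled quantum for lock-heavy groups are assumptions made for illustration; the claims do not specify these policies.

```python
def group_by_lock_overhead(lock_overheads, threshold):
    """Partition threads into 'high' and 'low' lock-overhead groups.
    lock_overheads maps thread id -> measured fraction of time spent on locks."""
    groups = {"high": [], "low": []}
    for tid, overhead in lock_overheads.items():
        groups["high" if overhead >= threshold else "low"].append(tid)
    return groups

def assign_time_quanta(groups, base_quantum):
    # Give lock-heavy groups a longer quantum so co-scheduled threads can
    # acquire and release their locks within one scheduling window
    # (a hypothetical policy, chosen only to illustrate per-group quanta).
    return {name: base_quantum * (2 if name == "high" else 1)
            for name in groups}
```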
20140215481 | ASSIGNING NODES TO JOBS BASED ON RELIABILITY FACTORS - Assigning nodes to jobs based on reliability factors includes calculating the maximum value of a processor utilization efficiency and assigning an optimal number of spare nodes to the job based on the value of the processor utilization efficiency. | 07-31-2014 |
20140215482 | UNIFIED STORAGE SYSTEM WITH A BLOCK MICRO CONTROLLER AND A HYPERVISOR - In conventional unified storage systems, an I/O for block storage and an I/O for file storage are processed in a single OS without being distinguished, so that it is not possible to perform processes for speedy failure detection or for enhancing performance, such as tuning performance by directly monitoring hardware. The present invention solves the problem by having a block storage-side OS and an OS group managing multiple systems including a file system other than the block storage-side OS coexist within a storage system, wherein the OS group managing multiple systems including a file system other than the block storage-side OS is virtualized by a hypervisor, and wherein a block storage micro-controller and the hypervisor can cooperate in performing processes. | 07-31-2014 |
20140215483 | RESOURCE-USAGE TOTALIZING METHOD, AND RESOURCE-USAGE TOTALIZING DEVICE - A memory allocation/free replacing unit hooks a call of a memory allocating/freeing unit. The memory allocation/free replacing unit generates information required for totalization of a dynamically used memory amount, writes the generated information to a log file, and calls the memory allocating/freeing unit to perform the actual memory allocation and freeing. A totalization processing unit loads the log file and totalizes the dynamically used memory amount for each dynamic library, for each function, or for each thread. | 07-31-2014 |
20140215484 | MANAGING MODEL BUILDING COMPONENTS OF DATA ANALYSIS APPLICATIONS - Data analysis applications include model building components and stream processing components. To increase utility of the data analysis application, in one embodiment, the model building component of the data analysis application is managed. Management includes resource allocation and/or configuration adaptation of the model building component, as examples. | 07-31-2014 |
20140215485 | SYSTEM AND METHOD OF PROVIDING A FIXED TIME OFFSET BASED DEDICATED CO-ALLOCATION OF A COMMON RESOURCE SET - Disclosed are a system, method and computer-readable medium relating to managing resources within a compute environment having a group of nodes or computing devices. The method comprises, for each node in the compute environment: traversing a list jobs having a fixed time relationship, wherein for each job in the list, the following steps occur: obtaining a range list of available timeframes for each job, converting each availability timeframe to a start range, shifting the resulting start range in time by a job offset, for a first job, copying the resulting start range into a node range, and for all subsequent jobs, logically AND'ing the start range with the node range. Next, the method comprises logically OR'ing the node range with a global range, generating a list of acceptable resources on which to start and the timeframe at which to start and creating reservations according to the list of acceptable resources for the resources in the group of computing devices and associated job offsets. | 07-31-2014 |
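The range arithmetic described in the abstract above — shift each job's availability by its fixed offset, AND the shifted ranges per node, then OR the node ranges into a global range of feasible start times — can be sketched with sets of discrete start times. This is a deliberately simplified model (real range lists are intervals); the data shapes and names are assumptions for illustration.

```python
def co_allocation_starts(node_avail, job_offsets):
    """node_avail: {node: {job: set of available start times on that node}}.
    job_offsets: {job: fixed time offset of the job within the co-allocation}.
    Returns the global set of feasible common start times."""
    global_range = set()
    for node, per_job in node_avail.items():
        node_range = None
        for job, starts in per_job.items():
            # Shift each job's availability back by its fixed offset so all
            # jobs are expressed relative to a common reference start time.
            shifted = {t - job_offsets[job] for t in starts}
            # First job seeds the node range; later jobs are AND'ed in.
            node_range = shifted if node_range is None else node_range & shifted
        # OR each node's feasible starts into the global range.
        global_range |= node_range or set()
    return global_range
```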
20140223444 | RESOURCE ASSIGNMENT FOR JOBS IN A SYSTEM HAVING A PROCESSING PIPELINE - A set of jobs to be scheduled is identified ( | 08-07-2014 |
20140223445 | Selecting a Resource from a Set of Resources for Performing an Operation - The described embodiments comprise a selection mechanism that selects a resource from a set of resources in a computing device for performing an operation. In some embodiments, the selection mechanism is configured to perform a lookup in a table selected from a set of tables to identify a resource from the set of resources. When the identified resource is not available for performing the operation and until a resource is selected for performing the operation, the selection mechanism is configured to identify a next resource in the table and select the next resource for performing the operation when the next resource is available for performing the operation. | 08-07-2014 |
20140223446 | Application Load and Type Adaptive Manycore Processor Architecture - Systems and methods provide a processing task load and type adaptive manycore processor architecture, enabling flexible and efficient information processing. The architecture enables executing time variable sets of information processing tasks of differing types on their assigned processing cores of matching types. This involves: for successive core allocation periods (CAPs), selecting specific processing tasks for execution on the cores of the manycore processor for a next CAP based at least in part on core capacity demand expressions associated with the processing tasks hosted on the processor, assigning the selected tasks for execution at cores of the processor for the next CAP so as to maximize the number of processor cores whose assigned tasks for the present and next CAP are associated with the same core type, and reconfiguring the cores so that a type of each core in said array matches a type of its assigned task on the next CAP. | 08-07-2014 |
20140237479 | Virtual Machine-to-Image Affinity on a Physical Server - Techniques, systems, and articles of manufacture for improving virtual machine-to-image affinity on a physical server. A method includes identifying physical machines in a network as candidate source physical machines, wherein each candidate source physical machine stores a first virtual machine image and a set of additional virtual machine images, identifying physical machines in the network as candidate target physical machines, wherein each candidate target physical machine stores one of the additional virtual machine images, and selecting a virtual machine image from the set of additional virtual machine images and selecting a physical machine from the candidate target physical machines such that migrating the selected virtual machine image from a candidate source physical machine to the selected target physical machine results in a maximized image affinity per virtual machine in comparison to other image migration scenarios for the set of additional virtual machine images. | 08-21-2014 |
20140237480 | METHOD, PROCESSING MODULES AND SYSTEM FOR EXECUTING AN EXECUTABLE CODE - The execution of an executable code by a set of processing modules is provided, wherein the executable code is executed by at least one first processing module of the set of processing modules, wherein said executable code comprises a set of parallel executable parts, wherein each parallel executable part of the executable code comprises at least two parallel executable steps, and wherein said executing comprises: detecting by the at least one first processing module a parallel executable part of the set of parallel executable parts of the executable code to be executed; selecting by the at least one first processing module at least two second processing modules of the set of processing modules; and commanding by the at least one first processing module the selected at least two second processing modules to perform the at least two parallel executable steps of the detected parallel executable part of the executable code. | 08-21-2014 |
20140245315 | Logic For Synchronizing Multiple Tasks - Logic (also called “synchronizing logic”) in a co-processor (that provides an interface to memory) receives a signal (called a “declaration”) from each of a number of tasks, based on an initial determination of one or more paths (also called “code paths”) in an instruction stream (e.g. originating from a high-level software program or from low-level microcode) that a task is likely to follow. Once a task (also called “disabled” task) declares its lack of a future need to access a shared data, the synchronizing logic allows that shared data to be accessed by other tasks (also called “needy” tasks) that have indicated their need to access the same. Moreover, the synchronizing logic also allows the shared data to be accessed by the other needy tasks on completion of access of the shared data by a current task (assuming the current task was also a needy task). | 08-28-2014 |
20140245316 | Background Collective Operation Management In A Parallel Computer - Background collective operation management in a parallel computer, the parallel computer including one or more compute nodes operatively coupled for data communications over one or more data communications networks, including: determining, by a management availability module, whether a compute node in the parallel computer is available to perform a background collective operation management task; responsive to determining that the compute node is available to perform the background collective operation management task, determining, by the management availability module, whether the compute node has access to sufficient resources to perform the background collective operation management task; and responsive to determining that the compute node has access to sufficient resources to perform the background collective operation management task, initiating, by the management availability module, execution of the background collective operation management task. | 08-28-2014 |
20140245317 | Resource Sharing Using Process Delay - Methods and systems that reduce the number of instances of a shared resource needed for a processor to perform an operation and/or execute a process without impacting function are provided. A method of processing in a processor is provided. Aspects include determining that an operation to be performed by the processor will require the use of a shared resource. A command can be issued to cause a second operation to not use the shared resource N cycles later. The shared resource can then be used for a first aspect of the operation at cycle X and then used for a second aspect of the operation at cycle X+N. The second operation may be rescheduled according to embodiments. | 08-28-2014 |
20140245318 | DATA PROCESSING WORK ALLOCATION - A processor-implemented method, system, and/or computer program product allocates computer processing work. Input data, which has been deemed to be in need of processing, is stored in a first computer. A virtual machine that is capable of processing the input data is stored on a second computer. A first set of constraint rules contains constraint rules against moving the input data from the first computer, and a second set of constraint rules contains constraint rules against moving the virtual machine from the second computer. Based on the first and second constraint rules, either the virtual machine is moved to the first computer or the input data is moved to the second computer. | 08-28-2014 |
20140245319 | METHOD FOR ENABLING AN APPLICATION TO RUN ON A CLOUD COMPUTING SYSTEM - A method for enabling an application to run on a cloud computing system so that jobs may be computed without having to modify the application. The method includes the step of programming a task processor that relates the parameters of each task of the job to the arguments that need to be passed to an application executable on a compute node in the cloud computing system that is used to process the task. The task processor runs on any compute node in the cloud computing system. | 08-28-2014 |
20140245320 | Self-Perpetuation of a Stochastically Varying Resource Pool - A computer-readable medium has encoded thereon software for maintaining a steady-state worth of an inhomogeneous renewable resource pool. The software includes instructions for causing a data-processing system to evaluate an indicator of a historical worth of the resource pool, to determine a draw amount at least in part on the basis of this indicator, and to output data representative of that draw amount. | 08-28-2014 |
20140245321 | GENERATING TIMING SEQUENCE FOR ACTIVATING RESOURCES LINKED THROUGH TIME DEPENDENCY RELATIONSHIPS - A method, computer program product, and computer system for generating a timing sequence for activating resources linked through time dependency relationships. A Direct Acyclic Graph (DAG) includes nodes and directed edges. Each node represents a unique resource and is a predefined Recovery Time Objective (RTO) node or an undefined RTO node. Each directed edge directly connects two nodes and represents a time delay between the two nodes. The nodes are topologically sorted to order the nodes in a dependency sequence of ordered nodes. A corrected RTO is computed for each ordered node after which an estimated RTO is calculated as a calculated RTO for each remaining undefined RTO node. The ordered nodes in the dependency sequence are reordered according to an ascending order of the corrected RTO of the ordered nodes to form a timing sequence for activating the unique resources represented by the multiple nodes. | 08-28-2014 |
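The sequence-generation procedure in the abstract above — topologically sort the DAG, propagate a corrected Recovery Time Objective (RTO) along the edge delays, then reorder nodes by ascending corrected RTO — can be sketched as follows. The propagation rule (a node's corrected RTO is the larger of its own predefined RTO and the latest predecessor RTO plus edge delay) is my reading of the abstract, not the claimed algorithm verbatim.

```python
from graphlib import TopologicalSorter  # Python 3.9+ standard library

def activation_sequence(edges, rto):
    """edges: {node: {predecessor: delay}}.  rto: {node: predefined RTO or None}.
    Returns (activation order, corrected RTO per node)."""
    deps = {n: set(preds) for n, preds in edges.items()}
    order = list(TopologicalSorter(deps).static_order())
    corrected = {}
    for node in order:
        # A node is ready only after each predecessor is ready plus its delay.
        from_preds = max((corrected[p] + d for p, d in edges.get(node, {}).items()),
                         default=0)
        corrected[node] = max(from_preds, rto.get(node) or 0)
    # Reorder by ascending corrected RTO to form the timing sequence.
    return sorted(order, key=lambda n: corrected[n]), corrected
```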
20140250439 | SYSTEMS AND METHODS FOR PROVISIONING IN A VIRTUAL DESKTOP INFRASTRUCTURE - Systems and methods described herein facilitate provisioning virtual machines (VMs) in a virtual desktop infrastructure (VDI). The VDI includes a virtual desktop management server (VDMS), a VM, and a plurality of datastores. The VDMS includes a management module that is configured to determine a plurality of usage values that are associated with the datastores. The management module is also configured to determine one or more selection penalty values that are associated with one or more thin-provisioned VMs assigned to one or more of the datastores. Further, the management module calculates a plurality of capacity values for the datastores based at least in part on the determined usage values and the determined penalty values such that each of the capacity values corresponds to a separate datastore. Based at least in part on the capacity values, the management module is configured to assign the VM to one of the datastores. | 09-04-2014 |
20140259021 | JOB SCHEDULING IN A SYSTEM OF MULTI-LEVEL COMPUTERS - Systems, methods, and computer program products for job scheduling are disclosed. An exemplary computer-implemented method includes receiving a job in a job scheduling system. At least part of the job is transmitted to a job reader. An indication of one or more functions required for performing the job is received from the job reader. A first computing device is selected from among a plurality of computing devices, where the selection is based, at least in part, on whether the first computing device supports the functions required for performing the job. | 09-11-2014 |
20140259022 | APPARATUS AND METHOD FOR MANAGING HETEROGENEOUS MULTI-CORE PROCESSOR SYSTEM - Disclosed herein is an apparatus and method for managing a heterogeneous multi-core processor system, which can allocate a core to the execution of an application based on the states of cores included in heterogeneous multi-core processors. The apparatus for managing a heterogeneous multi-core processor system includes a management unit for receiving states of cores included in heterogeneous multi-core processors from an operating system layer and managing the states of the cores. A determination unit determines a core to be allocated to execution of an application among the cores included in the heterogeneous multi-core processors, based on the states of the cores received from the management unit. An allocation unit allocates the core determined by the determination unit to the execution of the application. | 09-11-2014 |
20140282577 | DURABLE PROGRAM EXECUTION - Aspects of the subject matter described herein relate to durable program execution. In aspects, a mechanism is described that allows a program to be removed from memory when the program is waiting for an asynchronous operation to complete. When a response for the asynchronous operation is received, completion data is stored in a history, the program is re-executed and the completion data in the history is used to complete the asynchronous operation. The above actions may be repeated until no more asynchronous operations in the history are pending completion. | 09-18-2014 |
20140282578 | LOCALITY AWARE WORK STEALING RUNTIME SCHEDULER - In one embodiment a processor comprises logic to determine a center of mass of a plurality of data dependencies associated with a task and assign the task to a processor in the system which is closest to the center of mass. Other embodiments may be described. | 09-18-2014 |
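The center-of-mass heuristic in the abstract above can be sketched by treating data-dependency locations as points (for example, positions in a NUMA topology grid), averaging them, and picking the nearest processor. The coordinate model and names are assumptions for illustration only.

```python
def closest_processor(dependency_locations, processor_locations):
    """Pick the processor nearest the 'center of mass' of a task's data.
    dependency_locations: list of (x, y) points where the task's data lives.
    processor_locations: list of (name, (x, y)) entries."""
    n = len(dependency_locations)
    cx = sum(x for x, _ in dependency_locations) / n
    cy = sum(y for _, y in dependency_locations) / n
    # Squared Euclidean distance suffices for an argmin comparison.
    return min(processor_locations,
               key=lambda p: (p[1][0] - cx) ** 2 + (p[1][1] - cy) ** 2)[0]
```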
20140282579 | Processing Engine Implementing Job Arbitration with Ordering Status - A processing engine implementing job arbitration with ordering status is disclosed. A method of the disclosure includes receiving, by a job assigner communicably coupled to a plurality of processors, availability status from a plurality of job rings, availability status from the plurality of processors, and job entry completion status from an order manager, identifying, based on the received job entry completion status, a set of job rings from the plurality of job rings that do not exceed threshold conditions maintained by the job assigner, selecting, from the identified set of job rings, a job ring from which to pull a job entry for assignment, wherein the selecting is based on the received availability status of the plurality of job rings, and selecting, based on the received availability status of the plurality of processors, a processor to receive the assignment of the job entry for processing. | 09-18-2014 |
20140282580 | METHOD AND APPARATUS TO SAVE AND RESTORE SYSTEM MEMORY MANAGEMENT UNIT (MMU) CONTEXTS - A wireless mobile device includes a graphic processing unit (GPU) that has a system memory management unit (MMU) for saving and restoring system MMU translation contexts. The system MMU is coupled to a memory and the GPU. The system MMU includes a set of hardware resources. The hardware resources may be context banks, with each of the context banks having a set of hardware registers. The system MMU also includes a hardware controller that is configured to restore a hardware resource associated with an access stream of content issued by an execution thread of the GPU. The associated hardware resource may be restored from the memory into a physical hardware resource when the hardware resource associated with the access stream of content is not stored within one of the hardware resources. | 09-18-2014 |
20140282581 | METHOD AND APPARATUS FOR PROVIDING A COMPONENT BLOCK ARCHITECTURE - A method, apparatus and computer program product are therefore provided in order to provide a component block architecture for allocation of resources in a data center environment. In this regard, the method, apparatus, and computer program product may identify a set of block attributes for a particular block of one or more applications, and compare the attributes to the available resources of a container. The component block may be allocated to the container based on whether the resources of the container are sufficient to meet the requirements of the component block. | 09-18-2014 |
20140282582 | DETECTING DEPLOYMENT CONFLICTS IN HETEROGENOUS ENVIRONMENTS - Techniques are disclosed for managing deployment conflicts between applications executing in one or more processing environments. A first application is executed in a first processing environment responsive to a request to execute the first application. During execution of the first application, a determination is made to redeploy the first application for execution partially in time on a second processing environment providing a higher capability than the first processing environment in terms of at least a first resource type. A deployment conflict is detected between the first application and at least a second application. | 09-18-2014 |
20140282583 | DYNAMIC MEMORY MANAGEMENT WITH THREAD LOCAL STORAGE USAGE - Methods and arrangements for dynamic memory management. Data are accepted for thread local storage, and memory usage is monitored in thread local storage. A memory block is allocated to thread local storage for storing accepted data, based on the monitored memory usage. Other variants and embodiments are broadly contemplated herein. | 09-18-2014 |
20140282584 | Allocating Accelerators to Threads in a High Performance Computing System - A method of distributing threads among accelerators in a high performance computing system receives a request to assign an accelerator in the computing system to a thread. The request includes a mode indicative of location and exclusivity of the accelerator for use by the thread. The method selects the accelerator according to a processor assigned to the thread. The method also assigns the accelerator to the thread with the exclusivity specified in the request. | 09-18-2014 |
20140282585 | Organizing File Events by Their Hierarchical Paths for Multi-Threaded Synch and Parallel Access System, Apparatus, and Method of Operation - A cloud file event server transmits file events necessary to synchronize a file system of a file share client. A tree queue director circuit receives file events and stores each one into a tree data structure which represents the hierarchical paths of files within the file share client. An event normalization circuit sorts the file events stored at each node into sequential order and moots file events which do not have to be performed because a later file event makes them inconsequential. A thread scheduling circuit assigns a resource to perform file events at a first node in a hierarchical path before assigning one or more resources to a second node which is a child of the first node until interrupted by the tree queue director circuit or until all file events in the tree data structure have been performed. | 09-18-2014 |
20140282586 | PURPOSEFUL COMPUTING - A system, method, and computer-readable storage medium configured to facilitate user purpose in a computing architecture. | 09-18-2014 |
20140282587 | MULTI-CORE BINARY TRANSLATION TASK PROCESSING - Embodiments of techniques and systems associated with binary translation (BT) in computing systems are disclosed. In some embodiments, a BT task to be processed may be identified. The BT task may be associated with a set of code and may be identified during execution of the set of code on a first processing core of the computing device. The BT task may be queued in a queue accessible to a second processing core of the computing device, the second processing core being different from the first processing core. In response to a determination that the second processing core is in an idle state or has received an instruction through an operating system to enter an idle state, at least some of the BT task may be processed using the second processing core. Other embodiments may be described and/or claimed. | 09-18-2014 |
20140282588 | SYSTEM AND SCHEDULING METHOD - A system includes a CPU; an accelerator; a comparing unit that compares a first value that is based on a first processing time period elapsing until the CPU completes a first process and a second processing time period elapsing until the accelerator completes the first process, and a second value that is based on a state of use of a battery driving the CPU and the accelerator; and a selecting unit that selects any one among the CPU and the accelerator, based on a result of comparison by the comparing unit. | 09-18-2014 |
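The selection step in the abstract above — compare a first value based on the CPU and accelerator processing times against a second value based on the battery's state of use, then pick one of the two units — might be modeled as below. The ratio-versus-battery-factor policy is entirely hypothetical; the abstract does not define how the two values are computed.

```python
def select_unit(cpu_time, accel_time, battery_level, threshold=0.2):
    """Choose 'cpu' or 'accelerator' for a process (illustrative policy).
    cpu_time / accel_time: expected processing times on each unit.
    battery_level: remaining charge in [0, 1]."""
    time_ratio = accel_time / cpu_time  # < 1 means the accelerator is faster
    # When the battery is low, demand a larger speedup before offloading,
    # on the assumption that the accelerator draws more power.
    battery_factor = 1.0 if battery_level > threshold else 0.5
    return "accelerator" if time_ratio < battery_factor else "cpu"
```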
20140282589 | QUOTA-BASED ADAPTIVE RESOURCE BALANCING IN A SCALABLE HEAP ALLOCATOR FOR MULTITHREADED APPLICATIONS - One embodiment comprises a hierarchical heap allocator system. The system comprises a system-level allocator for monitoring run-time resource usage information for an application having multiple application threads. The system further comprises a process-level allocator for dynamically balancing resources between the application threads based on the run-time resource usage information. The system further comprises multiple thread-level allocators. Each thread-level allocator facilitates resource allocation and resource deallocation for a corresponding application thread. | 09-18-2014 |
20140282590 | Compute-Centric Object Stores and Methods Of Use - Systems and methods for providing a compute-centric object store. An exemplary method may include receiving a request to perform a compute operation on at least a portion of an object store from a first user, the request identifying parameters of the compute operation, assigning virtual operating system containers to the objects of the object store from a pool of virtual operating system containers. The virtual operating system containers may perform the compute operation on the objects according to the identified parameters of the request. The method may also include clearing the virtual operating system containers and returning the virtual operating system containers to the pool. | 09-18-2014 |
20140282591 | ADAPTIVE AUTOSCALING FOR VIRTUALIZED APPLICATIONS - Virtualized applications are autoscaled by receiving performance data in time-series format from a running virtualized application, computationally analyzing the performance data to determine a pattern therein, and extending the performance data to a time in the future based at least on the determined pattern. The extended performance data is analyzed to determine if resources allocated to the virtualized application are under-utilized or over-utilized, and a schedule for re-allocating resources to the virtualized application based at least in part on a result of the analysis of the extended performance data is created. | 09-18-2014 |
20140282592 | METHOD FOR EXECUTING MULTITHREADED INSTRUCTIONS GROUPED INTO BLOCKS - A method for executing multithreaded instructions grouped into blocks. The method includes receiving an incoming instruction sequence using a global front end; grouping the instructions to form instruction blocks, wherein the instructions of the instruction blocks are interleaved with multiple threads; scheduling the instructions of the instruction block to execute in accordance with the multiple threads; and tracking execution of the multiple threads to enforce fairness in an execution pipeline. | 09-18-2014 |
20140282593 | Scheduling in a multicore architecture - The disclosure relates to scheduling threads in a multicore processor. Executable transactions may be scheduled using at least one distribution queue, which lists executable transactions in order of eligibility for execution, and a multilevel scheduler which comprises a plurality of linked individual executable transaction schedulers. Each of these includes a scheduling algorithm for determining the most eligible executable transaction for execution. The most eligible executable transaction is outputted from the multilevel scheduler to the at least one distribution queue. | 09-18-2014 |
20140289733 | SYSTEM AND METHOD FOR EFFICIENT TASK SCHEDULING IN HETEROGENEOUS, DISTRIBUTED COMPUTE INFRASTRUCTURES VIA PERVASIVE DIAGNOSIS - A system and method schedules jobs in a cluster of compute nodes. A job with an unknown resource requirement profile is received. The job includes a plurality of tasks. Execution of some of the plurality of tasks is scheduled on compute nodes of the cluster with differing capability profiles. Timing information regarding execution time of the scheduled tasks is received. A resource requirement profile for the job is inferred based on the received timing information and the differing capability profiles. Execution of remaining tasks of the job is scheduled on the compute nodes of the cluster using the resource requirement profile. | 09-25-2014 |
20140289734 | CACHE MANAGEMENT IN A MULTI-THREADED ENVIRONMENT - Disclosed here are methods, systems, paradigms and structures for deleting shared resources from a cache in a multi-threaded system. The shared resources can be used by a plurality of requests belonging to multiple threads executing in the system. When requests, such as requests for executing script code, and work items, such as work items for deleting a shared resource, are created, a global sequence number is assigned to each of them. The sequence number indicates the order in which the requests and work items are created. A particular work item can be executed to delete the shared resource if there are no requests having a sequence number less than that of the particular work item executing in the system. However, if there is at least one request with a sequence number less than that of the particular work item executing, the work item is ignored until the request completes executing. | 09-25-2014 |
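The sequence-number gating described in this entry can be sketched in a few lines; the class and method names below are hypothetical illustrations, not taken from the filing:

```python
class SharedCache:
    """Defers delete work items until no request older than them is running."""

    def __init__(self):
        self._seq = 0
        self._active = set()   # sequence numbers of in-flight requests
        self._work = []        # pending (seq, resource) delete work items
        self.deleted = []      # resources actually deleted, in order

    def _next_seq(self):
        self._seq += 1
        return self._seq

    def begin_request(self):
        s = self._next_seq()
        self._active.add(s)
        return s

    def end_request(self, s):
        self._active.discard(s)
        self._drain()          # a finished request may unblock work items

    def enqueue_delete(self, resource):
        self._work.append((self._next_seq(), resource))
        self._drain()

    def _drain(self):
        remaining = []
        for s, r in self._work:
            # execute only if no request with a lesser sequence number runs
            if any(req_seq < s for req_seq in self._active):
                remaining.append((s, r))   # ignore for now; retry later
            else:
                self.deleted.append(r)
        self._work = remaining
```

A delete enqueued while an older request is in flight stays pending and fires as soon as that request finishes.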
20140289735 | CAPACITY MANAGEMENT SUPPORT APPARATUS, CAPACITY MANAGEMENT METHOD AND PROGRAM - A log acquisition unit ( | 09-25-2014 |
20140289736 | MANAGING MULTIPLE SYSTEMS IN A COMPUTER DEVICE - Resources of multiple systems are managed in a computer device. A first processing system having a set of dedicated resources also has a resource manager to manage at least one of the resources. The first processing system is prevented from directly accessing the resources without authorization. A second processing system, connected to the set of dedicated resources, has a supervisor application to grant control of individual resources to the resource manager of the first processing system. A computer program is executed in the first processing system. The supervisor application grants control of at least one resource to the resource manager of the first processing system in a way that is transparent to the computer program executing in the first processing system. | 09-25-2014 |
20140298345 | METHOD FOR ACTIVATING PROCESSOR CORES WITHIN A COMPUTER SYSTEM - A technique for activating processor cores within a computer system is disclosed. Initially, a value representing a number of processor cores to be enabled within the computer system is received. The computer system includes multiple processors, and each of the processors includes multiple processor cores. Next, a scale variable value representing a specific type of tasks to be optimized during an execution of the tasks within the computer system is received. From a pool of available processor cores within the computer system, a subset of processor cores can be selected for activation. The subset of processor cores is activated in order to achieve system optimization during an execution of the tasks. | 10-02-2014 |
20140298346 | MANAGEMENT OF TASK ALLOCATION IN A MULTI-CORE PROCESSING SYSTEM - A system and method for management of task allocation in a multi-core processing system. A controller of the processing unit may, at an initialization stage, determine a number of worker threads to be a prime number not smaller than the product of the number of processing cores and a predetermined factor, and assign a worker identification number (ID) to each worker thread, wherein the worker IDs are consecutive positive integers ranging from zero to the number of workers minus one. At a processing stage, the controller may receive from a dispatcher of the processing system a task associated with a numeric context ID and designate the task to one of the worker threads, wherein the worker ID of the designated worker thread equals the numeric context ID of the task, modulo the number of worker threads. | 10-02-2014 |
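The sizing and designation rules in this entry (a prime worker count no smaller than cores times a factor, and worker ID equal to context ID modulo the worker count) are concrete enough to sketch; the helper names are hypothetical, not from the filing:

```python
def next_prime(n):
    """Smallest prime >= n (trial division; adequate for small worker counts)."""
    def is_prime(k):
        if k < 2:
            return False
        i = 2
        while i * i <= k:
            if k % i == 0:
                return False
            i += 1
        return True
    while not is_prime(n):
        n += 1
    return n

def worker_count(cores, factor):
    # a prime number not smaller than cores * factor
    return next_prime(cores * factor)

def designate_worker(context_id, workers):
    # worker ID = numeric context ID modulo the number of worker threads
    return context_id % workers
```

With 8 cores and a factor of 3, the count rounds 24 up to the prime 29; tasks sharing a context ID always land on the same worker, and the prime modulus spreads consecutive context IDs evenly.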
20140298347 | COMPUTING SYSTEM WITH RESOURCE MANAGEMENT MECHANISM AND METHOD OF OPERATION THEREOF - A computing system includes: an activity-schedule module configured to identify a future activity estimation for representing an activity occurring after a current time; a usage module, coupled to the activity-schedule module, configured to generate a consumption model associated with the future activity estimation for describing a resource; a model generator module, coupled to the usage module, configured to determine a cost model for evaluating an access location; and a selection module, coupled to the model generator module, configured to determine an optimal access selection based on the cost model and the consumption model for displaying on a device. | 10-02-2014 |
20140298348 | PROVIDING A MANAGED BROWSER - Methods, systems, computer-readable media, and apparatuses for providing a managed browser are presented. In various embodiments, a computing device may load a managed browser. The managed browser may, for instance, be configured to provide a managed mode in which one or more policies are applied to the managed browser, and an unmanaged mode in which such policies might not be applied and/or in which the browser might not be managed by at least one device manager agent running on the computing device. Based on device state information and/or one or more policies, the managed browser may switch between the managed mode and the unmanaged mode, and the managed browser may provide various functionalities, which may include selectively providing access to enterprise resources, based on such state information and/or the one or more policies. | 10-02-2014 |
20140298349 | System and Method for Managing Energy Consumption in a Compute Environment - Disclosed are systems and methods of performing power cap processing in a compute environment. The method includes determining whether one of committed resources and dedicated resources in a compute environment exceeds a threshold value for a job. If the threshold value is exceeded, the method includes preempting processing of the job in the compute environment by migrating the job to new compute resources or performing a power reduction action associated with the job, such as slowing down a processor associated with the job or cancelling the job. When such a power reduction action is taken, reservations associated with other jobs may also be adjusted. | 10-02-2014 |
20140298350 | DISTRIBUTED PROCESSING SYSTEM - A management node includes a distributed task management unit that divides a task including a plurality of processing targets and allocates the task to a plurality of execution nodes, and an execution status information memory update unit that updates execution status information of the task in accordance with execution status update requests from the execution nodes. Based on a first period of time required for processing the processing targets of a unit amount by the execution node and a second period of time required for processing the execution status update request by the management node, the distributed task management unit determines the amount of the task allocated to each of the execution nodes such that a difference in the completion time of the task allocated to any two execution nodes, among the execution nodes, becomes greater than the second period of time. | 10-02-2014 |
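One reading of the allocation rule in this entry (stagger the nodes' completion times by more than the management node's update-processing time) reduces to a small linear solve. The sketch below is purely illustrative and assumes a known per-unit processing time for each node:

```python
def allocate_amounts(total, t1, t2, eps=1e-6):
    """Split `total` work units across nodes so that any two nodes'
    completion times differ by more than t2.

    t1:  per-unit processing time of each execution node (the 'first period')
    t2:  management-node time to process one status update (the 'second period')
    """
    d = t2 + eps                       # required stagger between completions
    n = len(t1)
    inv = [1.0 / t for t in t1]
    # node i finishes at time base + i*d; choose base so amounts sum to total
    base = (total - sum(i * d * inv[i] for i in range(n))) / sum(inv)
    return [(base + i * d) * inv[i] for i in range(n)]
```

Node i's completion time is its amount times t1[i], which equals base + i*d by construction, so consecutive completions are separated by slightly more than t2 and status updates never pile up at the management node.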
20140304711 | METHOD FOR PROVIDING CONTROLLED ACCESS TO HARDWARE RESOURCES ON A MULTIMODE DEVICE - A software environment ( | 10-09-2014 |
20140310719 | SYSTEM AND METHOD FOR CONTEXT-AWARE ADAPTIVE COMPUTING - The present disclosure relates to systems and methods for context-aware adaptive computing. In one embodiment, the present disclosure includes a method comprising receiving a request at a first information handling system (IHS) to perform an application computation. The method also includes determining a user's context, the user operating the first IHS, and ascertaining a battery state of the first IHS. The method further includes allocating the application computation between the first IHS and a second IHS based at least on the user's context and the battery state of the first IHS. The present disclosure also includes associated systems and apparatuses. | 10-16-2014 |
20140310720 | APPARATUS AND METHOD OF PARALLEL PROCESSING EXECUTION - An apparatus and method of parallel processing execution that executes a job through distributing the job to a plurality of calculators, based on a calculation property of the job. The apparatus for parallel processing execution may include a plurality of calculators to calculate a job comprising a plurality of tasks of a process, and a distributor to distribute the job to the plurality of calculators based on a calculation property of the job, wherein the plurality of calculators includes a first calculator to process a job through a controlled calculation, and a second calculator to process a job through a large volume calculation. | 10-16-2014 |
20140310721 | REDUCING THE NUMBER OF READ/WRITE OPERATIONS PERFORMED BY A CPU TO DUPLICATE SOURCE DATA TO ENABLE PARALLEL PROCESSING ON THE SOURCE DATA - Methods and apparatuses to reduce the number of read/write operations performed by a CPU may involve duplicating source data to enable parallel processing on the source data. A memory controller may be configured to duplicate data written to a first buffer to one or more duplicate buffers that are allocated to one or more processing threads, respectively. In some implementations, the one or more duplicate buffers are dedicated buffers, and the addresses of the first buffer and the one or more duplicate buffers are stored in a register of memory controller. | 10-16-2014 |
20140325520 | APPLICATION THREAD TO CACHE ASSIGNMENT - Techniques are described for assigning an application thread to a cache. A newly created application thread may be assigned to a plurality of caches. The cache assignment that optimizes performance may be determined. The newly created application thread may be associated with the determined cache. | 10-30-2014 |
20140325521 | Method and System for Allocating Resources to Tasks in a Build Process - Allocating resources for tasks in a build process is provided. The build process includes a plurality of tasks. Task metadata is obtained, comprising a task type of a second task in the plurality of tasks. Execution metadata is obtained, comprising an execution result of a first task in the plurality of tasks. The second task depends on the execution result of the first task. A resource required by the second task is determined according to the task metadata and the execution metadata. | 10-30-2014 |
20140325522 | METHOD AND DEVICE FOR SCHEDULING VIRTUAL DISK INPUT AND OUTPUT PORTS - Embodiments of the present application relate to a method for scheduling virtual disk input and output (I/O) ports, a device for scheduling virtual disk I/O ports, and a computer program product for scheduling virtual disk I/O ports. A method for scheduling virtual disk I/O ports is provided. The method includes assigning a set of service quality ratings to a corresponding set of virtual disk I/O ports based on a set of reading-writing bandwidth quotas associated with the corresponding set of virtual disk I/O ports in a physical machine, determining a total forecast value of a data bandwidth to be used by reading-writing requests and determining virtual disk I/O ports, allocating reading-writing bandwidth limits to the virtual disk I/O ports, and scheduling virtual disk I/O ports on the physical machine. | 10-30-2014 |
20140325523 | SCHEDULING COMPUTER PROGRAM JOBS - A method and system for scheduling, for periodic execution, a program requiring a computer hardware resource for execution. A computer determines and records historic utilization or availability of the resource multiple times a day. The computer subsequently receives a request to schedule the program for execution on the day at a specified time and (a) daily, (b) weekly, or (c) monthly at the specified time, and in response, the computer determines if there has been historical availability of the resource exceeding a predetermined availability threshold on the day at approximately the specified time to execute the program, and if so, schedules the program for execution on the day at the specified time and (i) daily, (ii) weekly, or (iii) monthly thereafter, as requested, and if not, does not schedule the program for execution on the day at the specified time or (i) daily, (ii) weekly, or (iii) monthly thereafter, as requested. | 10-30-2014 |
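The scheduling decision in this entry (admit a periodic job only when recorded availability at that day and time exceeds a threshold) can be sketched as follows; the history layout and function name are assumptions for illustration:

```python
def should_schedule(history, day, slot, threshold):
    """Decide whether to schedule a program at (day, slot).

    history maps (day, slot) -> list of past availability samples in [0, 1],
    recorded multiple times a day; schedule only if the historical average
    availability at approximately that time exceeds the threshold.
    """
    samples = history.get((day, slot), [])
    if not samples:
        return False            # no recorded history: do not schedule
    return sum(samples) / len(samples) > threshold
```

A caller would run this once per request and, on True, register the job for the requested daily, weekly, or monthly cadence.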
20140331235 | RESOURCE ALLOCATION APPARATUS AND METHOD - The present invention relates to a resource allocation apparatus and method. The resource allocation apparatus includes a job information management unit for managing job characteristic information required to execute jobs input by a user. A resource form selection unit selects an initial resource allocation form required to execute each job, based on the job characteristic information. A resource allocation unit allocates resources required to execute the job based on the initial resource allocation form. | 11-06-2014 |
20140337853 | Resource And Core Scaling For Improving Performance Of Power-Constrained Multi-Core Processors - A multi-core processor provides circuitry for jointly scaling the number of operating cores and the amount of resources per core in order to maximize processing performance in a power-constrained environment. Such scaling is advantageously provided without the need for scaling voltage and frequency. Selection of the number of operating cores and the amount of resources per core is made by examining the degree of instruction and thread level parallelism available for a given application. Accordingly, performance counters (and other characteristics) implemented by a processor may be sampled on-line (in real time) and/or performance counters for a given application may be profiled and characterized off-line. As a result, improved processing performance may be achieved despite decreases in core operating voltages and increases in technology process variability over time. | 11-13-2014 |
20140344826 | Architecture for Efficient Computation of Heterogeneous Workloads - Embodiments of a workload management architecture may include an input configured to receive workload data for a plurality of commands, a DMA block configured to divide the workload data for each command of the plurality of commands into a plurality of job packets, a job packet manager configured to assign one of the plurality of job packets to one of a plurality of fixed function engines (FFEs) coupled with the job packet manager, where each of the plurality of FFEs is configured to receive one or more of the plurality of job packets and generate one or more output packets based on the workload data in the received one or more job packets. | 11-20-2014 |
20140344827 | SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR SCHEDULING A TASK TO BE PERFORMED BY AT LEAST ONE PROCESSOR CORE - A system, method, and computer program product are provided for scheduling a task to be performed by at least one processor core. In operation, a task to be performed by at least one of a plurality of processor cores is identified. Additionally, a temperature of each of the plurality of processor cores is determined. Further, a first processor core of the plurality of processor cores is identified based on at least the determined temperature of each of the plurality of processor cores and, in one embodiment, spatial information associated with each of the plurality of processor cores. Still yet, at least a portion of the task is scheduled to be performed by the first processor core. | 11-20-2014 |
20140344828 | ASSIGNING LEVELS OF POOLS OF RESOURCES TO A SUPER PROCESS HAVING SUB-PROCESSES - Provided are a computer program product, system, and method for assigning levels of pools of resources in an operating system to a super process having sub-processes. A plurality of first level pools of resources are reserved in the operating system for first level processes to perform a first level operation and invoke at least one second level process to perform a second level operation. A plurality of second level pools of resources are reserved in the operating system for second level processes. One of the second level pools of resources assigned to one of the second level processes is released and available to assign to another second level process when the second level process completes the second level operation for which it was invoked. | 11-20-2014 |
20140344829 | DATA PROCESSING METHOD OF SHARED RESOURCE ALLOCATED TO MULTI-CORE PROCESSOR, ELECTRONIC APPARATUS WITH MULTI-CORE PROCESSOR AND DATA OUTPUT APPARATUS - A data processing method for a shared resource allocated to a multi-core processor includes receiving a first data stream from a first processor; when a second data stream is received from a second processor before processing of the first data stream is complete, placing the second data stream ahead of the portion of the first data stream that is on standby; and processing the second data stream and the first data stream on standby in sequence. | 11-20-2014 |
20140351821 | Strategic Placement of Jobs for Spatial Elasticity in a High-Performance Computing Environment - Accepting a job having a job size representing a number or quantity of processors; computing an expected size, and a standard deviation in size, for the accepted job; adding the expected size to the standard deviation in size to determine a sum; comparing the sum to a number or quantity of available clusters at each of a plurality of non-leaf nodes of a tree representing a high-performance computing environment; and when the number or quantity of available clusters is more than the sum at a sub-tree of the tree and, going down one level further in the sub-tree, the number of available clusters is less than the sum, selecting the sub-tree for the accepted job such that the accepted job is placed on one or more clusters associated with the selected sub-tree. | 11-27-2014 |
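The selection rule in this entry (a sub-tree holding at least the expected job size plus one standard deviation in available clusters, whose children each fall short of that sum) maps to a short recursive search. The node layout below is a hypothetical illustration:

```python
class Node:
    """Non-leaf node of the cluster tree."""

    def __init__(self, available, children=()):
        self.available = available      # clusters available in this sub-tree
        self.children = list(children)

def select_subtree(node, need):
    """Deepest sub-tree with >= need available clusters whose children each
    have fewer than need, where need = expected job size + std deviation."""
    if node.available < need:
        return None
    for child in node.children:
        found = select_subtree(child, need)
        if found is not None:
            return found                # a deeper sub-tree still fits
    return node                         # fits here, but not one level down
```

Placing the job in the tightest sub-tree that fits leaves larger sub-trees free, which is what gives the placement its spatial elasticity.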
20140351822 | CONTROLLING SOFTWARE PROCESSES THAT ARE SUBJECT TO COMMUNICATIONS RESTRICTIONS - Controlling a software process by causing the execution of a first software process on a computer, where the first software process is configured to exclusively access a resource on the computer, causing the execution of a second software process on the computer when the first software process has exclusive access to the resource, where the second software process is configured to perform a first predefined action that is independent of the second software process accessing the resource, attempt to access the resource, and perform a second predefined action that is dependent on the second software process accessing resource, and causing the first software process to terminate its exclusive access to the resource, thereby causing the second software process to access the resource and perform the second predefined action. | 11-27-2014 |
20140351823 | Strategic Placement of Jobs for Spatial Elasticity in a High-Performance Computing Environment - Accepting a job having a job size representing a number or quantity of processors; computing an expected size, and a standard deviation in size, for the accepted job; adding the expected size to the standard deviation in size to determine a sum; comparing the sum to a number or quantity of available clusters at each of a plurality of non-leaf nodes of a tree representing a high-performance computing environment; and when the number or quantity of available clusters is more than the sum at a sub-tree of the tree and, going down one level further in the sub-tree, the number of available clusters is less than the sum, selecting the sub-tree for the accepted job such that the accepted job is placed on one or more clusters associated with the selected sub-tree. | 11-27-2014 |
20140351824 | System and Method of Interfacing a Workload Manager and Scheduler with an Identity Manager - A system, method and computer-readable media for managing a compute environment are disclosed. The method includes importing identity information from an identity manager into a module performs workload management and scheduling for a compute environment and, unless a conflict exists, modifying the behavior of the workload management and scheduling module to incorporate the imported identity information such that access to and use of the compute environment occurs according to the imported identity information. The compute environment may be a cluster or a grid wherein multiple compute environments communicate with multiple identity managers. | 11-27-2014 |
20140359633 | THREAD ASSIGNMENT FOR POWER AND PERFORMANCE EFFICIENCY USING MULTIPLE POWER STATES - A method is performed in a computing system that includes a plurality of processing nodes of multiple types configurable to run in multiple performance states. In the method, an application executes on a thread assigned to a first processing node. Power and performance of the application on the first processing node is estimated. Power and performance of the application in multiple performance states on other processing nodes of the plurality of processing nodes besides the first processing node is also estimated. It is determined that the estimated power and performance of the application on a second processing node in a respective performance state of the multiple performance states is preferable to the power and performance of the application on the first processing node. The thread is reassigned to the second processing node, with the second processing node in the respective performance state. | 12-04-2014 |
20140359634 | INFORMATION PROCESSING METHOD AND INFORMATION PROCESSING APPARATUS - An information processing method includes acquiring sets of execution information of a plurality of information processes executed by a first information processing apparatus, converting the usage time in each set of execution information into usage time on a second information processing apparatus, executing a resource allocation process of allocating the resource of the second information processing apparatus to a first information process during the converted usage time, allocating the resource of the second information processing apparatus to a second information process for idle time not allocated to the first information process during the converted usage time, and accumulating virtual run time of the allocated resources, and estimating execution time when executing the plurality of information processes on the second information processing apparatus on the basis of the accumulated virtual run time. | 12-04-2014 |
20140366033 | DATA PROCESSING SYSTEMS - When an atomic operation is to be executed for a thread group by an execution stage of a data processing system, it is determined whether there is a set of threads for which the atomic operation for the threads accesses the same memory location. If so, the arithmetic operation for the atomic operation is performed for the first thread in the set of threads using an identity value for the arithmetic operation for the atomic operation and the first thread's register value for the atomic operation, and is performed for each other thread in the set of threads using the thread's register value for the atomic operation and the result of the arithmetic operation for the preceding thread in the set of threads, to thereby generate for the final thread in the identified set of threads a combined result of the arithmetic operation for the set of threads. | 12-11-2014 |
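The serialization described in this entry (the first thread combines an identity value with its register value, and each later thread combines the preceding result with its own) is an inclusive scan. A generic Python rendering, with the operation and identity supplied by the caller as assumptions:

```python
def atomic_scan(register_values, op, identity):
    """Inclusive scan over a thread set's register values.

    Thread 0 combines the identity with its value; each subsequent thread
    combines the preceding thread's result with its own value. The final
    entry is the combined result for the whole set of threads.
    """
    results = []
    acc = identity
    for v in register_values:
        acc = op(acc, v)
        results.append(acc)
    return results
```

For an atomic add (identity 0), the group then needs only a single memory update with the final combined value instead of one read-modify-write per thread.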
20140366034 | IMAGE PROCESSING APPARATUS, CONTROL METHOD THEREOF AND STORAGE MEDIUM - An image processing apparatus, in response to a start-up request for an application, reads a class file of a class of the application, adds, at the beginning of a method included in the read class file, code for recording application information indicating the application to a thread, and loads the class. Furthermore, the image processing apparatus, during execution of the method included in the read class file, allocates memory or a file size to be used for an object to be generated and records the application information recorded in the thread to the allocated memory or file size, together with generating the object and managing application information of the generated object, in association with memory size or disk usage. | 12-11-2014 |
20140373024 | REAL TIME PROCESSOR - One aspect of the disclosure provides an embodiment of a method of processing data in a processor in a shared resource computer system, where a processing module shares at least one resource with at least one other processing module. The method, implemented by a processor, comprises receiving a resource allocation for executing code and monitoring a resource related condition of the processor in the execution of the code at a current resource level. The method further comprises recognizing a resource constraint when the resource allocation is insufficient to meet real time constraints for executing the code at the current resource level and modifying operation of the processor responsive to recognizing the resource constraint to execute the code to meet the real time constraints at a cost of increased power consumption or reduced quality of output. | 12-18-2014 |
20140373025 | METHOD FOR ALLOCATING PROCESS IN MULTI-CORE ENVIRONMENT AND APPARATUS THEREFOR - Disclosed are a method for allocating processes and an apparatus for allocating processes. The method may comprise determining a core among the plurality of cores to execute a requested process based on the performance information of the plurality of cores; and allocating the requested process to the determined core. According to the present invention, processes are allocated to cores according to a performance demanded by each of the processes so that processing speed of the processes may be enhanced and power consumption of each of the cores may be reduced. | 12-18-2014 |
20140373026 | Method, Apparatus and System for Coordinating Execution of Tasks in a Computing System Having a Distributed Shared Memory - A task coordination apparatus in a computing system having a distributed shared memory (DSM) coordinates the execution of two related tasks, wherein the second task has an execution variable which is modified by the first task. The task coordination apparatus creates a snapshot of a memory space in the distributed shared memory assigned to the first task and a cooperation watching area of the second task. The cooperation watching area contains a memory address pointing to a location where the execution variable of the second task is stored in the memory space assigned to the first task. The first task is allocated to a first computing node for execution, and the memory space assigned to it is updated according to the execution result. After updating the memory space, the second task is allocated to a second computing node for execution using the execution variable updated by the first task. | 12-18-2014 |
20140380329 | CONTROLLING SPRINTING FOR THERMAL CAPACITY BOOSTED SYSTEMS - A method and apparatus are described for performing sprinting in a processor. An analyzer in the processor may monitor thermal capacity remaining in the processor while not sprinting. When the remaining thermal capacity is sufficient to support sprinting, the analyzer may perform sprinting of a new workload when a benefit derived by sprinting the new workload exceeds a threshold and does not cause the remaining thermal capacity in the processor to be exhausted. The analyzer may perform sprinting of the new workload in accordance with sprinting parameters determined for the new workload. The analyzer may continue to monitor the remaining thermal capacity while not sprinting when the benefit derived by sprinting the new workload does not exceed the threshold. | 12-25-2014 |
20140380330 | TOKEN SHARING MECHANISMS FOR BURST-MODE OPERATIONS - Methods and apparatus for token-sharing mechanisms for burst-mode operations are disclosed. A first and a second token bucket are respectively configured for admission control at a first and a second work target. A number of tokens to be transferred between the first bucket and the second bucket, as well as the direction of the transfer, are determined, for example based on messages exchanged between the work targets. The token transfer is initiated, and admission control decisions at the work targets are made based on the token population resulting from the transfer. | 12-25-2014 |
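Token transfer between two admission-control buckets, as described in this entry, can be sketched in a few lines; the capacities and the transfer protocol shown are assumptions for illustration, not details from the filing:

```python
class TokenBucket:
    """Admission-control bucket at one work target."""

    def __init__(self, capacity, tokens):
        self.capacity = capacity
        self.tokens = tokens

    def admit(self, cost=1):
        # admit a request only if enough tokens remain
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

def transfer(src, dst, requested):
    """Move up to `requested` tokens from src to dst without overfilling dst;
    returns the number actually transferred."""
    n = min(requested, src.tokens, dst.capacity - dst.tokens)
    src.tokens -= n
    dst.tokens += n
    return n
```

A bursting work target would request tokens from a lightly loaded peer, then make its admission decisions against the post-transfer population.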
20140380331 | SYSTEM AND METHOD FOR RECEIVING ANALYSIS REQUESTS AND CONFIGURING ANALYTICS SYSTEMS - A method for analyzing data is disclosed that includes receiving an analysis request to analyze selected data corresponding to one or more monitored assets, wherein the analysis request includes one or more parameters corresponding to performance categories of computing resources for processing the analysis request, the performance categories include at least one of a time for processing the analysis request or a cost for processing the analysis request; determining a computing resource allocation plan for processing the analysis request based on the one or more parameters; and processing the analysis request using the determined computing resource allocation plan to provide analysis results. Also disclosed is an analytic router that includes a mapper, an estimator, an optimizer, and a resource provisioner. | 12-25-2014 |
20140380332 | Managing Service Level Objectives for Storage Workloads - Described herein is a system and method for dynamically managing service-level objectives (SLOs) for workloads of a cluster storage system. Proposed states/solutions of the cluster may be produced and evaluated to select one that achieves the SLOs for each workload. A planner engine may produce a state tree comprising nodes, each node representing a proposed state/solution. New nodes may be added to the state tree based on new solution types that are permitted, or nodes may be removed based on a received time constraint for executing a proposed solution or a client certification of a solution. The planner engine may call an evaluation engine to evaluate proposed states, the evaluation engine using an evaluation function that considers SLO, cost, and optimization goal characteristics to produce a single evaluation value for each proposed state. The planner engine may call a modeler engine that is trained using machine learning techniques. | 12-25-2014 |
20150020076 | METHOD TO APPLY PERTURBATION FOR RESOURCE BOTTLENECK DETECTION AND CAPACITY PLANNING - Perturbation is induced by varying a supply amount of the resource type in the system, and performance of the software entity is measured at multiple variation levels of the supply amount of the resource type. A model may be built that characterizes a relationship between the measured performance and the variation levels. The model may be applied to detect the resource bottleneck. The model may also be applied for capacity planning. | 01-15-2015 |
20150020077 | Resource Restriction Systems and Methods - Resource restrictions are associated with a user identifier. A resource restriction agent receives operating system calls related to resources and provides resource request data to a resource agent. The resource agent determines whether the resource is restricted based on the resource request data and resource restriction data and generates access data based on the determination. The resource restriction agent grants or denies the system call based on the access data. | 01-15-2015 |
20150026696 | SYSTEMS AND METHODS FOR SCHEDULING VEHICLE-RELATED TASKS - A system and method that include a task identification module configured to identify one or more tasks to be performed with respect to a vehicle. The task identification module outputs task identification data that relates to the task(s) to be performed with respect to the vehicle. A task estimation module estimates a time of completion for each of the one or more tasks. The task estimation module outputs task estimation data that relates to the time of completion for each of the task(s). The system and method may include a scheduling module to generate a vehicle task schedule based on one or both of the task identification data and the task estimation data. | 01-22-2015 |
20150033238 | SYSTEM COMPRISING A CLUSTER OF SHARED RESOURCES COMMON TO A PLURALITY OF RESOURCE AND TASK MANAGERS - A system is provided including at least two resource and task managers which are independent of each other; a cluster of shared resources common to these managers; software that runs in the background interfaced with the managers in a manner so as to appropriately distribute the resources of the cluster between the managers on the basis of one or more distribution parameters. | 01-29-2015 |
20150040135 | THRESHOLDING TASK CONTROL BLOCKS FOR STAGING AND DESTAGING - For thresholding task control blocks (TCBs) for staging and destaging, a first tier of TCBs is reserved for guaranteeing a minimum number of TCBs for staging and destaging for storage ranks. An additional number of requested TCBs are apportioned from a second tier of TCBs to each of the storage ranks based on a scaling factor that is calculated at predefined time intervals. | 02-05-2015 |
20150040136 | SYSTEM CONSTRAINTS-AWARE SCHEDULER FOR HETEROGENEOUS COMPUTING ARCHITECTURE - Processors, systems, and methods are arranged to schedule tasks on heterogeneous processor cores. For example, a scheduler is arranged to perform a heuristics based function for allocating operating system tasks to the processor cores. The system includes a hint generator providing a system constraints-aware function that biases the scheduler to select a processor core depending on the change in one or more performance constraint parameters. | 02-05-2015 |
20150040137 | CO-ALLOCATING A RESERVATION SPANNING DIFFERENT COMPUTE RESOURCES TYPES - A system and method of reserving resources in a compute environment are disclosed. The method embodiment comprises receiving a request for resources within a computer environment, determining at least one completion time associated with at least one resource type required by the request, and reserving resources within the computer environment based on the determination of at least the completion time. A scaled wall clock time on a per resource basis may also be used to determine what resources to reserve. The system may determine whether to perform a start time analysis, a completion time analysis, or a hybrid analysis in the process of generating a co-allocation map between a first type of resource and a second type of resource in preparation for reserving resources according to the generated co-allocation map. | 02-05-2015 |
20150046927 | Allocating Processor Resources - Disclosed herein is a method of allocating resources of a processor executing a first real-time code component for processing a first sequence of data portions and a second code component for processing a second sequence of data portions. At least the second code component has a configurable complexity. The method comprises estimating a first real-time performance metric for the first code component, and configuring the complexity of the second code component based on the estimated first real-time performance metric. | 02-12-2015 |
20150046928 | SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR DYNAMICALLY INCREASING RESOURCES UTILIZED FOR PROCESSING TASKS - Mechanisms and methods are provided for dynamically increasing resources utilized for processing tasks. These mechanisms and methods for dynamically increasing resources utilized for processing tasks can enable embodiments to adjust processing power utilized for task processing. Further, adjusting processing power can ensure that quality of service goals set for processing tasks are achieved. | 02-12-2015 |
20150052535 | INTEGRATED COMPUTER SYSTEM AND ITS CONTROL METHOD - A computer system in which a server and a storage apparatus are integrated and operated is designed so that multi-tenancy can be favorably realized. | 02-19-2015 |
20150058859 | Deferred Execution in a Multi-thread Safe System Level Modeling Simulation - Methods, systems, and machine readable medium for multi-thread safe system level modeling simulation (SLMS) of a target system on a host system. An example of a SLMS is a SYSTEMC simulation. During the SLMS, SLMS processes are executed in parallel via a plurality of threads. SLMS processes represent functional behaviors of components within the target system, such as functional behaviors of processor cores. Deferred execution may be used to defer execution of operations of SLMS processes that access a shared resource. Multi-thread safe direct memory interface (DMI) access may be used by a SLMS process to access a region of the memory in a multi-thread safe manner. Access to regions of the memory may also be guarded if they are at risk of being in a transient state when being accessed by more than one SLMS process. | 02-26-2015 |
20150058860 | PARALLEL COMPUTER SYSTEM, CONTROLLING METHOD FOR PARALLEL COMPUTER SYSTEM, AND STORAGE MEDIUM STORING CONTROLLING PROGRAM FOR MANAGEMENT APPARATUS - A node state storage unit stores therein information about a free state of each of the computation nodes. A search data storage unit has a data structure in which a state where the X-axis is crossed is developed into a virtual X-axis provided on the right end of the X-axis. By referring to the node state storage unit, a searching unit searches for the number of successive free nodes in the increasing directions of the X-axis including the virtual X-axis and the Y-axis, while using the computation node at each of the X-Y coordinate positions as a starting point, and writes a search result into the search data storage unit. | 02-26-2015 |
20150058861 | CPU SCHEDULER CONFIGURED TO SUPPORT LATENCY SENSITIVE VIRTUAL MACHINES - A host computer has one or more physical central processing units (CPUs) that support the execution of a plurality of containers, where the containers each include one or more processes. Each process of a container is assigned to execute exclusively on a corresponding physical CPU when the corresponding container is determined to be latency sensitive. The assignment of a process to execute exclusively on a corresponding physical CPU includes the migration of tasks from the corresponding physical CPU to one or more other physical CPUs of the host system, and the directing of task and interrupt processing to the one or more other physical CPUs. Tasks of the process corresponding to the container are then executed on the corresponding physical CPU. | 02-26-2015 |
20150058862 | HIGH PERFORMANCE LOCKS - Systems and methods of enhancing computing performance may provide for detecting a request to acquire a lock associated with a shared resource in a multi-threaded execution environment. A determination may be made as to whether to grant the request based on a context-based lock condition. In one example, the context-based lock condition includes a lock redundancy component and an execution context component. | 02-26-2015 |
20150067694 | CONCURRENT COMPUTING WITH REDUCED LOCKING REQUIREMENTS FOR SHARED DATA - Where data are shared by multiple computer processing threads, modifying the data by determining whether modifying data associated with a first computer processing thread violates a constraint associated with the data, and responsive to determining that modifying the data associated with the computer processing thread violates the constraint associated with the data, using the data associated with the first computer processing thread to modify the data shared by the multiple computer processing threads that includes the first computer processing thread, where the constraint associated with the data associated with the first computer processing thread represents a portion of a tolerance value that is associated with the data shared by the multiple computer processing threads and that is divided among multiple constraints, where each of the constraints is associated with a different one of the multiple computer processing threads. | 03-05-2015 |
20150067695 | INFORMATION PROCESSING SYSTEM AND GRAPH PROCESSING METHOD - The present invention solves the aforementioned problem with a parallel computer system that performs a plurality of processes to each of which a memory space is allocated by arranging information of graph vertices in a first memory space allocated to a first process and arranging edge information of the graph vertices in a second memory space allocated to a second process. | 03-05-2015 |
20150074677 | LOAD ADAPTIVE PIPELINE - A load adaptive pipeline system includes a data recovery pipeline configured to transfer data between a memory and a host. The pipeline includes a plurality of resources, one or more of the plurality of resources in the pipeline having multiple resource components available for allocation. The system includes a pipeline controller configured to assess at least one parameter affecting data transfer through the pipeline. The pipeline controller is configured to allocate resource components to the one or more resources in the pipeline in response to assessment of the at least one data transfer parameter. | 03-12-2015 |
20150074678 | DEVICE AND METHOD FOR AUTOMATING A PROCESS OF DEFINING A CLOUD COMPUTING RESOURCE - An illustrative cloud computing manager device includes data storage and at least one processor that facilitates defining at least one resource to be accessible through the cloud. The processor is configured to identify descriptor information as provided by a user. The descriptor information indicates a plurality of attributes of the resource including any particular attributes that are particular to the resource. The processor is configured to automatically generate at least one of the attributes with a plurality of flavors based on the identified descriptor information. Automatically generating at least one of the attributes with a plurality of flavors according to the illustrative example reduces the amount of time and effort required by an individual who wishes to define the resource for the cloud computing system. | 03-12-2015 |
20150074679 | Dynamic Scaling for Multi-Tiered Distributed Computing Systems - In one embodiment, a method is described. The method includes: monitoring workloads of a plurality of application classes, each of the application classes describing services provided by one or more applications in a multi-tiered system and comprising a plurality of instantiated execution resources; estimating, for each of the application classes, a number of execution resources able to handle the monitored workloads, to simultaneously maintain a multi-tiered system response time below a determined value and minimize a cost per execution resource; and dynamically adjusting the plurality of instantiated execution resources for each of the application classes based on the estimated number of execution resources. | 03-12-2015 |
20150074680 | METHOD AND APPARATUS FOR ASYNCHRONOUS PROCESSOR WITH A TOKEN RING BASED PARALLEL PROCESSOR SCHEDULER - A method of operating a clock-less asynchronous processing system comprising a plurality of successive asynchronous processing components. The method comprises providing a first token signal path in the plurality of processing components to allow propagation of a token through the processing components. Possession of the token by one of the processing components enables the processing component to conduct a transaction with a resource component that is shared among the processing components. The method comprises propagating the token from one processing component to another processing component along the token signal path. | 03-12-2015 |
20150074681 | SCHEDULING PARALLEL DATA TASKS - A method for allocating parallel, independent, data tasks includes receiving data tasks, each of the data tasks having a penalty function, determining a generic ordering of the data tasks according to the penalty functions, wherein the generic ordering includes solving an aggregate objective function of the penalty functions, the method further including determining a schedule of the data tasks given the generic ordering, which packs the data tasks to be performed. | 03-12-2015 |
20150082317 | TECHNIQUES FOR DISTRIBUTED PROCESSING TASK PORTION ASSIGNMENT - Various embodiments are generally directed to techniques for assigning portions of a task among individual cores of one or more processor components of each processing device of a distributed processing system. An apparatus to assign processor component cores to perform task portions includes a processor component; an interface to couple the processor component to a network to receive data that indicates available cores of base and subsystem processor components of processing devices of a distributed processing system, the subsystem processor components made accessible on the network through the base processor components; and a core selection component for execution by the processor component to select cores from among the available cores to execute instances of task portion routines of a task based on a selected balance point between compute time and power consumption needed to execute the instances of the task portion routines. Other embodiments are described and claimed. | 03-19-2015 |
20150082318 | DATACENTER RESOURCE ALLOCATION - Technologies and implementations for allocating datacenter resources are generally disclosed. | 03-19-2015 |
20150089511 | ADAPTIVE PARALLELIZATION FOR MULTI-SCALE SIMULATION - Roughly described, a task control system for managing multi-scale simulations receives a case/task list which identifies cases to be evaluated, at least one task for each of the cases, and dependencies among the tasks. A module allocates available processor cores to at least some of the tasks, constrained by the dependencies, and initiates execution of the tasks on allocated cores. A module, in response to completion of a particular one of the tasks, determines whether or not the result of the task warrants stopping or pruning tasks, and if so, then terminates or prunes one or more of the uncompleted tasks in the case/task list. A module also re-allocates available processor cores to pending not-yet-executing tasks in accordance with time required to complete the tasks and constrained by the dependencies, and initiates execution of the tasks on allocated cores. | 03-26-2015 |
20150095917 | DISTRIBUTED UIMA CLUSTER COMPUTING (DUCC) FACILITY - A system for processing analytics on a cluster of computing resources may receive a user request to process a Job, Service or Reservation, and may include an Orchestrator, Resource Manager, Process Manager, and one or more Agents and Job Drivers, which together deploy the Job onto one or more nodes in the cluster for parallelized processing of Jobs and their associated work items. | 04-02-2015 |
20150095918 | SYSTEM AND METHOD FOR THREAD SCHEDULING ON RECONFIGURABLE PROCESSOR CORES - Systems and methods for efficiently utilizing reconfigurable processor cores. An example processing system includes, for example, a control register comprising a plurality of inhibit bits, each inhibit bit indicating whether a corresponding processor core is allowed to merge with other processor cores; and dynamic core reallocation logic to temporarily merge a first processor core and a second processor core to speed execution of a first thread executed on the first processor core responsive to determining that a second thread executed on the second processor core has completed execution prior to a quantum associated with the second thread being reached and to determining that the inhibit bits indicate that the first and second cores may be merged. | 04-02-2015 |
20150095919 | METHODS AND SYSTEM FOR SWAPPING MEMORY IN A VIRTUAL MACHINE ENVIRONMENT - In this disclosure, techniques are described for more efficiently sharing resources across multiple virtual machine instances. For example, techniques are disclosed for allowing additional virtual machine instances to be supported by a single computing system by more efficiently allocating memory to virtual machine instances by providing page swapping in a virtualized environment and/or predictive page swapping. In one embodiment, a virtual memory manager predictively swaps pages in and/or out of a paging pool based on information from a central processing unit (“CPU”) scheduler. In one embodiment, the CPU scheduler provides scheduling information for virtual machine instances to the virtual memory manager, where the scheduling information allows the virtual memory manager to determine when a virtual machine is scheduled to become active or inactive. The virtual memory manager can then swap-in or swap-out memory pages. | 04-02-2015 |
20150095920 | DOUBLE PROCESSING OFFLOADING TO ADDITIONAL AND CENTRAL PROCESSING UNITS - A data-processing system (DTS) includes a central hardware unit (CPU) and an additional hardware unit (HW), the central hardware unit (CPU) being adapted to execute a task by a processing thread (T | 04-02-2015 |
20150100968 | Operating Programs on a Computer Cluster - A mechanism is provided for operating programs on a computer cluster comprising cluster resources. The cluster resources comprise non-virtual real hardware resources with variable configurations and virtual resources. Each cluster resource has a configuration description and a type. Each type has a unique type identification and descriptions of operations that can be performed by the cluster resource of that type. Each program is operable for: requesting usage of the cluster resource specifying the type and the configuration description; and requesting a modification of the variable configuration of the non-virtual real hardware resource with the variable configuration. Execution of each program requires a dedicated execution environment on the computer cluster. The generation of each dedicated execution environment requires one or more dedicated virtual resources and one or more dedicated non-virtual real hardware resources with the variable configurations. | 04-09-2015 |
20150100969 | DETECTING DEPLOYMENT CONFLICTS IN HETEROGENOUS ENVIRONMENTS - Techniques are disclosed for managing deployment conflicts between applications executing in one or more processing environments. A first application is executed in a first processing environment and responsive to a request to execute the first application. During execution of the first application, a determination is made to redeploy the first application for execution partially in time on a second processing environment providing a higher capability than the first processing environment in terms of at least a first resource type. A deployment conflict is detected between the first application and at least a second application. | 04-09-2015 |
20150106820 | METHOD AND APPARATUS FOR PROVIDING ALLOCATING RESOURCES - Various embodiments provide a method and apparatus for allocating resources to processes by using statistical allocation based on the determined maximum average resource demand at any time across all applications (“ | 04-16-2015 |
20150106821 | APPARATUS AND METHOD FOR ALLOCATING MULTIPLE TASKS - An apparatus and method for allocating multiple tasks are disclosed. The apparatus for allocating multiple tasks includes a clustering unit and an allocation unit. The clustering unit clusters tasks, generated when application software (SW) operates in an SW platform, based on the application SW. The allocation unit allocates the clustered tasks to a cluster core corresponding to the application SW and allocates the clustered tasks to a core having a distance of one hop from the cluster core. | 04-16-2015 |
20150106822 | METHOD AND SYSTEM FOR SUPPORTING RESOURCE ISOLATION IN MULTI-CORE ARCHITECTURE - Embodiments of the present invention provide a method and a system for supporting resource isolation in a multi-core architecture. In the method and system for supporting resource isolation in a multi-core architecture provided by the embodiments of the present invention, manners of inter-core operating system isolation, memory segment isolation, and I/O resource isolation are adopted, so that operating systems that run on different processing cores of the multi-core processor can run independently without affecting each other. Therefore, the present invention fully uses the advantages of the multi-core processor, namely its high integration level and low overall cost; it is achieved that a failure domain of the multi-core processor remains confined to a single hard disk, and the multi-core processor has high reliability. | 04-16-2015 |
20150113538 | HIERARCHICAL STAGING AREAS FOR SCHEDULING THREADS FOR EXECUTION - One embodiment of the present invention is a computer-implemented method for scheduling a thread group for execution on a processing engine that includes identifying a first thread group included in a first set of thread groups that can be issued for execution on the processing engine, where the first thread group includes one or more threads. The method also includes transferring the first thread group from the first set of thread groups to a second set of thread groups, allocating hardware resources to the first thread group, and selecting the first thread group from the second set of thread groups for execution on the processing engine. One advantage of the disclosed technique is that a scheduler only allocates limited hardware resources to thread groups that are, in fact, ready to be issued for execution, thereby conserving those resources in a manner that is generally more efficient than conventional techniques. | 04-23-2015 |
20150113539 | METHOD FOR EXECUTING PROCESSES ON A WORKER MACHINE OF A DISTRIBUTED COMPUTING SYSTEM AND A DISTRIBUTED COMPUTING SYSTEM - The invention relates to a method for executing processes, preferably media processes, on a worker machine of a distributed computing system with a plurality of worker machines, comprising the steps of a) Selecting one of the worker machines out of the plurality of worker machines for execution of a process to be executed in the distributed computing system and transferring said process to the selected worker machine, b) Executing the transferred process on the selected worker machine, and c) Removing the executed process from the selected worker machine after finishing of the execution of the process, wherein statistical information on resource usage of the process to be executed on one of the worker machines is collected, and the selection of the worker machine is based on a probability resource usage qualifier, wherein the probability resource usage qualifier is extracted from combined statistical information of the process to be executed and of already executed and/or executing processes on the worker machine. The invention also relates to a corresponding system and to a use thereof. | 04-23-2015 |
20150113540 | ASSIGNING RESOURCES AMONG MULTIPLE TASK GROUPS IN A DATABASE SYSTEM - A computer running a database system receives one or more queries, each query comprised of parallel threads of execution working towards the common goal of completing a user request. These threads are grouped into a schedulable object called a task group. The task groups are placed within a specific multiple tier hierarchy, and database system resources allocated to the task groups according to their placement within the hierarchy. Beginning with the top tier of the hierarchy, resources remaining after allocations to each task group within a tier are passed to the next lower tier for allocation. | 04-23-2015 |
20150121390 | CONDITIONAL SERIALIZATION TO IMPROVE WORK EFFORT - In some embodiments of this disclosure, a computer-implemented method includes requesting, by a first thread on a computer system, conditional exclusive access to a first resource for updating the first resource to perform a first task. An indication is received that the requested exclusive access to the first resource is currently unavailable. Unconditional shared access to the first resource is requested after receiving the indication that the requested exclusive access is unavailable. The shared access to the first resource is received. The first resource is used, by a computer processor, through the shared access to perform the first task in lieu of the requested exclusive access. | 04-30-2015 |
20150121391 | METHOD AND DEVICE FOR SCHEDULING MULTIPROCESSOR OF SYSTEM ON CHIP (SOC) - Provided are a method and apparatus for scheduling multiple processors of a system on chip (SOC). The method includes: after receiving a task which is required to be executed, a main central processing unit (CPU) of a system on chip (SOC) obtaining a dynamic execution parameter of the task (S | 04-30-2015 |
20150121392 | SCHEDULING IN JOB EXECUTION - The present invention relates to a method, apparatus, and computer program product for scheduling in job execution. According to embodiments of the present invention, there is provided a method for scheduling a plurality of job slots shared by one or more pre-processors and one or more post-processors in job execution, wherein the data generated by the pre-processor(s) will be fed to the post-processor(s) for processing. The method comprises: determining an overall data generation speed of the pre-processor(s); determining an overall data consumption speed of the post-processor(s); and scheduling allocation of at least one of the job slots between the pre-processor(s) and the post-processor(s) based on the overall data generation speed and the overall data consumption speed. Corresponding apparatus is disclosed as well. | 04-30-2015 |
20150121393 | METHODS AND APPARATUS FOR SOFTWARE CHAINING OF CO-PROCESSOR COMMANDS BEFORE SUBMISSION TO A COMMAND QUEUE - Methods and apparatus of interleaving two or more workloads are presented herein. The methods and apparatus may comprise a schedule controller and a coprocessor. The schedule controller is operative to utilize the first storage unit to manage context stored therein that allows for the coprocessor to interleave the two or more workloads that can be directly supported by the first storage unit. The coprocessor includes a dedicated first storage unit and an engine. | 04-30-2015 |
20150121394 | APPARATUS, METHOD, AND COMPUTER PROGRAM PRODUCT FOR SOLUTION PROVISIONING - In one embodiment, a method for solution provisioning includes establishing a provisioning task, and obtaining a provisioning image for the provisioning task from a hardware memory. A provisioning implementer is configured based on the obtained provisioning image. The provisioning image comprises configuration information used for executing installation, scripts for executing installation, and information for mapping the configuration information to the scripts. In another embodiment, an apparatus for solution provisioning includes a hardware processor, and a task manager running on the hardware processor. The task manager is configured to establish a provisioning task and obtain a provisioning image for the provisioning task. The task manager configures a provisioning implementer based on the provisioning image obtained. The provisioning image comprises configuration information used for executing installation, scripts for executing installation, and information for mapping the configuration information to the scripts. | 04-30-2015 |
20150121395 | Method And Apparatus For Managing Processing Thread Migration Between Clusters Within A Processor - A method, and corresponding apparatus, of managing processing thread migrations within a plurality of memory clusters, includes embedding, in memory components of the plurality of memory clusters, instructions indicative of processing thread migrations; storing, in one or more memory components of a particular memory cluster among the plurality of memory clusters, data configured to designate the particular memory cluster as a sink memory cluster, the sink memory cluster preventing an incoming migrated processing thread from migrating out of the sink memory cluster; and processing one or more processing threads, in one or more of the plurality of memory clusters, in accordance with at least one of the embedded migration instructions and the data stored in the one or more memory components of the sink memory cluster. | 04-30-2015 |
20150128144 | DATA PROCESSING APPARATUS AND METHOD FOR PROCESSING A PLURALITY OF THREADS - A data processing apparatus has processing circuitry for processing threads each having thread state data. The threads may be processed in thread groups, with each thread group comprising a number of threads processed in parallel with a common program executed for each thread. Several thread state storage regions are provided, each with a fixed number of thread state entries for storing thread state data for a corresponding thread. At least two of the storage regions have different fixed numbers of entries. The processing circuitry processes as the same thread group threads having thread state data stored in the same storage region and processes threads having thread state data stored in different storage regions as different thread groups. | 05-07-2015 |
20150128145 | SYSTEM AND METHOD FOR ROUTING WORK REQUESTS TO MINIMIZE ENERGY COSTS IN A DISTRIBUTED COMPUTING SYSTEM - A system for automated routing of work requests is provided. Particularly, a system for routing work requests in a distributed computing system to minimize an energy cost associated with operating the system is provided. A resource utilization module configured to receive resource utilization information, the resource utilization information including indications of utilization corresponding to a plurality of computing resources, is disclosed. Furthermore, an energy consumption module configured to receive energy consumption information, the energy consumption information including indications of energy consumption corresponding to the plurality of computing resources, is disclosed. Additionally, a routing module configured to route a work request to one of the plurality of computing resources based at least in part on the received utilization information and the received energy consumption information to minimize energy costs of the plurality of computing resources is disclosed. | 05-07-2015 |
20150128146 | ADJUSTING PAUSE-LOOP EXITING WINDOW VALUES - In a method for adjusting a Pause-loop exiting window value, one or more processors execute an exit instruction for a first virtual CPU (vCPU) in a virtualized computer environment based on the first vCPU exceeding a first Pause-loop exiting (PLE) window value. The one or more processors initiate a first directed yield from the first vCPU to a second vCPU in the virtualized computer environment. The one or more processors determine whether the first directed yield was successful. The one or more processors adjust the first PLE window value based on the determination of whether the first directed yield was successful. | 05-07-2015 |
20150128147 | MODIFIED JVM WITH MULTI-TENANT APPLICATION DOMAINS AND MEMORY MANAGEMENT - A method and system for operating a modified JAVA Virtual Machine (JVM) which is able to simultaneously host multiple JAVA application programs are disclosed. The JVM is modified to maintain a computer record of one or more application domains, each having one or more classes. For each application domain, a first utilization count of the total memory volume in bytes occupied by all allocated instances of the application class is maintained. Preferably this count is incremented with each new instance of an application class, and decremented during or after each garbage collection event which reclaims allocated application class instances. | 05-07-2015 |
20150128148 | CONTROL DEVICE, PROCESSING DEVICE, AND INFORMATION PROCESSING METHOD - There is provided a control device including an allocation unit configured to allocate processing of tasks to any of respective processing devices on the basis of contents of the tasks and at least any of attributes and states of the processing devices. | 05-07-2015 |
20150135186 | COMPUTER SYSTEM, METHOD AND COMPUTER-READABLE STORAGE MEDIUM FOR TASKS SCHEDULING - A computer system is provided. The computer system includes multiple computing devices and a processing unit. The processing unit comprises a device monitoring module, a task classifying module and a task scheduling module. The processing unit is coupled to the computing devices. The device monitoring module is configured to monitor the computing devices so as to obtain loading data. The task classifying module is configured to classify related tasks of multiple tasks as a first group, to classify independent tasks of multiple tasks as a second group and to find a critical path of the related tasks in the first group. The task scheduling module is configured to set a first processing schedule of the first group according to the critical path and the loading data and to set a second processing schedule of the second group according to the first processing schedule. | 05-14-2015 |
20150135187 | METHOD FOR MONITORING RESOURCES IN COMPUTING DEVICE, AND COMPUTING DEVICE - A resource monitoring method comprises the steps of: generating process information including a process identifier, process user, process name, CPU usage, and IO usage; determining at least one first process having the same process user and process name from among a plurality of processes currently being executed; | 05-14-2015 |
20150135188 | SYSTEM AND METHOD FOR CONTROLLING EXECUTION OF JOBS PERFORMED BY PLURAL INFORMATION PROCESSING DEVICES - A system includes a plurality of information processing devices and a management device configured to manage execution of jobs performed by the plurality of information processing devices. The management device detects any one of the plurality of information processing devices which is executing a first job, at a predetermined time, and determines whether a second information processing device different from the first information processing device is able to be allocated to a second job which is scheduled to use the first information processing device being used by the first job after the predetermined time, among the plurality of information processing devices. The management device modifies an execution schedule of the jobs such that the second job is executed using the second information processing device when it is determined that the second information processing device is able to be allocated to the second job. | 05-14-2015 |
20150135189 | SOFTWARE-BASED THREAD REMAPPING FOR POWER SAVINGS - On a multi-core processor that supports simultaneous multi-threading, the power state for each logical processor is tracked. Upon indication that a logical processor is ready to transition into a deep low power state, software remapping (e.g., thread-hopping) may be performed. Accordingly, if multiple logical processors, on different cores, are in a low-power state, they are re-mapped to same core and the core is then placed into a low power state. Other embodiments are described and claimed. | 05-14-2015 |
20150135190 | Method and Devices for Dynamic Management of a Server Application on a Server Platform - Method, devices and computer programs for a dynamic management of a first server application on a first server platform of a telecommunication system are disclosed wherein a further server application is operating or installable on the first server platform or a further server platform. The first server platform has a maximum processing capacity and a capacity fraction of the maximum processing capacity is assignable to the first server application reserving the capacity fraction for processing the first server application. A determination of a required processing capacity for processing at least one of the first server application and the further server application, an analysis of the required processing capacity for an assignment of the capacity fraction to the first server application, and an assignment of the capacity fraction are performed. | 05-14-2015 |
20150143380 | SCHEDULING WORKLOADS AND MAKING PROVISION DECISIONS OF COMPUTER RESOURCES IN A COMPUTING ENVIRONMENT - Embodiments of the present invention disclose a computer-implemented method, computer program product, and system for workload scheduling and resource provisioning. In one embodiment, in accordance with the present invention, the computer implemented method includes the steps of scheduling a set of pending workloads for execution on computer resources in a computing environment; identifying a workload in the set of pending workloads that is scheduled to utilize hypothetic resources, wherein hypothetic resources are idle computer resources that are currently not available, but can be made available to execute workloads through provisioning actions; holding the identified workload from dispatch to hypothetic resources for a holding period, wherein the holding period is a customizable duration of time; provisioning the hypothetic resources corresponding to computer resource requirements of the identified workload; determining whether the provisioned hypothetic resources have become available during the holding period. | 05-21-2015 |
20150143381 | COMPUTING SESSION WORKLOAD SCHEDULING AND MANAGEMENT OF PARENT-CHILD TASKS - A single workload scheduler schedules sessions and tasks having a tree structure to resources, wherein the single workload scheduler has scheduling control of the resources and the tasks of the parent-child workload sessions and tasks. The single workload scheduler receives a request to schedule a child session created by a scheduled parent task that when executed results in a child task; the scheduled parent task is dependent on a result of the child task. The single workload scheduler receives a message from the scheduled parent task yielding a resource based on the resource not being used by the scheduled parent task, schedules tasks to backfill the resource, and returns the resource yielded by the scheduled parent task to the scheduled parent task based on receiving a resume request from the scheduled parent task or determining dependencies of the scheduled parent task have been met. | 05-21-2015 |
20150143382 | SCHEDULING WORKLOADS AND MAKING PROVISION DECISIONS OF COMPUTER RESOURCES IN A COMPUTING ENVIRONMENT - Embodiments of the present invention disclose a computer-implemented method, computer program product, and system for workload scheduling and resource provisioning. In one embodiment, in accordance with the present invention, the computer implemented method includes the steps of scheduling a set of pending workloads for execution on computer resources in a computing environment; identifying a workload in the set of pending workloads that is scheduled to utilize hypothetic resources, wherein hypothetic resources are idle computer resources that are currently not available, but can be made available to execute workloads through provisioning actions; holding the identified workload from dispatch to hypothetic resources for a holding period, wherein the holding period is a customizable duration of time; provisioning the hypothetic resources corresponding to computer resource requirements of the identified workload; determining whether the provisioned hypothetic resources have become available during the holding period. | 05-21-2015 |
20150143383 | APPARATUS AND JOB SCHEDULING METHOD THEREOF - An apparatus and a job scheduling method are provided. For example, the apparatus is a multi-core processing apparatus. The apparatus and method minimize performance degradation of a core caused by sharing resources by dynamically managing a maximum number of jobs assigned to each core of the apparatus. The apparatus includes at least one core including an active cycle counting unit configured to store a number of active cycles and a stall cycle counting unit configured to store a number of stall cycles and a job scheduler configured to assign at least one job to each of the at least one core, based on the number of active cycles and the number of stall cycles. When the ratio of the number of stall cycles to a number of active cycles for a core is too great, the job scheduler assigns fewer jobs to that core to improve performance. | 05-21-2015 |
20150150019 | SCHEDULING COMPUTING TASKS FOR MULTI-PROCESSOR SYSTEMS - In an example embodiment, one or more series of executable components may be configured to execute a respective process, and one or more corresponding scheduling components may be configured to direct migration of each of the corresponding one or more series of executable components to a processing element thereof. | 05-28-2015 |
20150150020 | SYSTEM AND METHOD FACILITATING PERFORMANCE PREDICTION OF MULTI-THREADED APPLICATION IN PRESENCE OF RESOURCE BOTTLENECKS - The present disclosure generally relates to a system and method for predicting performance of a multi-threaded application, and particularly, to a system and method for predicting performance of the multi-threaded application in the presence of resource bottlenecks. In one embodiment, a system for predicting performance of a multi-threaded software application is disclosed. The system may include one or more processors and a memory storing processor-executable instructions for configuring a processor to: represent one or more queuing networks corresponding to resources, the resources being employed to run the multi-threaded application; detect, based on the one or more queuing networks, a concurrency level associated with encountering of a first resource bottleneck; determine, based on the concurrency level, performance metrics associated with the multi-threaded application; and predict the performance of the multi-threaded application based on the performance metrics. | 05-28-2015 |
20150293789 | SYSTEM AND METHOD FOR PROVIDING OBJECT TRIGGERS - The present invention provides for systems and methods of dynamically controlling a cluster or grid environment. The method comprises attaching a trigger to an object and firing the trigger based on a trigger attribute. The cluster environment is modified by actions initiated when the trigger is fired. Each trigger has trigger attributes that govern when it is fired and actions it will take. The use of triggers enables a cluster environment to dynamically be modified with arbitrary actions to accommodate needs of arbitrary objects. Example objects include a compute node, compute resources, a cluster, groups of users, user credentials, jobs, resources managers, peer services and the like. | 10-15-2015 |
20150293791 | DATA PROCESSING WORK ALLOCATION - A method, system, and/or computer program product allocates computer processing work. One or more processors identify: an input data that is stored in a first computer for processing by a computer program; a virtual machine, stored in a second computer, that is capable of executing the computer program; a first set of constraint rules against moving the input data from the first computer; and a second set of constraint rules against moving the virtual machine from the second computer. The one or more processors assign a weight to each constraint rule, and sum the weight of all constraint rules that are applicable. In response to the first total constraint rule weight exceeding the second total constraint rule weight, movement of the input data from the first computer to the second computer is prohibited and the virtual machine is moved from the second computer to the first computer. | 10-15-2015 |
20150293792 | PICOENGINE MULTI-PROCESSOR WITH TASK ASSIGNMENT - A general purpose PicoEngine Multi-Processor (PEMP) includes a hierarchically organized pool of small specialized picoengine processors and associated memories. A stream of data input values is received onto the PEMP. Each input data value is characterized, and from the characterization a task is determined. Picoengines are selected in a sequence. When the next picoengine in the sequence is available, it is then given the input data value along with an associated task assignment. The picoengine then performs the task. An output picoengine selector selects picoengines in the same sequence. If the next picoengine indicates that it has completed its assigned task, then the output value from the selected picoengine is output from the PEMP. By changing the sequence used, more or less of the processing power and memory resources of the pool is brought to bear on the incoming data stream. The PEMP automatically disables unused picoengines and memories. | 10-15-2015 |
20150293794 | PROCESSING METHOD FOR A MULTICORE PROCESSOR AND MULTICORE PROCESSOR - The present invention relates to a multicore processor | 10-15-2015 |
20150301863 | Allocating Resources to Threads Based on Speculation Metric - Methods, reservation stations and processors for allocating resources to a plurality of threads based on the extent to which the instructions associated with each of the threads are speculative. The method comprises receiving a speculation metric for each thread at a reservation station. Each speculation metric represents the extent to which the instructions associated with a particular thread are speculative. The more speculative an instruction, the more likely the instruction has been incorrectly predicted by a branch predictor. The reservation station then allocates functional unit resources (e.g. pipelines) to the threads based on the speculation metrics and selects a number of instructions from one or more of the threads based on the allocation. The selected instructions are then issued to the functional unit resources. | 10-22-2015 |
20150301864 | RESOURCE ALLOCATION METHOD - A resource allocation method adapted to a mobile device having a multi-core central processing unit (CPU) is provided. The CPU executes at least one application. The method includes steps as follows. A usage status of each of the at least one application is obtained according to a level of concern of a user for each of the at least one application. A sensitivity of at least one thread of each of the at least one application is determined according to the usage status of each of the at least one application. Resources of the CPU are allocated according to the sensitivity of the at least one thread run by the cores. | 10-22-2015 |
20150301865 | HARDWARE RESOURCE ALLOCATION FOR APPLICATIONS - In some examples, in a virtual environment, multiple virtual machines may be executing on a physical computing node. Each of the multiple virtual machines may host one or more applications, each of which utilizes at least a portion of a hardware resource of the physical computing node. A hypervisor of the virtual environment may be configured to recognize utilization patterns of the applications and allocate portions of the hardware resource to each of the applications in accordance with respective utilization patterns of the applications. | 10-22-2015 |
20150301871 | BUSY LOCK AND A PASSIVE LOCK FOR EMBEDDED LOAD MANAGEMENT - Embodiments relate to managing exclusive control of a shareable resource between a plurality of concurrently executing threads. An aspect includes determining the number of concurrently executing threads waiting for exclusive control of the shareable resource. Another aspect includes, responsive to a determination that the number of concurrently executing threads waiting for exclusive control of the shareable resource exceeds a pre-determined value, one or more of said concurrently executing threads terminating its wait for exclusive control of the shareable resource. Another aspect includes, responsive to a determination that the number of concurrently executing threads waiting for exclusive control of the shareable resource is less than a pre-determined value, one or more of said one or more concurrently executing threads which terminated its wait for exclusive control of the shareable resource, restarting a wait for exclusive control of the shareable resource. | 10-22-2015 |
20150309840 | AUTOMATED CAPACITY PROVISIONING METHOD USING HISTORICAL PERFORMANCE DATA - The method may include collecting performance data relating to processing nodes of a computer system which provide services via one or more applications, analyzing the performance data to generate an operational profile characterizing resource usage of the processing nodes, receiving a set of attributes characterizing expected performance goals in which the services are expected to be provided, and generating at least one provisioning policy based on an analysis of the operational profile in conjunction with the set of attributes. The at least one provisioning policy may specify a condition for re-allocating resources associated with at least one processing node in a manner that satisfies the performance goals of the set of attributes. The method may further include re-allocating, during runtime, the resources associated with the at least one processing node when the condition of the at least one provisioning policy is determined as satisfied. | 10-29-2015 |
20150309842 | Core Resource Allocation Method and Apparatus, and Many-Core System - A core resource allocation method and apparatus, and a many-core system for allocating core resources of the many-core system are disclosed. In the method, after acquiring a quantity of idle cores needed for a user process, an execution core of the many-core system determines at least two scattered core partitions meeting the quantity, where each core partition is a set of one or multiple cores, and all cores in each core partition are idle cores. Then, the execution core combines the at least two scattered core partitions to form one continuous core partition, and allocates the formed continuous core partition to the user process. In this way, process interaction can be directly performed between different cores in a continuous core partition allocated to a user process, thereby improving efficiency of communication between processes. Furthermore, a waste of core resources can be effectively avoided. | 10-29-2015 |
20150317187 | PLACING OBJECTS ON HOSTS USING HARD AND SOFT CONSTRAINTS - Objects are placed on hosts using hard constraints and soft constraints. The objects to be placed on the host may be many different types of objects. For example, the objects to place may include tenants in a database, virtual machines on a physical machine, databases on a virtual machine, tenants in directory forests, tenants in farms, and the like. When determining a host for an object, a pool of hosts is filtered through a series of hard constraints. The remaining pool of hosts is further filtered through soft constraints to help in selection of a host. A host is then chosen from the remaining hosts. | 11-05-2015 |
20150317188 | SERVICE RESOURCE ALLOCATION - Disclosed are various embodiments for a resource allocation application. Usage data for application program interfaces is aggregated over time. Limits for an allocation of resources for each of the application program interfaces are calculated as a function of the usage data. Limits are recalculated as new application program interfaces are added. | 11-05-2015 |
20150317189 | APPLICATION EXECUTION CONTROLLER AND APPLICATION EXECUTION METHOD - A controller to instruct execution in an environment of plural computing resources. The controller comprises: an information collecting unit to collect available resource information of computing resources available to execute an application, indicating an amount and/or type of computing resource available in categories of computing resource; scalability information including an indication of application execution rate; and performance target information including an indication of performance targets. The controller further comprises: a configuration selection unit to select a configuration which will come closest to meeting the performance targets; and an instructing unit to instruct the execution of the application using the selected configuration. | 11-05-2015 |
20150324229 | PROPAGATION OF TASK PROGRESS THROUGH THE USE OF COALESCED TIME INTERVALS - Approaches are provided for calculating a corresponding date of progress towards completion of a task regardless of a quantity being used to track the progress. An approach includes enumerating a list of time intervals for each sub-task of at least one summary task. The approach further includes distributing a progress value over a duration of each sub-task. The approach further includes creating, by at least one computing device, a coalesced set of time intervals for the at least one summary task based on the list of time intervals enumerated for each sub-task. The approach further includes traversing the coalesced set of time intervals and accumulating portions of the progress value until a required progress is obtained. The approach further includes determining a date of progress for the at least one summary task based on the accumulated portions of the progress value. | 11-12-2015 |
20150324233 | DATA STORAGE RESOURCE ALLOCATION USING BLACKLISTING OF RESOURCE REQUEST POOLS SUCH AS CATEGORIES OF DATA STORAGE REQUESTS - A resource allocation system begins with an ordered plan for matching requests to resources that is sorted by priority. The resource allocation system optimizes the plan by determining those requests in the plan that will fail if performed. The resource allocation system removes or defers the determined requests. In addition, when a request that is performed fails, the resource allocation system may remove requests that require similar resources from the plan. Moreover, when resources are released by a request, the resource allocation system may place the resources in a temporary holding area until the resource allocation returns to the top of the ordered plan so that lower priority requests that are lower in the plan do not take resources that are needed by waiting higher priority requests higher in the plan. | 11-12-2015 |
20150324234 | TASK SCHEDULING METHOD AND RELATED NON-TRANSITORY COMPUTER READABLE MEDIUM FOR DISPATCHING TASK IN MULTI-CORE PROCESSOR SYSTEM BASED AT LEAST PARTLY ON DISTRIBUTION OF TASKS SHARING SAME DATA AND/OR ACCESSING SAME MEMORY ADDRESS(ES) - A task scheduling method for a multi-core processor system includes at least the following steps: when a first task belongs to a thread group currently in the multi-core processor system, where the thread group has a plurality of tasks sharing same specific data and/or accessing same specific memory address(es), and the tasks comprise the first task and at least one second task, determining a target processor core in the multi-core processor system based at least partly on distribution of the at least one second task in at least one run queue of at least one processor core in the multi-core processor system, and dispatching the first task to a run queue of the target processor core. | 11-12-2015 |
20150324237 | System and Method for Limiting the Impact of Stragglers in Large-Scale Parallel Data Processing - A large-scale data processing system and method including a plurality of processes, wherein a master process assigns input data blocks to respective map processes and partitions of intermediate data are assigned to respective reduce processes. In each of the plurality of map processes, an application-independent map program retrieves a sequence of input data blocks assigned thereto by the master process, applies an application-specific map function to each input data block in the sequence to produce the intermediate data, and stores the intermediate data in high speed memory of the interconnected processors. Each of the plurality of reduce processes receives a respective partition of the intermediate data from the high speed memory of the interconnected processors while the map processes continue to process input data blocks, and an application-specific reduce function is applied to the respective partition of the intermediate data to produce output values. | 11-12-2015 |
20150324239 | DYNAMIC LOAD BALANCING OF HARDWARE THREADS IN CLUSTERED PROCESSOR CORES USING SHARED HARDWARE RESOURCES, AND RELATED CIRCUITS, METHODS, AND COMPUTER-READABLE MEDIA - Dynamic load balancing of hardware threads in clustered processor cores using shared hardware resources, and related circuits, methods, and computer readable media are disclosed. In one aspect, a dynamic load balancing circuit comprising a control unit is provided. The control unit is configured to determine whether a suboptimal load condition exists between a first cluster and a second cluster of a clustered processor core. If a suboptimal load condition exists, the control unit is further configured to transfer a content of private register(s) of a first hardware thread of the first cluster to private register(s) of a second hardware thread of the second cluster via shared hardware resources of the first hardware thread and the second hardware thread. The control unit is also configured to exchange a first identifier associated with the first hardware thread with a second identifier associated with the second hardware thread via the shared hardware resources. | 11-12-2015 |
20150331719 | APPARATUS AND JOB SCHEDULING METHOD THEREOF - An apparatus and a job scheduling method are provided. For example, the apparatus is a multi-core processing apparatus. The apparatus and method minimize performance degradation of a core caused by sharing resources by dynamically managing a maximum number of jobs assigned to each core of the apparatus. The apparatus includes at least one core including an active cycle counting unit configured to store a number of active cycles and a stall cycle counting unit configured to store a number of stall cycles and a job scheduler configured to assign at least one job to each of the at least one core, based on the number of active cycles and the number of stall cycles. When the ratio of the number of stall cycles to a number of active cycles for a core is too great, the job scheduler assigns fewer jobs to that core to improve performance. | 11-19-2015 |
20150331720 | MULTI-THREADED, LOCKLESS DATA PARALLELIZATION - In general, techniques are described for parallelizing a high-volume data stream using a data structure that enables lockless access by a multi-threaded application. In some examples, a multi-core computing system includes an application that concurrently executes multiple threads on cores of the system. The multiple threads include one or more send threads each associated with a different lockless data structure that each includes both a circular buffer and a queue. One or more receive threads serially retrieve incoming data from a data stream or input buffer, copy data blocks to one of the circular buffers, and push metadata for the copied data blocks to the queue. Each of the various send threads, concurrent to the operation of the receive threads, dequeues the next metadata from its associated queue, reads respective blocks of data from its associated circular buffers based on metadata information, and offloads the block to a server. | 11-19-2015 |
20150331721 | PROCESS MIGRATION METHOD, COMPUTER SYSTEM AND COMPUTER PROGRAM - A process migration method comprising executing a computer program using a group of parallel processes, each process carrying out a computation, the execution using current computing resources to provide current group data as a result of the computations, deciding to change the resources, and making a choice between increasing the resources; decreasing the resources; and moving to different resources, wherein moving to different resources can include increase, decrease or maintenance of the resources. The method comprising communication between the current computing resources and changed computing resources to allow the program to execute on the changed resources, the communication comprising migration of the execution to changed resources and synchronization of migrated group data with the current group data; wherein execution using the current resources overlaps in time with the communication. | 11-19-2015 |
20150331723 | Mobile Device Workload Management for Cloud Computing Using SIP And Presence To Control Workload And Method Thereof - A method is implemented in a computer infrastructure having computer executable code tangibly embodied on a computer readable storage medium having programming instructions. The programming instructions are operable to manage workload for cloud computing by transferring workload to at least one mobile device using Session Initiation Protocol (SIP). | 11-19-2015 |
20150331724 | WORKLOAD BALANCING TO HANDLE SKEWS FOR BIG DATA ANALYTICS - Data partitions are assigned to reducer tasks using a cost-based and workload balancing approach. At least one of the initial data partitions remains unassigned in an unassigned partitions pool. Each reducer while working on its assigned partitions makes dynamic run-time decisions as to whether to: reassign a partition to another reducer, accept a partition from another reducer, select a partition from the unassigned partitions pool, and/or reassign a partition back to the unassigned partitions pool. | 11-19-2015 |
20150339128 | MICROVISOR RUN TIME ENVIRONMENT OFFLOAD PROCESSOR - Embodiments here include systems and methods for running an application via a microvisor processor in communication with a memory and a storage. For example, one method includes installing an application. The method also includes identifying an operating system that the application is configured to execute within. The method also includes identifying a resource required by the application to execute, wherein the resource is part of the operating system. The method also includes identifying a location of the resource in the storage. The method also includes retrieving the resource from the storage. The method also includes bundling the application and the resource in the memory. The method also includes executing the application using the resource. | 11-26-2015 |
20150339163 | DEVICES AND METHODS FOR CONTROLLING OPERATION OF ARITHMETIC AND LOGIC UNIT - In various embodiments, devices and methods for controlling the operation of at least one arithmetic and logic unit are disclosed. More particularly, devices may comprise at least one arithmetic and logic unit for processing a task, and a processor for controlling the arithmetic and logic unit according to an electric current consumed by the arithmetic and logic unit at an operating frequency of the arithmetic and logic unit. | 11-26-2015 |
20150339164 | SYSTEMS AND METHODS FOR MANAGING SPILLOVER LIMITS IN A MULTI-CORE SYSTEM - The present disclosure is directed to a system for managing spillover via a plurality of cores of a multi-core device intermediary to a plurality of clients and one or more services. The system may include a device intermediary to a plurality of clients and one or more services. The system may include a spillover limit of a resource. The device may also include a plurality of packet engines, each operating on a corresponding core of a plurality of cores of the device. The system may include a pool manager allocating to each of the plurality of packet engines a number of resource uses from an exclusive quota pool and shared quota pool based on the spillover limit. The device may also include a virtual server of a packet engine of the plurality of packet engines. The virtual server manages client requests to one or more services. The device determines that the number of resources used by a packet engine of the plurality of packet engines has reached the allocated number of resource uses of the packet engine, and responsive to the determination, forwards to a backup virtual server a request of a client of the plurality of clients received by the device for the virtual server. | 11-26-2015 |
20150339168 | WORK QUEUE THREAD BALANCING - Various embodiments are directed to systems and methods for work queue thread balancing. A global thread pool manager may be configured to receive a request to add a work item to a constituent work queue. The constituent work queue may be described by a work queue thread property. The global thread pool manager may add the work item to the constituent work queue and match the work item to a global thread selected from a global thread pool. The global thread may be configured according to the work queue thread property to generate a configured global thread. The configured global thread may execute the work item. | 11-26-2015 |
20150339169 | REACTIVE AUTO-SCALING OF CAPACITY - Examples of systems and methods are described for managing computing capacity by a provider of computing resources. The computing resources may include program execution capabilities, data storage or management capabilities, network bandwidth, etc. Multiple user programs can consume a single computing resource, and a single user program can consume multiple computing resources. Changes in usage and other environmental factors can require scaling of the computing resources to reduce or prevent a negative impact on performance. In some implementations, a fuzzy logic engine can be used to determine the appropriate adjustments to make to the computing resources associated with a program in order to keep a system metric within a desired operating range. | 11-26-2015 |
20150339170 | Method for Allocating Processor Resources Precisely by Means of Predictive Scheduling Based on Current Credits - The present invention discloses a method for allocating processor resources precisely by means of predictive scheduling based on current credits, wherein the run queue of the Credit scheduler comprises VCPUs with UNDER priority located at the head of the queue, VCPUs with OVER priority, VCPUs with IDLE priority located at the end of the queue and a wait queue for saving all VCPUs with overdrawn credits. Based on credit values of VCPUs, the method predicts the time of the credit overdrawing, and sets a timer which is triggered after the time to notify the Credit scheduler to stop scheduling the corresponding VCPU. Thus the method effectively controls credit consumption and achieves the object of precise allocation of processor resources. The method is suitable for multi-core environments and preserves the advantages of the existing Credit scheduler: quick response for small task loads and load balancing. | 11-26-2015 |
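The overdraw prediction in the entry above reduces to a simple rate calculation; the sketch below assumes a constant per-second credit consumption rate, which the application does not specify.

```python
def predict_overdraw_time(credits, consumption_rate):
    """Predict seconds until a VCPU overdraws its credits.

    credits: remaining credit value of the VCPU
    consumption_rate: credits consumed per second while running
    Returns the delay at which a timer should fire so the scheduler
    can stop scheduling the VCPU when its credits run out.
    """
    if consumption_rate <= 0:
        raise ValueError("consumption rate must be positive")
    return credits / consumption_rate

# A VCPU holding 300 credits and burning 100 credits/s overdraws in 3 s,
# so the timer is armed for 3 seconds from now.
delay = predict_overdraw_time(300, 100)
```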
20150339172 | TASK MANAGEMENT ON COMPUTING PLATFORMS - Technologies are generally described for task management on computing platforms. In some examples, a method performed under control of a task management system may include determining a relationship among a plurality of tasks executed on a first platform to identify one or more associated tasks; identifying at least one attribute of each of the one or more associated tasks; generating a job that includes the one or more associated tasks and the identified at least one attribute of each of the one or more associated tasks; and instantiating, on a second platform, the one or more associated tasks included in the job based on the at least one attribute of each of the one or more associated tasks, in response to a request to launch the job. | 11-26-2015 |
20150347184 | METHOD FOR TASK GROUP MIGRATION AND ELECTRONIC DEVICE SUPPORTING THE SAME - Provided is a method for task migration in an electronic device. The method includes: assigning a task with a specific function to a first group among groups formed according to a preset criterion; assigning the first group to one of first processing units functionally connected to the electronic device; and migrating, when the first group matches preset criteria of second processing units functionally connected to the electronic device, the first group to one of the second processing units. Based on this, it is possible to create various other embodiments. | 12-03-2015 |
20150347190 | SYSTEM AND METHOD FOR COORDINATING PROCESS AND MEMORY MANAGEMENT ACROSS DOMAINS - A method at a computing device having a plurality of concurrently operative operating systems, the method comprising: operating a proxy process within a target operating system on the computing device; receiving, from an originating operating system, a request for resources from a target process within the target operating system at the proxy process; requesting, from the proxy process, the resources of the target process; and returning a handle to the target process from the proxy process to the originating operating system. | 12-03-2015 |
20150347191 | SYSTEM AND METHOD FOR DYNAMIC RESCHEDULING OF MULTIPLE VARYING RESOURCES WITH USER SOCIAL MAPPING - A system and method for scheduling resources includes a memory storage device having a resource data structure stored therein which is configured to store a collection of available resources, time slots for employing the resources, dependencies between the available resources and social map information. A processing system is configured to set up a communication channel between users, between a resource owner and a user or between resource owners to schedule users in the time slots for the available resources. The processing system employs social mapping information of the users or owners to assist in filtering the users and owners and initiating negotiations for the available resources. | 12-03-2015 |
20150347262 | PERFORMANCE MANAGEMENT BASED ON RESOURCE CONSUMPTION - A method and apparatus of a device for performance management by terminating application programs that consume an excessive amount of system resources is described. The device receives a resource consumption threshold and a detection period. The device further monitors a resource usage of an application program. The device determines whether the resource usage of the application program exceeds the resource consumption threshold for the detection period. The device further terminates the application program when the resource usage exceeds the resource consumption threshold for the detection period. | 12-03-2015 |
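A minimal sketch of the threshold-plus-detection-period rule in the entry above, assuming sampled usage readings and a termination callback (both names are hypothetical):

```python
from collections import deque

class ConsumptionWatchdog:
    """Terminate a program whose resource usage stays above a
    threshold for a full detection period. Sampling cadence and the
    termination hook are assumptions; the published claims only
    describe the threshold and detection period."""

    def __init__(self, threshold, detection_period_samples, terminate):
        self.threshold = threshold
        # Bounded window: holds only the most recent samples.
        self.window = deque(maxlen=detection_period_samples)
        self.terminate = terminate  # callback invoked on violation

    def sample(self, usage):
        """Record one usage reading; fire termination if every sample
        in a full window exceeds the threshold."""
        self.window.append(usage)
        if (len(self.window) == self.window.maxlen
                and all(u > self.threshold for u in self.window)):
            self.terminate()
            return True
        return False
```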
20150355943 | WEIGHTED STEALING OF RESOURCES - In a computer system with multiple job queues and limited resources, an initial allocation of resources is given to each job queue. The utilization of these initially allocated resources is monitored, and queues with excess resources may have those resources stolen and temporarily redistributed to queues with unmet resource needs. | 12-10-2015 |
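The steal-and-redistribute idea in the entry above can be sketched as follows; the largest-need-first handout order is an assumption about the weighting policy.

```python
def steal_and_redistribute(allocated, used, demand):
    """Rebalance resources among job queues.

    allocated/used/demand: dicts mapping queue name -> resource units.
    Units that are allocated but unused are stolen into a pool, then
    handed to queues whose demand exceeds their allocation, largest
    unmet need first (an assumed policy). Returns the new allocation
    map and any leftover pool units.
    """
    new = dict(allocated)
    pool = 0
    for q in new:
        excess = new[q] - used[q]
        if excess > 0:
            new[q] -= excess   # steal the idle surplus
            pool += excess
    for q in sorted(new, key=lambda q: demand[q] - new[q], reverse=True):
        need = demand[q] - new[q]
        if need > 0 and pool > 0:
            give = min(need, pool)
            new[q] += give
            pool -= give
    return new, pool
```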
20150355944 | USING FUNCTIONAL RESOURCES OF A COMPUTING DEVICE WITH WEB-BASED PROGRAMMATIC RESOURCES - A request is received from a web-based programmatic resource executing within an application that is installed on the computing device. From the request, one or more functional resources of the computing device are identified. The functional resources are not otherwise accessible to the web-based programmatic resource executing within the installed application on the computing device. A task is performed using the identified one or more functional resources. | 12-10-2015 |
20150355945 | Adaptive Scheduling Policy for Jobs Submitted to a Grid - Machines, systems and methods for providing a job description for execution in a computing environment, the method comprising receiving a job description, wherein the job description defines a set of job alternatives based on an order of priority and conditions associated with execution of the job alternatives; processing the job alternatives to determine whether resources for executing at least a first job alternative are available, considering respective first conditions defined in the job description for the first job alternative; selecting a first computing element implemented in a virtualized computing environment, wherein the selected first computing element has sufficient resources to satisfy resource requirements defined in the job description for the first job alternative; and submitting the job to the first computing element for execution. | 12-10-2015 |
20150355946 | "Systems of System" and method for Virtualization and Cloud Computing System - A "systems of system" and method for virtualization and cloud computing system are disclosed. | 12-10-2015 |
20150355947 | RESOURCE PROVISIONING BASED ON LOGICAL PROFILES AND PIECEWISE OBJECTIVE FUNCTIONS - Described are techniques for selecting resources for provisioning. A usage definition, including a piecewise objective function, and first set of logical profiles based on core criteria are selected. Each of the logical profiles in the first set represents a resource set characterized by a core criteria value set that specifies values for the core criteria. A second set of resulting objective function values are determined by evaluating one piece of the objective function for each of the logical profiles in the first set. A highest ranked one of the resulting objective function values in the second set is selected having a corresponding first logical profile of the first set and a corresponding core criteria value set. A third set of resources is selected which is characterized by the corresponding core criteria value set for the first logical profile. The third set of resources is any of recommended or selected for provisioning. | 12-10-2015 |
20150355948 | DYNAMICALLY CONFIGURABLE HARDWARE QUEUES FOR DISPATCHING JOBS TO A PLURALITY OF HARDWARE ACCELERATION ENGINES - A computer system having a plurality of processing resources, including a sub-system for scheduling and dispatching processing jobs to a plurality of hardware accelerators, the subsystem further comprising a job requestor, for requesting jobs having bounded and varying latencies to be executed on the hardware accelerators; a queue controller to manage processing job requests directed to a plurality of hardware accelerators; and multiple hardware queues for dispatching jobs to the plurality of hardware acceleration engines, each queue having a dedicated head of queue entry, dynamically sharing a pool of queue entries, having configurable queue depth limits, and means for removing one or more jobs across all queues. | 12-10-2015 |
20150355951 | OPTIMIZING EXECUTION AND RESOURCE USAGE IN LARGE SCALE COMPUTING - A method for tuning workflow settings in a distributed computing workflow comprising sequential interdependent jobs includes pairing a terminal stage of a first job and a leading stage of a second, sequential job to form an optimization pair, in which data segments output by the terminal stage of the first job comprises data input for the leading stage of the second job. The performance of the optimization pair is tuned by determining, with a computational processor, an estimated minimum execution time for the optimization pair and increasing the minimum execution time to generate an increased execution time. The method further includes calculating a minimum number of data segments that still permit execution of the optimization pair within the increased execution time. | 12-10-2015 |
20150355992 | APPLICATION PERFORMANCE PERCEPTION METER - Embodiments of the present invention provide a method, system and computer program product for application performance perception metering. In an embodiment of the invention, an application performance perception metering method includes initially monitoring resource performance in a computing device during utilization of a computer program through the computing device. Thereafter, the monitored resource performance is compared with historical resource performance during past utilization of the computer program through the computing device. Finally, a prompt can be displayed in the computing device responsive to a determination that the monitored resource performance is deficient relative to the historical resource performance. However, a prompt also can be displayed in the computing device indicating that the computer program is performing poorly based upon a determination that the monitored resource consumption is comparable to the historical resource consumption. | 12-10-2015 |
20150363232 | METHODS AND SYSTEMS FOR CALCULATING STATISTICAL QUANTITIES IN A COMPUTING ENVIRONMENT - This disclosure is directed to methods and systems for calculating statistical quantities of computational resources used by distributed data sources in a computing environment. In one aspect, a master node receives a query regarding use of computational resources used by distributed data sources of a computing environment. The data sources generate metric data that represents use of the computational resources and distribute the metric data to two or more worker nodes. The master node directs each worker node to generate worker-node data that represents the metric data received by each of the worker nodes and each worker node sends worker-node data to the master node. The master node receives the worker-node data and calculates a master-data structure based on the worker-node data, which may be used to estimate percentiles of the metric data in response to the query. | 12-17-2015 |
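One way the worker-node data in the entry above could be realized is as per-worker histograms that the master merges to estimate percentiles; the histogram representation and bin-edge answer are assumptions, not details from the application.

```python
import bisect

def worker_summary(samples, bin_edges):
    """Each worker condenses its raw metric samples into per-bin
    counts (the worker-node data sent to the master)."""
    counts = [0] * (len(bin_edges) + 1)
    for s in samples:
        counts[bisect.bisect_right(bin_edges, s)] += 1
    return counts

def master_percentile(summaries, bin_edges, p):
    """Merge worker histograms and estimate the p-th percentile as
    the upper edge of the bin containing it (a coarse estimate)."""
    merged = [sum(col) for col in zip(*summaries)]
    total = sum(merged)
    target = p / 100 * total
    cum = 0
    for i, count in enumerate(merged):
        cum += count
        if cum >= target:
            return bin_edges[min(i, len(bin_edges) - 1)]
    return bin_edges[-1]
```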
20150363233 | LEDGER-BASED RESOURCE TRACKING - Disclosed are systems, methods, and non-transitory computer-readable storage media for tracking and managing resource usage through a ledger feature that can trigger complex real-time reactions. The resource tracking can be managed through a ledger module and a ledger data structure. The ledger data structure can be updated each time a task requests a resource. Additionally, as part of the update, the ledger module can verify whether a resource has been over consumed. In response to the detection of an over consumption, the ledger module can set a flag. At some later pointer when the thread is in a stable, well-understood point, the ledger module can check if the flag has been set. If the flag has been set, the ledger module can call the appropriate callback function, which can react to the over consumption in a resource specific manner. | 12-17-2015 |
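The two-phase ledger scheme in the entry above (a cheap flag set on the hot path, with the callback deferred to a stable point) can be sketched like this; all class and method names are hypothetical.

```python
class Ledger:
    """Per-task resource ledger sketch. debit() runs on the hot path
    and only sets a flag on over-consumption; the resource-specific
    callback is deferred until check() is called at a stable,
    well-understood point in the thread."""

    def __init__(self, limits, callbacks):
        self.limits = limits                    # resource -> allowed amount
        self.balance = {r: 0 for r in limits}   # resource -> consumed so far
        self.callbacks = callbacks              # resource -> handler
        self.flagged = set()

    def debit(self, resource, amount):
        """Record consumption; cheap flag-set on over-consumption."""
        self.balance[resource] += amount
        if self.balance[resource] > self.limits[resource]:
            self.flagged.add(resource)

    def check(self):
        """Called at a stable point: react to any flagged resources."""
        for resource in sorted(self.flagged):
            self.callbacks[resource](resource, self.balance[resource])
        self.flagged.clear()
```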
20150363234 | RESOURCE ALLOCATION FOR MIGRATION WITHIN A MULTI-TIERED SYSTEM - A method and system for intelligent tiering is provided. The method includes receiving a request for enabling a tiering process with respect to data. The computer processor retrieves a migration list indicating migration engines associated with the data. Additionally, an entity list of migration entities is retrieved and each migration entity is compared to associated policy conditions. In response, it is determined if matches exist between the migration entities and the associated policy conditions and a consolidated entity list is generated. | 12-17-2015 |
20150363235 | AUTOMATING APPLICATION PROVISIONING FOR HETEROGENEOUS DATACENTER ENVIRONMENTS - Disclosed is a method of managing computer resources in a dynamic computing environment. The method includes identifying available resources from an available pool based on an augmented model, the available pool including resources unallocated resources, allocating the identified available resources in accordance with the augmented model, identifying reserve resources from a reserve pool based on the augmented model, the reserve pool including resources not allocated and not configured, and upon determining the available pool includes a number of resources below a threshold, replenishing the available pool with the identified reserve resources. | 12-17-2015 |
20150363237 | MANAGING RESOURCE CONSUMPTION IN A COMPUTING SYSTEM - Embodiments relate to managing resource consumption in a computing system. An aspect includes providing a resource policy by defining a plurality of threshold values relating to the resource consumption, wherein the resources are consumed by a plurality of user-defined functions performing tasks for a database management system, wherein the user-defined functions are executed by a plurality of processes external to the database management system. Another aspect includes performing an action, as defined by the resource policy, on at least one of the user-defined functions. | 12-17-2015 |
20150363239 | DYNAMIC TASK SCHEDULING METHOD FOR DISPATCHING SUB-TASKS TO COMPUTING DEVICES OF HETEROGENEOUS COMPUTING SYSTEM AND RELATED COMPUTER READABLE MEDIUM - One dynamic task scheduling method includes: receiving a task, wherein the task comprises a kernel and a plurality of data items to be processed by the kernel; dynamically partitioning the task into a plurality of sub-tasks, each having the kernel and a variable-sized portion of the data items; and dispatching the sub-tasks to a plurality of computing devices of a heterogeneous computing system. Another dynamic task scheduling method includes: receiving a task, wherein the task comprises a kernel and a plurality of data items to be processed by the kernel; partitioning the task into a plurality of sub-tasks, each having the kernel and a same fixed-sized portion of the data items; and dynamically dispatching the sub-tasks to a plurality of computing devices of a heterogeneous computing system. | 12-17-2015 |
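The variable-sized partitioning of the first method in the entry above could, for example, size each sub-task in proportion to device speed; the proportional policy is an assumption, since the application leaves the sizing rule open.

```python
def partition_by_speed(data_items, device_speeds):
    """Split a task's data items into variable-sized sub-tasks, one
    per device, sized in proportion to each device's relative speed.
    The kernel is shared; only the data portion varies per sub-task."""
    total = sum(device_speeds.values())
    shares, start = {}, 0
    devices = list(device_speeds)
    for i, dev in enumerate(devices):
        if i == len(devices) - 1:
            end = len(data_items)   # last device takes the remainder
        else:
            end = start + round(len(data_items) * device_speeds[dev] / total)
        shares[dev] = data_items[start:end]
        start = end
    return shares
```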
20150363241 | METHOD AND APPARATUS TO MIGRATE STACKS FOR THREAD EXECUTION - A method and an apparatus that generate a request from a first thread of a process using a first stack for a second thread of the process to execute a code are described. Based on the request, the second thread executes the code using the first stack. Subsequent to the execution of the code, the first thread receives a return of the request using the first stack. | 12-17-2015 |
20150370603 | DYNAMIC PARALLEL DISTRIBUTED JOB CONFIGURATION IN A SHARED-RESOURCE ENVIRONMENT - Dynamically adjusting the parameters of a parallel, distributed job in response to changes to the status of the job cluster. Includes beginning execution of a job in a cluster, receiving cluster status information, determining a job performance impact of the cluster status, reconfiguring job parameters based on the performance impact, and continuing execution of the job using the updated configuration. Dynamically requesting a change to the resources of the job cluster for a parallel, distributed job in response to changes in job status. Includes beginning execution of a job in a cluster, receiving job status information, determining a job performance impact, requesting a changed allocation of cluster resources based on the determined job performance impact, reconfiguring one or more job parameters based on the changed allocation, and continuing execution of the job using the updated configuration. | 12-24-2015 |
20150370604 | INFORMATION PROCESSING DEVICE AND METHOD - An information processing device comprising a processor that selects, from among a plurality of data processing sections that subject data blocks to a predetermined process, a data processing section to which a first data block group with first identification information based on the data blocks is allocated, and divides, when a workload placed on the data processing section exceeds a first threshold, the first data block group allocated to the data processing section into a plurality of second data block groups with second identification information based on the data blocks, and selects, from among the plurality of data processing sections, data processing sections to which the plurality of second data block groups are allocated. | 12-24-2015 |
20150370605 | Resource Sharing Using Process Delay - Methods and systems that reduce the number of instances of a shared resource needed for a processor to perform an operation and/or execute a process without impacting function are provided. A method of processing in a processor is provided. Aspects include determining that an operation to be performed by the processor will require the use of a shared resource. A command can be issued to cause a second operation to not use the shared resources N cycles later. The shared resource can then be used for a first aspect of the operation at cycle X and then used for a second aspect of the operation at cycle X+N. The second operation may be rescheduled according to embodiments. | 12-24-2015 |
20150370608 | SYSTEM AND METHOD FOR PARTITION TEMPLATES IN A MULTITENANT APPLICATION SERVER ENVIRONMENT - In accordance with an embodiment, described herein is a system and method for supporting the use of partition templates in a multitenant application server environment. A partition template, including a partition configurator and/or attributes, can be used to configure partitions deployed to a domain using that partition template. When a request is received to create a new partition, a selected partition template is determined. The partition configurator of that partition template is then used to configure and deploy the partition to the domain at a corresponding virtual target, which in turn is associated with a target system (e.g., a computer server, or a cluster). A plurality of partition templates can be provided, wherein each partition template can include its own partition configurator and/or attributes that can be used to configure partitions deployed to the domain using that partition template, including different configuration attributes for each partition template. | 12-24-2015 |
20150378786 | PHYSICAL RESOURCE ALLOCATION - Allocation of physical resources is achieved by accessing consumption data for each of a plurality of application components executing in one or more virtual machines and consuming a plurality of allocated physical resources. The consumption data is indicative of consumption levels by each of the plurality of application components of each of the plurality of physical resources. Following a determination that a value for a performance metric associated with the application has crossed an associated threshold value, the consumption data is analyzed to identify a consumption level of a first of the plurality of physical resources being consumed by a first of the plurality of application components has deviated from a historical trend for that physical resource. An instruction is then communicated that when executed will cause a change in an allocation level of the first of the plurality of physical resources. | 12-31-2015 |
20150378787 | PROCESSING WORKLOADS IN SINGLE-THREADED ENVIRONMENTS - A computer implemented method for assigning workload slices from a workload to upcoming frames to be processed during the rendering of the upcoming frames. The processing time of upcoming frames and workload slices varies at runtime according to system resources. The method determines an effective frame rate that estimates the duration of an upcoming frame and also determines an effective slice rate that estimates the time it takes to complete an upcoming workload slice. Based on the effective frame rate and the effective slice rate, the method then calculates the slice-to-frame ratio which defines the rate in which slices are assigned to upcoming frames. The slice-to-frame ratio can dynamically change to accommodate for changes to the processing time of rendered frames and completed workload slices. | 12-31-2015 |
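The effective-rate arithmetic in the entry above can be sketched with moving averages; using a plain mean over recent observations is an assumption, since the application does not pin down the estimator.

```python
def slice_to_frame_ratio(frame_durations_ms, slice_durations_ms):
    """Estimate how many workload slices fit into one upcoming frame.

    frame_durations_ms: recent observed frame durations (ms)
    slice_durations_ms: recent observed slice completion times (ms)
    Effective rates are modeled as simple means over the recent
    observations; any smoothed estimate would do. The ratio drifts at
    runtime as new frame/slice timings are observed.
    """
    eff_frame = sum(frame_durations_ms) / len(frame_durations_ms)
    eff_slice = sum(slice_durations_ms) / len(slice_durations_ms)
    return eff_frame / eff_slice   # slices assignable per frame
```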
20150378789 | SYSTEM AND METHOD FOR PROVIDING ADVANCED RESERVATIONS IN A COMPUTE ENVIRONMENT - A system and method are disclosed for dynamically reserving resources within a cluster environment. The method embodiment of the invention comprises receiving a request for resources in the cluster environment, monitoring events after receiving the request for resources and based on the monitored events, dynamically modifying at least one of the request for resources and the cluster environment. | 12-31-2015 |
20160004564 | METHOD FOR TASK SCHEDULING AND ELECTRONIC DEVICE USING THE SAME - A method for task scheduling and an electronic device using the same are provided. The method for scheduling tasks in an electronic device includes assigning a task to one of first processing units functionally connected to the electronic device, measuring a task load of the task, and controlling migration of the task to one of second processing units functionally connected to the electronic device based on the task load. | 01-07-2016 |
20160004567 | SCHEDULING APPLICATIONS IN A CLUSTERED COMPUTER SYSTEM - Disclosed is a method for scheduling applications for a clustered computer system having a plurality of computers and at least one resource, the clustered computer system executing one or more applications. A method includes: monitoring hardware counters in at least one of the resources and the plurality of computers of the clustered computer system for each of the applications; responsive to said monitoring, determining the utilization of at least one of the resources and the plurality of computers of the clustered computer system by each of the applications; for each of the applications, storing said utilization of at least one of the resource and plurality of computers of the clustered computer system; and upon receiving a request to schedule an application on one of said computers, scheduling a computer to execute the application based on stored utilization for the application and stored utilizations of other applications executing on the computers. | 01-07-2016 |
20160004568 | DATA PROCESSING SYSTEM AND METHOD - A method of optimizing an application in a system having a plurality of processors, the method comprising: analyzing the application for a first period to obtain a first activity analysis; selecting one of the processors based on the first activity analysis for running the application; and binding the application to the selected processor. | 01-07-2016 |
20160004569 | METHOD FOR ASSIGNING PRIORITY TO MULTIPROCESSOR TASKS AND ELECTRONIC DEVICE SUPPORTING THE SAME - A method for determining task priorities in an electronic device is provided. The method includes receiving, at the electronic device, a request to perform a task, identifying a threshold parameter and a weighted value in accordance with a type of the requested task, measuring the threshold parameter of the task based on the identified weighted value, and assigning the requested task to one of a first operational unit and a second operational unit based on the measured threshold parameter and weighted value. | 01-07-2016 |
20160004570 | PARALLELIZATION METHOD AND ELECTRONIC DEVICE - A parallelization method includes: obtaining profiling information for each job step of a job by performing profiling of the job to be executed on an electronic device; determining at least one job step to be parallelized on a central processing unit (CPU) and at least one heterogeneous unit of the electronic device among a plurality of job steps of the job based on the profiling information; determining a unit to process each unit data among the CPU and the heterogeneous unit based on the profiling information, with respect to the determined at least one job step; and determining a unit to process each task among the CPU and the heterogeneous unit based on the profiling information, with respect to at least one job step including a plurality of separately executable tasks in the determined at least one job step. | 01-07-2016 |
20160004574 | METHOD AND APPARATUS FOR ACCELERATING SYSTEM RUNNING - The invention discloses a method and apparatus for accelerating system running. The method comprises: an acceleration enabling step of constructing and displaying an acceleration panel containing a one-key acceleration control when a preset enabling condition is triggered; and an acceleration execution step of detecting the one-key acceleration control within the acceleration panel in real time, and swapping memory occupied by all currently running processes to virtual memory to assist the system in running acceleration when the one-key acceleration control is triggered. The method and the apparatus of the invention can organize the system running condition for a user at a fastest speed, free redundant resources, increase the real-time system running speed for the user, and solve the problem in the prior art that the system running speed cannot be increased effectively. | 01-07-2016 |
20160011906 | COMPUTING SESSION WORKLOAD SCHEDULING AND MANAGEMENT OF PARENT-CHILD TASKS | 01-14-2016 |
20160011907 | Configurable Per-Task State Counters For Processing Cores In Multi-Tasking Processing Systems | 01-14-2016 |
20160011908 | TASK ALLOCATION IN A COMPUTING ENVIRONMENT | 01-14-2016 |
20160011909 | PROCESSING CONTROL SYSTEM, PROCESSING CONTROL METHOD, AND PROCESSING CONTROL PROGRAM | 01-14-2016 |
20160011910 | Method and Device for Executing a Function Between a Plurality of Electronic Devices | 01-14-2016 |
20160011911 | MANAGING PARALLEL PROCESSES FOR APPLICATION-LEVEL PARTITIONS | 01-14-2016 |
20160019094 | SYSTEM AND METHOD FOR ELECTRONIC WORK PREDICTION AND DYNAMICALLY ADJUSTING SERVER RESOURCES - A computer-implemented system and method facilitate dynamically allocating server resources. The system and method include determining a current queue distribution, referencing historical information associated with execution of at least one task, and predicting, based on the current queue distribution and the historical information, a total number of tasks of various task types that are to be executed during the time period in the future. Based on this prediction, a resource manager determines a number of servers that should be instantiated for use during the time period in the future. | 01-21-2016 |
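The server-count prediction in the entry above amounts to dividing forecast work by per-server capacity; the capacity model below is an assumption, not the application's own formula.

```python
import math

def servers_needed(predicted_tasks, avg_task_seconds, period_seconds,
                   per_server_concurrency=1):
    """Predict how many servers to instantiate for an upcoming period.

    predicted_tasks: forecast task count for the period (from queue
        distribution plus historical execution data)
    avg_task_seconds: mean execution time per task
    period_seconds: length of the upcoming period
    per_server_concurrency: tasks a server can run at once
    """
    work = predicted_tasks * avg_task_seconds           # total task-seconds
    capacity = period_seconds * per_server_concurrency  # per-server budget
    return math.ceil(work / capacity)
```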
20160019095 | ASSIGNING A PORTION OF PHYSICAL COMPUTING RESOURCES TO A LOGICAL PARTITION - A data processing system includes physical computing resources that include a plurality of processors. The plurality of processors include a first processor having a first processor type and a second processor having a second processor type that is different than the first processor type. The data processing system also includes a resource manager to assign portions of the physical computing resources to be used when executing logical partitions. The resource manager is configured to assign a first portion of the physical computing resources to a logical partition, to determine characteristics of the logical partition, the characteristics including a memory footprint characteristic, to assign a second portion of the physical computing resources based on the characteristics of the logical partition, and to dispatch the logical partition to execute using the second portion of the physical computing resources. | 01-21-2016 |
20160026499 | SYSTEMS, METHODS AND COMPUTER PROGRAM PRODUCTS FOR ADAPTIVE SELF-ORGANIZING SERVICE FOR ONLINE TASKS - Provided are systems, methods and computer program products. Embodiments may include methods that include receiving a query that includes multiple requests, each including target data and corresponding to different respective attributes of the query, and selectively and iteratively executing a portion of multiple elemental computer programs responsive to different ones of the requests. Ones of the elemental computer programs are configured to be executed to provide a portion of target values corresponding to respective ones of the requests. More than one of the elemental computer programs are executed to provide, in aggregate, target values corresponding to the target data. | 01-28-2016 |
20160026500 | System and Method of Providing System Jobs Within a Compute Environment - The disclosure relates to systems, methods and computer-readable media for using system jobs for performing actions outside the constraints of batch compute jobs submitted to a compute environment such as a cluster or a grid. The method for modifying a compute environment from a system job includes associating a system job with a queuable object, triggering the system job based on an event and performing arbitrary actions on resources outside of compute nodes in the compute environment. The queuable objects include objects such as batch compute jobs or job reservations. The events that trigger the system job may be time driven, such as ten minutes prior to completion of the batch compute job, or dependent on other actions associated with other system jobs. The system jobs may also be utilized to perform rolling maintenance on a node by node basis. | 01-28-2016 |
20160026506 | System and method for managing excessive distribution of memory - Disclosed are a system and a method for managing excessive distribution of memory. Based on a page-sharing technology, the software types of the virtual machines running on the respective servers in a cluster are collected, and virtual machines running similar software types are migrated to a specified server. This improves the page-sharing effect of the virtual machines and the excessive distribution (over-commitment) of memory, ensures that the bearing capability of the servers in the system is not wasted, optimally combines the utilization rates of memory and resources across the whole system, and distributes the memory of the whole cluster better. Moreover, fewer servers run, saving energy and running cost, placing less pressure on the environment and reducing carbon dioxide emissions; the disclosure therefore has a great social and economic effect. | 01-28-2016 |
20160026507 | POWER AWARE TASK SCHEDULING ON MULTI-PROCESSOR SYSTEMS - Methods and apparatus for power-based scheduling of tasks among processors are disclosed. A method may include executing processor executable code on one or more of the processors to prompt a plurality of executable tasks for scheduling among the processors. Processor-demand information is obtained about the plurality of executable tasks in addition to capacity information for each of the processors. Processor power information for each of the processors is also obtained, and the plurality of executable tasks are scheduled on the lowest power processors where processor-demands of the tasks are satisfied. | 01-28-2016 |
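The lowest-power-first policy this abstract describes can be approximated by a greedy assignment. The following is a minimal sketch under illustrative assumptions (the task/processor fields, first-fit-by-power ordering, and largest-task-first tie-breaking are mine, not the patented method):

```python
def schedule_tasks(tasks, processors):
    """Assign each task to the lowest-power processor that can satisfy it.

    tasks: list of (task_id, demand) pairs.
    processors: list of dicts with 'id', 'capacity', 'power' keys
    (illustrative names). Returns {task_id: processor_id}, or None if
    some task cannot be placed.
    """
    # Track remaining capacity per processor.
    remaining = {p['id']: p['capacity'] for p in processors}
    # Consider processors in order of increasing power draw.
    by_power = sorted(processors, key=lambda p: p['power'])
    assignment = {}
    # Place larger demands first so they still fit somewhere.
    for task_id, demand in sorted(tasks, key=lambda t: -t[1]):
        for p in by_power:
            if remaining[p['id']] >= demand:
                remaining[p['id']] -= demand
                assignment[task_id] = p['id']
                break
        else:
            return None  # no processor can satisfy this task's demand
    return assignment
```

A task is thus steered to the cheapest (power-wise) processor whose spare capacity covers its demand, spilling to more power-hungry processors only when necessary.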
20160034306 | METHOD AND SYSTEM FOR A GRAPH BASED VIDEO STREAMING PLATFORM - A method implemented in an electronic device serving as an orchestrator managing video and audio stream processing of a streaming platform system is disclosed. The method includes the electronic device receiving a request to process a video source and creating a task graph based on the request, where the task graph is a directed acyclic graph of tasks for processing the video source, where each node of the task graph represents a processing task, and where each edge of the task graph represents a data flow across two processing tasks and corresponding input and output of each processing task. The method also includes the electronic device estimating resource requirements of each processing task, and splitting the task graph into a plurality of subsets, wherein each subset corresponds to a task group to be executed by one or more workers of a plurality of processing units of the streaming platform system. | 02-04-2016 |
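Splitting a task graph into worker-sized groups, as this abstract describes, can be sketched by walking the DAG in topological order and packing tasks by estimated cost. The per-task costs, the capacity limit, and the greedy packing rule below are illustrative assumptions, not the patented splitting algorithm:

```python
from graphlib import TopologicalSorter  # Python 3.9+

def split_task_graph(edges, cost, worker_capacity):
    """Split a DAG of processing tasks into groups by estimated resource need.

    edges: dict node -> set of predecessor nodes (graphlib convention).
    cost: dict node -> estimated resource requirement (illustrative units).
    worker_capacity: maximum total cost per task group.
    Returns a list of task groups (lists of nodes) in topological order.
    """
    order = list(TopologicalSorter(edges).static_order())
    groups, current, used = [], [], 0
    for node in order:
        # Start a new group when the next task would overflow this one.
        if current and used + cost[node] > worker_capacity:
            groups.append(current)
            current, used = [], 0
        current.append(node)
        used += cost[node]
    if current:
        groups.append(current)
    return groups
```

Because the walk is topological, every group depends only on earlier groups, so groups can be handed to workers in order.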
20160034307 | MODIFYING A FLOW OF OPERATIONS TO BE EXECUTED IN A PLURALITY OF EXECUTION ENVIRONMENTS - A flow of operations is to be executed in a plurality of execution environments according to a distribution. In response to determining that the distribution is unable to achieve at least one criterion, the distribution is modified according to at least one policy that specifies at least one action to apply to the flow of operations in response to a corresponding at least one condition relating to a characteristic of the flow of operations. | 02-04-2016 |
20160034308 | BACKGROUND TASK RESOURCE CONTROL - Among other things, one or more techniques and/or systems are provided for controlling resource access for background tasks. For example, a background task created by an application may utilize a resource (e.g., CPU cycles, bandwidth usage, etc.) by consuming resource allotment units from an application resource pool. Once the application resource pool is exhausted, the background task is generally restricted from utilizing the resource. However, the background task may also utilize global resource allotment units from a global resource pool shared by a plurality of applications to access the resource. Once the global resource pool is exhausted, unless the background task is a guaranteed background task which can consume resources regardless of resource allotment states of resource pools, the background task may be restricted from utilizing the resource until global resource allotment units within the global resource pool and/or resource allotment units within the application resource pool are replenished. | 02-04-2016 |
20160034309 | SYSTEM AND METHOD FOR CONTEXT-AWARE ADAPTIVE COMPUTING - The present disclosure relates to systems and methods for context-aware adaptive computing. In one embodiment, the present disclosure includes a method comprising receiving a request at a first information handling system (IHS) to perform an application computation. The method also includes determining a user's context, the user operating the first IHS, and ascertaining a battery state of the first IHS. The method further includes allocating the application computation between the first IHS and a second IHS based at least on the user's context and the battery state of the first IHS. The present disclosure also includes associated systems and apparatuses. | 02-04-2016 |
20160034310 | JOB ASSIGNMENT IN A MULTI-CORE PROCESSOR - Technologies are generally described for methods and systems effective to assign a job to be executed in a multi-core processor that includes a first set of cores with a first size and a second set of cores with a second size different from the first size. The multi-core processor may receive the job at an arrival time and may determine a job arrival rate based on the arrival time. The job arrival rate may indicate a frequency that the multi-core processor receives a plurality of jobs. The multi-core processor may select the first set of cores and may select a degree of parallelism based on the job arrival rate and based on a performance metric relating to execution of the job on the first set of cores. In response to the selection, the multi-core processor may assign the job to be executed on the first set of cores. | 02-04-2016 |
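The job arrival rate that drives core-set selection in this abstract can be estimated from arrival timestamps over a sliding window. A minimal sketch, where the window length and the jobs-per-second definition are my assumptions:

```python
from collections import deque

class ArrivalRateTracker:
    """Track job arrival rate over a sliding time window, as one basis for
    choosing a core set and a degree of parallelism (illustrative sketch)."""

    def __init__(self, window_s):
        self.window_s = window_s
        self.arrivals = deque()

    def record(self, t):
        """Record a job arrival at time t (seconds, monotonically increasing)."""
        self.arrivals.append(t)
        # Drop arrivals that have fallen out of the window.
        while self.arrivals and self.arrivals[0] < t - self.window_s:
            self.arrivals.popleft()

    def rate(self):
        """Jobs per second over the current window."""
        return len(self.arrivals) / self.window_s
```

A scheduler could compare this rate against thresholds to decide whether the big or the little core set, and how many cores of it, should serve the next job.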
20160034312 | EMPIRICAL DETERMINATION OF ADAPTER AFFINITY IN HIGH PERFORMANCE COMPUTING (HPC) ENVIRONMENT - A method, apparatus and program product utilize an empirical approach to determine the locations of one or more IO adapters in an HPC environment. Performance tests may be run using a plurality of candidate mappings that map IO adapters to various locations in the HPC environment, and based upon the results of such testing, speculative adapter affinity information may be generated that assigns one or more IO adapters to one or more locations to optimize adapter affinity performance for subsequently-executed tasks. | 02-04-2016 |
20160034313 | EMPIRICAL DETERMINATION OF ADAPTER AFFINITY IN HIGH PERFORMANCE COMPUTING (HPC) ENVIRONMENT - A method, apparatus and program product utilize an empirical approach to determine the locations of one or more IO adapters in an HPC environment. Performance tests may be run using a plurality of candidate mappings that map IO adapters to various locations in the HPC environment, and based upon the results of such testing, speculative adapter affinity information may be generated that assigns one or more IO adapters to one or more locations to optimize adapter affinity performance for subsequently-executed tasks. | 02-04-2016 |
20160034316 | TIME-VARIANT SCHEDULING OF AFFINITY GROUPS ON A MULTI-CORE PROCESSOR - Methods and systems for scheduling applications on a multi-core processor are disclosed, which may be based on association of processor cores, application execution environments, and authorizations that permits efficient and practical means to utilize the simultaneous execution capabilities provided by multi-core processors. The algorithm may support definition and scheduling of variable associations between cores and applications (i.e., multiple associations can be defined so that the cores an application is scheduled on can vary over time as well as what other applications are also assigned to the same cores as part of an association). The algorithm may include specification and control of scheduling activities, permitting preservation of some execution capabilities of a multi-core processor for future growth, and permitting further evaluation of application requirements against the allocated execution capabilities. | 02-04-2016 |
20160041846 | PROVIDING CONFIGURABLE WORKFLOW CAPABILITIES - Techniques are described for providing clients with access to functionality for creating, configuring and executing defined workflows that manipulate source data in defined manners, such as under the control of a configurable workflow service that is available to multiple remote clients over one or more public networks. A defined workflow for a client may, for example, include multiple interconnected workflow components that are specified by the client and that each are configured to perform one or more types of data manipulation operations on a specified type of input data. The configurable workflow service may further execute the defined workflow at one or more times and in one or more manners, such as in some situations by provisioning multiple computing nodes provided by the configurable workflow service to each implement at least one of the workflow components for the defined workflow. | 02-11-2016 |
20160041848 | Methods and Apparatuses for Determining a Leak of Resource and Predicting Usage Condition of Resource - A method and an apparatus for determining a leak of a program running resource are disclosed that relate to the field of computer applications. The method for predicting a usage condition of a program running resource includes collecting program running resource usage at least once within each program running resource usage period; decomposing the collected program running resource usage into different resource components; for data contained in each resource component, determining a prediction function for the resource component; determining an overall prediction function for a program running resource according to the determined prediction functions for all the resource components; and predicting a usage condition of the program running resource based on the determined overall prediction function. | 02-11-2016 |
20160041849 | SYSTEM, METHOD AND PRODUCT FOR TASK ALLOCATION - A method comprising calculating for each agent, an average quality of tasks that were completed in the past by the agent; allocating tasks to the agents, wherein said allocating comprises selecting an agent to perform a task, the selection is based on the average quality of the agent; in response to the agent completing the task, computing a reward for the agent, wherein the reward is calculated according to a total contribution of the agent to the system by completing the task; whereby biasing said allocating to prefer allocating tasks to a first agent over a second agent, if a quality of the first agent is greater than a quality of the second agent, wherein said biasing is not dependent on prior knowledge of the qualities. Optionally, the agents choose whether or not to perform a task and an agent's quality affects the contributions of the agent performing tasks. | 02-11-2016 |
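The quality-biased allocation in this abstract — preferring agents with higher observed average quality, without prior knowledge of the qualities — resembles a bandit-style rule. The epsilon-greedy exploration below is my addition (the abstract only requires the bias), and all names are illustrative:

```python
import random

def pick_agent(agents, epsilon=0.1, rng=random):
    """Bias task allocation toward the agent with the highest observed
    average quality, with occasional exploration.

    agents: dict agent_id -> list of quality scores from completed tasks.
    """
    if rng.random() < epsilon:
        # Occasionally explore so new agents can build a track record.
        return rng.choice(sorted(agents))

    def avg(scores):
        return sum(scores) / len(scores) if scores else 0.0

    # Greedy choice: the agent with the best average quality so far.
    return max(sorted(agents), key=lambda a: avg(agents[a]))
```

With `epsilon=0` the choice is fully greedy, which matches the "prefer the higher-quality agent" bias the abstract states.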
20160048413 | PARALLEL COMPUTER SYSTEM, MANAGEMENT APPARATUS, AND CONTROL METHOD FOR PARALLEL COMPUTER SYSTEM - The parallel computer system includes a plurality of information processing apparatuses and a management apparatus to control the information processing apparatuses. Each of the plurality of information processing apparatuses outputs a resource usage quantity variation with respect to a job at a predetermined time interval. The management apparatus generates an execution history containing an attribute of the job and the resource usage quantity variation every time the job is executed, estimates a resource usage quantity of a new job, based on resource usage quantity variations contained in an execution history of a reference job matching the new job in terms of the attribute within a predetermined degree, and specifies the information processing apparatus to be assigned the new job, based on the estimated resource usage quantity. | 02-18-2016 |
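Estimating a new job's resource usage from the history of attribute-matching reference jobs, as this abstract describes, can be sketched as below. The similarity rule (count of shared attribute values against a threshold) and the use of peak usage are my illustrative stand-ins for the patent's "matching within a predetermined degree":

```python
def estimate_usage(history, new_job_attrs, match_threshold):
    """Estimate a new job's peak resource usage from matching history.

    history: list of (attrs, usage_samples) pairs; attrs are dicts.
    A reference job "matches" when it shares at least match_threshold
    attribute values with the new job (an illustrative similarity rule).
    Returns the mean peak usage over matching jobs, or None if none match.
    """
    peaks = []
    for attrs, samples in history:
        shared = sum(1 for k, v in new_job_attrs.items() if attrs.get(k) == v)
        if shared >= match_threshold and samples:
            peaks.append(max(samples))
    if not peaks:
        return None
    return sum(peaks) / len(peaks)
```

The management apparatus could then pick the information processing apparatus whose free capacity covers the returned estimate.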
20160048414 | DYNAMICALLY SPLITTING JOBS ACROSS MULTIPLE AGNOSTIC PROCESSORS IN WIRELESS SYSTEM - Dynamically splitting a job in a wireless system between a processor and other remote devices may involve evaluating a job that a wireless mobile communication (WMC) device may be requested to perform. The job may be made of one or more tasks. The WMC device may perform the evaluation by determining the availability of at least one local hardware resource of the wireless mobile communication device for processing the requested job. The WMC device may apportion one or more tasks making up the requested job between the wireless mobile communication device and a remote device. The apportioning may be based on the availability of the at least one local hardware resource. | 02-18-2016 |
20160048415 | Systems and Methods for Auto-Scaling a Big Data System - Systems and methods for automatically scaling a big data system are disclosed. Methods may include: determining, at a first time, a first optimal number of nodes for a cluster to adequately process a request; assigning an amount of nodes equal to the first optimal number; determining a rate of progress of the request; determining, at a second time based on the rate of progress a second optimal number of nodes; and modifying the number of nodes assigned to the cluster to equal the second optimal number. Systems may include: a cluster manager, to add and/or remove nodes; a big data system, to process requests that utilize the cluster and nodes, and an automatic scaling cluster manager, including: a big data interface, for communicating with the big data system; a cluster manager interface, for communicating with a cluster manager instructions for adding and/or removing nodes from a cluster used to process a request; and a cluster state machine. | 02-18-2016 |
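The re-sizing step in this auto-scaling abstract — recomputing the optimal node count from the observed rate of progress — can be sketched with a simple proportional model. Linear scaling of throughput with node count is an explicit simplifying assumption here, not a claim from the patent:

```python
import math

def rescale_cluster(current_nodes, fraction_done, elapsed_s, deadline_s):
    """Recompute cluster size from observed progress (hedged sketch).

    Assumes throughput scales linearly with node count. Returns the node
    count needed to finish the remaining work by the deadline.
    """
    if fraction_done <= 0 or elapsed_s <= 0:
        return current_nodes  # no progress signal yet; keep the cluster as-is
    # Observed per-node throughput, in fraction-of-job per second.
    rate_per_node = fraction_done / (elapsed_s * current_nodes)
    remaining_s = deadline_s - elapsed_s
    if remaining_s <= 0:
        raise ValueError("deadline already passed")
    needed = (1.0 - fraction_done) / (remaining_s * rate_per_node)
    return max(1, math.ceil(needed))
```

A cluster manager would call this at the "second time" the abstract mentions and add or remove nodes to match the returned count.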
20160055034 | STREAM PROCESSING USING A CLIENT-SERVER ARCHITECTURE - Responsive to a client request, a processing thread for handling the client request is assigned. Responsive to the client request, a server request is sent to a stream server configured to interact with a plurality of stream processing nodes. The processing thread is maintained in an idle state pending a write response message from the stream server. The processing thread is returned to an active state responsive to receiving the write response message including a stream processing result from the stream server. A client response including the stream processing result is sent to the client. | 02-25-2016 |
20160055037 | ANALYSIS CONTROLLER, ANALYSIS CONTROL METHOD AND COMPUTER-READABLE MEDIUM - An analysis controller, method and computer readable medium for determining an allocation pattern representing an allocation of at least one analysis unit, among a plurality of analysis units, to one or more processing devices, among a plurality of processing devices, on a basis of an estimated load for each of the at least one analysis unit for each of one or more time spans, allowable delay time for each of the at least one analysis unit, and a processing capacity of each of the plurality of processing devices. | 02-25-2016 |
20160062795 | MULTI-LAYER QOS MANAGEMENT IN A DISTRIBUTED COMPUTING ENVIRONMENT - A technique for multi-layer quality of service (QoS) management in a distributed computing environment includes: receiving a workload to run in a distributed computing environment; identifying a workload quality of service (QoS) class for the workload; translating the workload QoS class to a storage level QoS class; scheduling the workload to run on a compute node of the environment; communicating the storage level QoS class to a workload execution manager of the compute node; communicating the storage level QoS class to one or more storage managers of the environment, the storage managers managing storage resources in the environment; and extending, by the storage managers, the storage level QoS class to the storage resources to support the workload QoS class. | 03-03-2016 |
20160062798 | SYSTEM-ON-CHIP INCLUDING MULTI-CORE PROCESSOR AND THREAD SCHEDULING METHOD THEREOF - A scheduling method of a system-on-chip including a multi-core processor includes detecting a scheduling request of a thread to be executed in the multi-core processor, and detecting a calling thread having the same context as the scheduling-requested thread among threads that are being executed in the multi-core processor. The method includes reassigning or resetting the scheduling-requested thread according to performance of a core to execute the calling thread having the same context. | 03-03-2016 |
20160062800 | CONTROLLING DATA PROCESSING TASKS - Information representative of a graph-based program specification has a plurality of components, each of which corresponds to a task, and directed links between ports of said components. A program corresponding to said graph-based program specification is executed. A first component includes a first data port, a first control port, and a second control port. Said first data port is configured to receive data to be processed by a first task corresponding to said first component, or configured to provide data that was processed by said first task corresponding to said first component. Executing a program corresponding to said graph-based program specification includes: receiving said first control information at said first control port, in response to receiving said first control information, determining whether or not to invoke said first task, and after receiving said first control information, providing said second control information from said second control port. | 03-03-2016 |
20160062801 | IMAGE FORMING APPARATUS AND RESOURCE MANAGEMENT METHOD - The upper limit value of a resource amount is set for a group introduced as a fragment bundle for a resource service. At the time of introducing a group, the amount of a resource used by an application belonging to the group can be transferred to management for each group. | 03-03-2016 |
20160062803 | SELECTING A RESOURCE FROM A SET OF RESOURCES FOR PERFORMING AN OPERATION - The described embodiments comprise a selection mechanism that selects a resource from a set of resources in a computing device for performing an operation. In some embodiments, the selection mechanism performs a lookup in a table selected from a set of tables to identify a resource from the set of resources. When the resource is not available for performing the operation and until another resource is selected for performing the operation, the selection mechanism identifies a next resource in the table and selects the next resource for performing the operation when the next resource is available for performing the operation. | 03-03-2016 |
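The fallback walk this abstract describes — look up an entry in a selection table, and if that resource is busy, keep trying the next entries until one is available — can be sketched directly. The circular traversal order is my assumption; the abstract only specifies moving to "a next resource in the table":

```python
def select_resource(table, start_index, is_available):
    """Walk a selection table from a looked-up entry until an available
    resource is found (sketch of the fallback described above).

    table: ordered list of resource ids.
    start_index: the entry produced by the initial table lookup.
    is_available: predicate reporting whether a resource can be used now.
    Returns the selected resource id, or None if nothing is available.
    """
    n = len(table)
    for offset in range(n):
        # Wrap around so every entry is considered exactly once.
        candidate = table[(start_index + offset) % n]
        if is_available(candidate):
            return candidate
    return None
```

In hardware, the same idea is typically realized as a rotating priority or round-robin arbiter over the table entries.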
20160062804 | MANAGING STATE FOR CONTROLLING TASKS - Information representative of a graph-based program specification has components, and directed links between ports of said components, defining a dependency between said components. A directed link exists between a port of a first component and a port of a second component. The first component specifies first-component execution code that when compiled enables execution of a first task. The second component specifies second-component execution code that when compiled enables execution of a second task. Compiling the graph-based program specification includes grafting first control code to said first-component execution code, which changes a state of said second component to a pending state, an active state, or a suppressed state. Based on said state, said first control code causes at least one of: invoking said second component if said state changes from pending to active, or suppressing said second component if said state changes from pending to suppressed. | 03-03-2016 |
20160070596 | Workflow Execution System and Method for Cloud Environment - This application relates to a workflow execution system and method for processing and executing at least one node within a cloud environment. A process identity (ID) with respect to a request message can be obtained to thereby identify a process definition from a deployment table of the workflow execution module. An instance with respect to the current process can be created for execution of the node of a workflow. An outgoing sequence flow with respect to the executing node can be obtained to thereby identify a target node identity (ID) with respect to the outgoing sequence flow of the node. A definition with respect to the executing node can be extracted from the process definition using the target node identity (ID) to thereby effectively execute the workflow within a cloud environment. | 03-10-2016 |
20160070598 | Transparent Non-Uniform Memory Access (NUMA) Awareness - A computing device having a non-uniform memory access (NUMA) architecture implements a method to attach a resource to an application instance that is unaware of a NUMA topology of the computing device. The method includes publishing the NUMA topology of the computing device, where the published NUMA topology indicates for one or more resources of the computing device, a NUMA socket associated with each of the one or more resources of the computing device. The method further includes grouping one or more resources that have a same attribute into a resource pool, receiving a request from the application instance for a resource from the resource pool, determining a central processing unit (CPU) assigned to execute the application instance, where the CPU is associated with a NUMA socket, choosing a resource from the resource pool that is associated with a NUMA socket that is closest to the NUMA socket associated with the CPU assigned to execute the application instance, and attaching the chosen resource to the application instance. | 03-10-2016 |
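The NUMA-aware choice in this abstract — pick the pool resource whose socket is nearest the socket of the CPU executing the application instance — reduces to a minimum over a socket-distance table. The tuple layout and the distance-dict encoding below are illustrative assumptions:

```python
def choose_numa_resource(pool, cpu_socket, distance):
    """Pick the pool resource on the NUMA socket nearest the executing CPU.

    pool: list of (resource_id, socket) pairs from the published topology.
    cpu_socket: socket of the CPU assigned to the application instance.
    distance: dict mapping (socket_a, socket_b) -> hop cost (illustrative;
    real systems expose this via firmware tables such as the ACPI SLIT).
    """
    if not pool:
        raise ValueError("resource pool is empty")
    # Minimize the distance from the CPU's socket to the resource's socket.
    return min(pool, key=lambda rs: distance[(cpu_socket, rs[1])])[0]
```

The chosen resource would then be attached to the application instance, which itself never needs to see the topology.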
20160070600 | METHOD OF EXECUTION OF TASKS IN A CRITICAL REAL-TIME SYSTEM - Method for executing a task composed of a set of sequential and alternative processes. The method includes the steps of: a) assigning to each process a hardware resource need and time constraint; b) allocating to each process a time-slot having a duration corresponding to the time constraint of the process; c) identifying a branch point at which is decided the execution of one or other of two alternative processes; d) allocating to the two alternative processes a common time-slot; e) assigning to the common time-slot a resource need equal to the larger of the resource needs of the two alternative processes; f) iterating from step c) for each branch point; g) organizing the resulting time-slots in an execution template associated with the task; and h) configuring real-time multitasking system to constrain the execution of the task according to the resource needs assigned to the time slots of the execution template. | 03-10-2016 |
20160070603 | TASK ALLOCATION METHOD, TASK ALLOCATION APPARATUS, AND NETWORK-ON-CHIP - A task allocation method and a chip are disclosed. The method includes: determining a number of threads included in a to-be-processed task; determining, in a network-on-chip formed by a multi-core processor, a continuous area formed by routers-on-chip corresponding to multiple continuous idle processor cores whose number is equal to the number of the threads; if the area is a non-rectangular area, determining a rectangular area extended from the area; and if the predicted traffic of each router-on-chip that is connected to a non-idle processor core and in the extended rectangular area does not exceed a preset threshold, allocating the multiple threads of the to-be-processed task to the idle processor cores in the area. According to the task allocation method provided in the embodiments of the present invention, problems of large hardware overheads, low network throughput, and low system utilization are avoided. | 03-10-2016 |
20160077876 | METHOD AND APPARATUS FOR OPTIMIZING RUNNING OF BROWSER - The invention discloses a method and apparatus for optimizing the running of a browser. The method comprises: obtaining information of browser processes at the browser side and their first resource occupation information; obtaining, through a browser interface, information of currently running processes of the computer system where the browser is located and their second resource occupation information; loading and displaying, at the browser side, information of at least a part of the processes which meet a preset resource occupation optimization setting, and/or their resource occupation information; and, according to an optimization instruction triggered by a user, performing process optimization processing on the displayed processes. By the invention, the resource occupation of all processes to be optimized can be presented to a user, facilitating the user's selection of processes for optimization; the optimization processing is then performed on the processes selected by the user, increasing the running speed of the browser. | 03-17-2016 |
20160077879 | ADAPTIVE ARCHITECTURE FOR A MOBILE APPLICATION BASED ON RICH APPLICATION, PROCESS, AND RESOURCE CONTEXTS AND DEPLOYED IN RESOURCE CONSTRAINED ENVIRONMENTS - A method for adapting execution of an application on a mobile device may be performed by a mobile device including a processor and a memory. The method may include receiving an application context, a process context, and one other context. The method also includes analyzing at least one of the application context or the process context together with the one other context. The method also includes dynamically adapting execution of the application on the mobile device based on the analysis. Adapting execution of the application may include transferring processing related to the application to a backend server for processing. | 03-17-2016 |
20160077880 | Portfolio Generation Based on a Dynamic Allocation of Resources - Portfolio generation based on a dynamic allocation of resources is disclosed. One example is a system including a data processor, a resource allocator, and a portfolio planner. The data processor accesses resource allocation data including a plurality of projects, a portfolio shaping preference, a plurality of constraints and a plurality of objectives, and activates, based on the portfolio shaping preference, a sub-plurality of the plurality of constraints and a sub-plurality of the plurality of objectives. The resource allocator generates at least one project portfolio based on the sub-plurality of constraints and the sub-plurality of objectives, wherein the at least one project portfolio includes a sub-plurality of the plurality of projects, and is based on a dynamic allocation of the resource allocation data. The portfolio planner schedules the sub-plurality of the plurality of projects, and provides the at least one project portfolio to a computing device via a graphical user interface. | 03-17-2016 |
20160077881 | MANAGING A WORKLOAD IN AN ENVIRONMENT - A system and computer-implemented method for managing a workload in an environment is disclosed. The method may include establishing a shadow workload on a shadow computer environment, wherein the shadow workload is a copy of an original workload. The method may include communicating a shadow input for the shadow workload, wherein the shadow input is a copy of an original input for the original workload. The method may also include collecting an original output from the original workload and a shadow output from the shadow workload. The method may also include determining, by comparing the original output from the original workload with the shadow output from the shadow workload, whether the shadow computer environment is configured to operate the original workload. | 03-17-2016 |
20160077882 | SCHEDULING SYSTEM, SCHEDULING METHOD, AND RECORDING MEDIUM - The present invention provides a scheduling system capable of more efficiently exploiting the processing performance possessed by a resource. This scheduling system has a scheduler for reserving a second communication channel as a second communication resource in accordance with a fifth instruction for reserving the second communication channel from a first communication channel, the second communication channel being capable of transmitting/receiving first data between a memory and an accelerator memory, the first data being processed by a task, and the fifth instruction being included in tasks processed by a calculation processing device having such resources as a many-core accelerator, the accelerator memory, a processor, a memory, and the first communication channel, the first communication channel being capable of transmitting/receiving data between the many-core accelerator and the processor. The scheduler also determines a specific resource on the basis of the first data transmitted/received via the second communication channel, in accordance with a first instruction for reserving a resource. | 03-17-2016 |
20160077883 | Efficient Resource Utilization in Data Centers - A method includes identifying high-availability jobs and low-availability jobs that demand usage of resources of a distributed system. The method includes determining a first quota of the resources available to low-availability jobs as a quantity of the resources available during normal operations, and determining a second quota of the resources available to high-availability jobs as a quantity of the resources available during normal operations minus a quantity of the resources lost due to a tolerated event. The method includes executing the jobs on the distributed system and constraining a total usage of the resources by both the high-availability jobs and the low-availability jobs to the quantity of the resources available during normal operations. | 03-17-2016 |
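The quota rule in this abstract is simple arithmetic: low-availability jobs may draw on all capacity available during normal operations, while high-availability jobs are capped at what would survive a tolerated event. A minimal sketch, with unit names as my assumption:

```python
def compute_quotas(total_units, tolerated_loss_units):
    """Quotas per the scheme above: low-availability jobs may use all
    normal capacity; high-availability jobs only the capacity that
    survives a tolerated event (e.g. loss of one power domain)."""
    if tolerated_loss_units > total_units:
        raise ValueError("tolerated loss exceeds total capacity")
    return {
        "low_availability": total_units,
        "high_availability": total_units - tolerated_loss_units,
    }
```

Constraining the combined usage of both job classes to `total_units` then guarantees that, after the tolerated event, the surviving capacity still covers every admitted high-availability job.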
20160077884 | DYNAMIC ALLOCATION AND ASSIGNMENT OF VIRTUAL FUNCTIONS WITHIN FABRIC - Methods and systems for allocating one or more virtual functions of a plurality of virtual functions associated with physical functions of I/O interface devices of a computing device are described. One method includes managing one or more physical functions of an I/O interface device within an interconnect partition of a multi-partition virtualization system implemented at least in part on the computing device. The method further includes, during a boot process of a second partition on the computing device, parsing a file to determine an assignment of one or more virtual functions to the second partition and associate each of the one or more virtual functions with corresponding physical functions. | 03-17-2016 |
20160077885 | MANAGING RESOURCE COLLISIONS IN A STORAGE COMPUTE DEVICE - A storage compute device includes a data storage section that facilitates persistently storing host data as data objects. The storage compute device also includes two or more compute sections that perform computations on the data objects. A controller monitors resource collisions affecting a first of the compute sections. The controller creates a copy of at least one of the data objects to be processed in parallel at a second of the compute sections in response to the resource collisions. | 03-17-2016 |
20160085586 | PMamut: Runtime Flexible Resource Management Framework in scalable Distributed System Based on nature of Request, Demand and Supply and Federalism - A system for managing resources with high geographical and managerial scalability, supporting a high variety and a high number of resources defined in the system. The system resolves problems that have a dynamic and interactive nature, treats the environment-system concept as essential to them, and provides complex decision-making capability in the field of resource management. Several patterns supporting the mentioned characteristics are presented in the field of resource management. | 03-24-2016 |
20160085587 | DATA-AWARE WORKLOAD SCHEDULING AND EXECUTION IN HETEROGENEOUS ENVIRONMENTS - In an approach for scheduling the execution of a workload in a computing environment, a computer receives a request for scheduling execution of a computing job, wherein the computing job includes a plurality of computing tasks to be executed in a sequence, and wherein at least one computing task requires access to a set of data. The computer identifies information related to the computing environment, wherein the information comprises at least processors available to execute each computing task of the plurality of computing tasks and storage device proximity to the processors. The computer determines an execution configuration for the computing job based, at least in part, on the received request, the information related to the computing environment, and current utilization of the processors' resources. The computer schedules execution of the execution configuration for the computing job. | 03-24-2016 |
20160085588 | DISTRIBUTED STORAGE DATA REPAIR AIR VIA PARTIAL DATA REBUILD WITHIN AN EXECUTION PATH - Embodiments are directed towards managing the distribution of tasks in a storage system. An execution path for tasks may be generated based on the type of the task and characteristics of the storage system such that the execution path includes storage computers in a storage system. The tasks may be provided to each storage computer in the execution path. A working set of intermediate results may be generated on the storage computer in the execution path. If there is more than one storage computer in the execution path, working sets may be iteratively communicated to a next storage computer in the execution path such that the next storage computer employs a previously generated working set to generate a next working set until each storage computer in the execution path has been employed to generate a working set. The results may be stored on the storage computers. | 03-24-2016 |
20160085589 | Method And Apparatus For Providing Isolated Virtual Space - Various embodiments provide a method and apparatus of creating an application isolated virtual space without the need to run multiple OSs. Application isolated virtual spaces are created by an Operating System (OS) utilizing a resource manager. The resource manager isolates applications from each other by re-writing the network stack and the I/O subsystem of the conventional OS kernel to have multiple isolated network stack/virtual I/O views of the physical resources managed by the OS. Isolated network stacks and virtual I/O views identify the resources allocated to an application's isolated virtual space and are mapped to applications via an isolating identifier. | 03-24-2016 |
20160085590 | MANAGEMENT APPARATUS AND MANAGEMENT METHOD - A management apparatus comprises a processor configured to execute a program and a storage resource configured to store the program, wherein the processor executes: an identifying process configured to identify, from a plurality of jobs executed at a first server, another job having a scheduled execution period overlapping with a scheduled execution period of an estimation subject job; a calculating process configured to calculate an islanding execution time in which the estimation subject job is executed individually at the first server, based on the scheduled execution period of the estimation subject job and the scheduled execution period of the other job identified in the identifying process; and a creation process configured to create a schedule which correlates the estimation subject job with the islanding execution time calculated in the calculating process. | 03-24-2016 |
20160085592 | DYNAMIC JOB PROCESSING BASED ON ESTIMATED COMPLETION TIME AND SPECIFIED TOLERANCE TIME - The invention provides a system and method for managing clusters of parallel processors for use by groups and individuals requiring supercomputer level computational power. A Beowulf cluster provides supercomputer level processing power. Unlike a traditional Beowulf cluster, however, cluster size is not singular or static. As jobs are received from users/customers, a Resource Management System (RMS) dynamically configures and reconfigures the available nodes in the system into clusters of the appropriate sizes to process the jobs. Depending on the overall size of the system, many users may have simultaneous access to supercomputer level computational processing. Users are preferably billed based on the time for completion, with faster times demanding higher fees. | 03-24-2016 |
20160085596 | MULTI-CPU SYSTEM AND MULTI-CPU SYSTEM SCALING METHOD - In an asymmetric multi-CPU system on which a plurality of types of CPUs with different data processing performance and power consumption are mounted in groups for each type, a plurality of forms of combination of the types and numbers of CPUs are defined in such a way that the maximum amounts of the overall data processing and power consumption vary by stages. The system then controls allocation of the data processing to the CPUs identified by the form selected from the definition information according to the data processing environment, in order to reduce unnecessary power consumption according to the data processing environment, such as the data processing load, and to easily achieve the required data processing performance. | 03-24-2016 |
20160092270 | ALGORITHM FOR FASTER CONVERGENCE THROUGH AFFINITY OVERRIDE - A method is implemented by a network device having a symmetric multi-processing (SMP) architecture. The method improves response time for processes implementing routing algorithms in a network. The method manages core assignments for the processes during a network convergence process. The method includes determining a number of interrupts or system events processed by a subset of cores of a set of cores of a central processing unit and identifying a core within the subset of cores with a lowest number of interrupts or system events processed. The method further includes changing an affinity mask of at least one process implementing the routing algorithms during the network convergence to target the core within the subset of cores with a lowest number of interrupts or system events processed. | 03-31-2016 |
20160092272 | CONGESTION AVOIDANCE IN NETWORK STORAGE DEVICE USING DYNAMIC WEIGHTS - Methods, systems, and computer programs are presented for allocating CPU cycles and disk Input/Outputs (IOs) to resource-creating processes based on dynamic weights that change according to the current percentage of resource utilization in the storage device. One method includes operations for assigning a first weight to a processing task that increases resource utilization of a resource for processing incoming input/output (IO) requests, and for assigning a second weight to a generating task that decreases the resource utilization of the resource. Further, the method includes an operation for dynamically adjusting the second weight based on the current resource utilization in the storage system. Additionally, the method includes an operation for allocating the CPU cycles and disk IOs to the processing task and to the generating task based on their respective first weight and second weight. | 03-31-2016 |
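The weight-adjustment idea in the entry above can be illustrated with a toy sketch. The actual weight curve in 20160092272 is not published; a linear ramp above a utilization threshold is assumed here, and all names are hypothetical.

```python
# Hypothetical sketch: boost the weight of the resource-freeing
# ("generating") task as utilization approaches 100%, so it receives a
# larger share of CPU cycles and disk IOs. Linear ramp is an assumption.

def generating_weight(base_weight, utilization, threshold=0.5):
    """Return the dynamically adjusted weight for the generating task."""
    if utilization <= threshold:
        return base_weight
    # Scale linearly from base_weight up to 4x as utilization -> 1.0.
    scale = 1.0 + 3.0 * (utilization - threshold) / (1.0 - threshold)
    return base_weight * scale

# Allocation is proportional to weight relative to the processing task:
processing_w = 10.0
for u in (0.3, 0.7, 0.95):
    gw = generating_weight(10.0, u)
    share = gw / (gw + processing_w)
    print(f"util={u:.2f} generating share={share:.2f}")
```

As utilization rises, the generating task's share grows, which is the congestion-avoidance behavior the abstract describes: freeing resources is prioritized exactly when the resource is scarce.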
20160092274 | Heterogeneous Thread Scheduling - Heterogeneous thread scheduling techniques are described in which a processing workload is distributed to heterogeneous processing cores of a processing system. The heterogeneous thread scheduling may be implemented based upon a combination of periodic assessments of system-wide power management considerations used to control states of the processing cores and higher frequency thread-by-thread placement decisions that are made in accordance with thread specific policies. In one or more implementations, a system workload context is periodically analyzed for a processing system having heterogeneous cores including power efficient cores and performance oriented cores. Based on the periodic analysis, core states are set for some of the heterogeneous cores to control activation of the power efficient cores and performance oriented cores for thread scheduling. Then, individual threads are scheduled in dependence upon the core states to allocate the individual threads between active cores of the heterogeneous cores on a per-thread basis. | 03-31-2016 |
20160092276 | INDEPENDENT MAPPING OF THREADS - Embodiments of the present invention provide systems and methods for mapping the architected state of one or more threads to a set of distributed physical register files to enable independent execution of one or more threads in a multiple slice processor. In one embodiment, a system is disclosed including a plurality of dispatch queues which receive instructions from one or more threads and an even number of parallel execution slices, each parallel execution slice containing a register file. A routing network directs an output from the dispatch queues to the parallel execution slices and the parallel execution slices independently execute the one or more threads. | 03-31-2016 |
20160092278 | SYSTEM AND METHOD FOR PROVIDING A PARTITION FILE SYSTEM IN A MULTITENANT APPLICATION SERVER ENVIRONMENT - In accordance with an embodiment, described herein is a system and method for providing a partition file system in a multitenant application server environment. The system enables application server components to work with partition-specific files for a given partition, instead of or in addition to domain-wide counterpart files. The system also allows the location of some or all of a partition-specific storage to be specified by higher levels of the software stack. In accordance with an embodiment, also described herein is a system and method for resource overriding in a multitenant application server environment, which provides a means for administrators to customize, at a resource group level, resources that are defined in a resource group template referenced by a partition, and to override resource definitions for particular partitions. | 03-31-2016 |
20160092279 | DISTRIBUTED REAL-TIME COMPUTING FRAMEWORK USING IN-STORAGE PROCESSING - According to one general aspect, a scheduler computing device may include a computing task memory configured to store at least one computing task. The computing task may be executed by a data node of a distributed computing system, wherein the distributed computing system includes at least one data node, each data node having a central processor and an intelligent storage medium, wherein the intelligent storage medium comprises a controller processor and a memory. The scheduler computing device may include a processor configured to assign the computing task to be executed by either the central processor of a data node or the intelligent storage medium of the data node, based, at least in part, upon an amount of data associated with the computing task. | 03-31-2016 |
20160098292 | JOB SCHEDULING USING EXPECTED SERVER PERFORMANCE INFORMATION - A job scheduler that schedules ready tasks amongst a cluster of servers. Each job might be managed by one scheduler. In that case, there are multiple job schedulers which conduct scheduling for different jobs concurrently. To identify a suitable server for a given task, the job scheduler uses expected server performance information received from multiple servers. For instance, the server performance information might include expected performance parameters for tasks of particular categories if assigned to the server. The job management component then identifies a particular task category for a given task, determines which of the servers can perform the task by a suitable estimated completion time, and then assigns based on the estimated completion time. The job management component also uses cluster-level information in order to determine which server to assign a task to. | 04-07-2016 |
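The per-task selection step described in the entry above can be sketched as follows. The publication does not include an implementation; the server structure, field names, and the additive backlog-plus-runtime estimate are all assumptions made for illustration.

```python
# Hypothetical sketch of category-based task placement in the style of
# 20160098292: pick the server with the lowest estimated completion time
# for the task's category. Data model and ETA formula are assumptions.

def assign_task(task_category, servers):
    """Return the server with the lowest estimated completion time
    for the given task category, or None if no server supports it."""
    best_server, best_eta = None, float("inf")
    for server in servers:
        # Each server reports expected performance per task category
        # (e.g. expected run time) plus its current queue backlog.
        perf = server["expected_perf"].get(task_category)
        if perf is None:
            continue  # this server cannot perform this task category
        eta = server["queue_backlog"] + perf
        if eta < best_eta:
            best_server, best_eta = server, eta
    return best_server

servers = [
    {"name": "s1", "queue_backlog": 5.0, "expected_perf": {"cpu": 2.0}},
    {"name": "s2", "queue_backlog": 1.0, "expected_perf": {"cpu": 3.0, "io": 1.5}},
]
print(assign_task("cpu", servers)["name"])  # s2: eta 4.0 beats s1's 7.0
```

Cluster-level information mentioned in the abstract (e.g. overall load balance) could be folded in as an extra penalty term in the ETA, but that detail is not specified in the publication.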
20160098296 | TASK POOLING AND WORK AFFINITY IN DATA PROCESSING - Mechanisms for improving computing system performance by a processor device. System resources are organized into a plurality of groups. Each of the plurality of groups is assigned one of a plurality of predetermined task pools. Each of the predetermined task pools has a plurality of tasks. Each of the plurality of groups corresponds to at least one physical boundary of the system resources such that a speed of an execution of those of the plurality of tasks for a particular one of the plurality of predetermined task pools is optimized by a placement of an association with the at least one physical boundary and the plurality of groups. | 04-07-2016 |
20160098297 | System and Method for Determining Capacity in Computer Environments Using Demand Profiles - A system and method are provided for determining aggregate available capacity for an infrastructure group with existing workloads in a computer environment. The method comprises determining one or more workload placements of one or more workload demand entities on one or more capacity entities in the infrastructure group; computing an available capacity and a stranded capacity for each resource for each capacity entity in the infrastructure group, according to the workload placements; and using the available capacity and the stranded capacity for each resource for each capacity entity to determine an aggregate available capacity and a stranded capacity by resource for the infrastructure group. | 04-07-2016 |
20160098298 | METHODS AND APPARATUS FOR INTEGRATED WORK MANAGEMENT - Described herein are techniques for integrated work management. An integrated work management server processes one or more datum of one or more source systems. The datum relates to at least one work item representing at least one assignment to be processed by a resource. An integrator is coupled to the integrated work management server. The integrator uses the one or more datum to create, store and/or update a combined work queue for the resource. The combined work queue comprises any of at least one work item and at least one assignment. One or more prioritization rules specify one or more criteria. The integrator prioritizes the combined work queue by evaluating the criteria in accord with the one or more datum. | 04-07-2016 |
20160098299 | GLOBAL LOCK CONTENTION PREDICTOR - A method for lock acquisition includes adding a current contention state of a lock to a contention history. The lock includes a memory location for storing information used for excluding accessing a resource by one or more threads while another thread accesses the resource. The method includes combining the contention history with a lock address for the lock to form a predictor table index, and using the predictor table index to determine a lock prediction for the lock. The prediction includes a determination of an amount of contention. | 04-07-2016 |
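The predictor described in the entry above resembles a branch predictor indexed by history and address. The exact hashing, history length, and table layout are not published; the sketch below assumes a small XOR-indexed table of saturating counters, in the spirit of the abstract.

```python
# Hedged sketch of a global lock contention predictor (20160098299's
# concrete design is not published; history width, hash, and 2-bit
# saturating counters are assumptions borrowed from branch prediction).

HISTORY_BITS = 4
TABLE_SIZE = 1 << 10

class ContentionPredictor:
    def __init__(self):
        self.history = 0               # recent contention outcomes, 1 bit each
        self.table = [0] * TABLE_SIZE  # saturating counters, one per index

    def record(self, contended):
        # Shift the current contention state into the history register.
        mask = (1 << HISTORY_BITS) - 1
        self.history = ((self.history << 1) | int(contended)) & mask

    def index(self, lock_addr):
        # Combine contention history with the lock address to form the
        # predictor table index (abstract's "predictor table index").
        return ((lock_addr >> 4) ^ self.history) % TABLE_SIZE

    def predict(self, lock_addr):
        # Counter value stands in for the predicted amount of contention.
        return self.table[self.index(lock_addr)]

    def update(self, lock_addr, contended):
        i = self.index(lock_addr)
        if contended:
            self.table[i] = min(self.table[i] + 1, 3)  # saturate at 3
        else:
            self.table[i] = max(self.table[i] - 1, 0)
        self.record(contended)
```

A runtime could consult `predict()` before acquisition and, for example, choose to spin when predicted contention is low and block when it is high; that policy choice is outside the abstract.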
20160103705 | OPERATIONAL-TASK-ORIENTED SYSTEM AND METHOD FOR DYNAMICALLY ADJUSTING OPERATIONAL ENVIRONMENT - The present invention provides an operational-task-oriented system and method for dynamically adjusting the operational environment, applicable to a computer cluster. Each operational node of the computer cluster has two or more operating systems installed. After receiving an operational task, the control node estimates the time required by appropriate operational nodes to complete the different tasks requiring different operating systems, and compares the estimated finish time with the assigned finish time to judge how to adjust the operating system running on the operational nodes. Thereby, the operational task can be completed within the assigned finish time. Another method is to use the control node to analyze the proportions of the tasks requiring different operating systems in an operational task, and hence adjust the operating system running on an operational node according to the proportion of requirement. Thereby, the operational task can be completed in the shortest time. | 04-14-2016 |
20160110221 | SCHEDULING SYSTEM, SCHEDULING METHOD, AND RECORDING MEDIUM - Provided is a scheduling system capable of efficiently utilizing the processing performance of a resource. The scheduling system comprises a scheduler which determines specific resources for processing a task at a computation processing device that includes a many-core accelerator as the resources and a processor which controls the resources, the scheduler determining the specific resources according to a first instruction, included in the task, for reserving resources. | 04-21-2016 |
20160110222 | APPARATUS AND METHOD OF EXECUTING APPLICATION - A method and apparatus for executing an application which provides a graphical user interface (GUI) is provided, the method including, determining a score based on resource usage of a UI element provided by the GUI for user interaction, and allocating resources to the UI element based on the determined score. | 04-21-2016 |
20160110223 | MULTI-THREADED QUEUING SYSTEM FOR PATTERN MATCHING - A multi-threaded processor may support efficient pattern matching techniques. An input data buffer may be provided, which may be shared between a fast path and a slow path. The processor may retire data units in the input data buffer that are not required, thus avoiding copying of the data units used by the slow path. Data management and execution efficiency may be enhanced, as multiple threads may be created to verify potential pattern matches in the input data stream. Also, threads that stall may exit the execution units, allowing other threads to run. Further, the problem of state explosion may be avoided by allowing the creation of parallel threads, using the fork instruction, in the slow path. | 04-21-2016 |
20160110224 | GENERATING JOB ALERT - A method and system for generating a job alert. According to embodiments of the present invention, before a target job is processed, a characteristic of input and output of the target job in at least one stage is determined through analyzing a historical job, and a resource overhead associated with the processing of the target job is calculated based on the characteristic of input and output. Then, an alert for the target job is generated in response to the resource overhead exceeding a predetermined threshold. In such manner, an alert for the target job can be proactively generated before the resource overhead problem occurs, so as to enable an administrator or developer to discover a fault in advance and adopt measures actively to avoid loss and damage to the intermediate results or output data when the target job is processed. | 04-21-2016 |
20160110225 | SYSTEM AND METHOD FOR IMPROVING MEMORY USAGE IN VIRTUAL MACHINES - An apparatus includes at least one processor executing a method for managing memory among a plurality of concurrently-running virtual machines, and a non-transitory memory device that stores a set of computer readable instructions for implementing and executing said memory management method. A memory optimization mechanism can reduce a memory usage of a virtual machine at a cost of increasing a central processing unit (CPU) usage. Information on a memory usage and a CPU usage of each virtual machine is periodically collected. When a first virtual machine exhibits high memory use, at least one second virtual machine with an extra CPU capacity is identified. A memory optimization mechanism is applied to the second virtual machine to reduce memory used by the second virtual machine, thereby providing a portion of freed memory that is then allocated to the first virtual machine. | 04-21-2016 |
20160110228 | Service Scheduling Method, Apparatus, and System - A service scheduling method, applied to a stream computing system, is presented. The stream computing system includes a master control node and multiple working nodes, and the master control node is configured to schedule sub-services included in the service to the multiple working nodes for processing. The method includes acquiring a stream computing application graph of the service; dividing the stream computing application graph according to operator degrees and operator potentials of operators in the stream computing application graph and according to a division quantity for dividing the stream computing application graph, to obtain divided sub-graphs with the division quantity; and scheduling a sub-service corresponding to an operator included in each divided sub-graph to a working node corresponding to the divided sub-graph for processing. The method provided in embodiments of the present disclosure can enable services to use physical resources and network resources in a balanced manner. | 04-21-2016 |
20160117193 | RESOURCE MAPPING IN MULTI-THREADED CENTRAL PROCESSOR UNITS - A processor determines that processing of a first thread is suspended due to limited availability of a processing resource. The processor supports execution of a plurality of threads in parallel. The processor obtains a lock on a second processing resource that is substitutable as a resource during processing of the first thread. The second processing resource is included as part of a component that is external to the processor. The component supports a number of threads that is less than the plurality of threads. The processing of the first thread is suspended until the lock is available. The processor processes the first thread using the second processing resource. The processor includes a shared register to support mapping a portion of the plurality of threads to the component. The portion of the plurality of threads is equal to, at most, the number of threads supported by the component. | 04-28-2016 |
20160117194 | METHODS AND APPARATUS FOR RESOURCE MANAGEMENT CLUSTER COMPUTING - Embodiments of an event-driven resource management technique may enable the management of cluster resources at a sub-computer level (e.g., at the thread level) and the decomposition of jobs at an atomic (task) level. A job queue may request a resource for a job from a resource manager, which may locate a resource in a resource list and grant the resource to the job queue. After the resource is granted, the job queue sends the job to the resource, on which the job may be partitioned into tasks and from which additional resources may be requested from the resource manager. The resource manager may locate additional resources in the list and grant the resources to the resource. The resource sends the tasks to the granted resources for execution. As resources complete their tasks, the resource manager is informed so that the status of the resources in the list can be updated. | 04-28-2016 |
20160117196 | LOG ANALYSIS - Log analysis can include transferring compiled log analysis code, executing log analysis code, and performing a log analysis on the executed log analysis code. | 04-28-2016 |
20160117197 | Method, Apparatus, and System for Issuing Partition Balancing Subtask - A method, an apparatus, and a system are provided for issuing a partition balancing subtask, which are applied to a controller. After receiving a second partition balancing task, the controller generates a second partition balancing subtask set, where the second partition balancing subtask set includes at least one partition balancing subtask, and each partition balancing subtask records a migration partition, a node to which the migration partition belongs, and a destination node; searches a current partition balancing subtask set, and deletes a repeated partition balancing subtask between the second partition balancing subtask set and the current partition balancing subtask set; and issues remaining partition balancing subtasks after the repeated partition balancing subtask is deleted to the destination node recorded in each partition balancing subtask. | 04-28-2016 |
20160117199 | COMPUTING SYSTEM WITH THERMAL MECHANISM AND METHOD OF OPERATION THEREOF - A computing system includes: a monitoring block configured to calculate a present power for each of multiple resource units; a thermal block, coupled to the monitoring block, configured to dynamically calculate a thermal candidate set based on the present power, the thermal candidate set for representing a present thermal load for the multiple resource units; and a target block, coupled to the thermal block, configured to determine a target resource based on the thermal candidate set for performing a target task using the target resource. | 04-28-2016 |
20160124772 | In-Flight Packet Processing - A method for supporting in-flight packet processing is provided. Packet processing devices (microengines) can send a request for packet processing to a packet engine before a packet comes in. The request offers a twofold benefit. First, the microengines add themselves to a work queue to request for processing. Once the packet becomes available, the header portion is automatically provided to the corresponding microengine for packet processing. Only one bus transaction is involved in order for the microengines to start packet processing. Second, the microengines can process packets before the entire packet is written into the memory. This is especially useful for large sized packets because the packets do not have to be written into the memory completely when processed by the microengines. | 05-05-2016 |
20160124773 | METHOD AND SYSTEM THAT MEASURES AND REPORTS COMPUTATIONAL-RESOURCE USAGE IN A DATA CENTER - The present disclosure describes methods and systems that monitor the utilization of computational resources. In one implementation, a system periodically measures the utilization of computational resources, determines an amount of computational-resource wastage, identifies the source of the wastage, and generates recommendations that reduce or eliminate the wastage. In some implementations, recommendations are generated based on a cost of the computational-resource wastage. The cost of computational-resource wastage can be determined from factors that include the cost of providing a computational resource, an amount of available computational resources, and the amount of actual computational-resource usage. Methods of presenting and modeling computational-resource usage and methods that associate an economic cost with resource wastage are presented. | 05-05-2016 |
20160124775 | RESOURCE ALLOCATION CONTROL WITH IMPROVED INTERFACE - A computer system displays a user interface display with a user input mechanism that can be actuated in order to identify a set of resources, and corresponding capacities. A team configuration is stored in memory and reflects the configuration of the resources and corresponding capacities that were identified. A task dependency structure is obtained, and is indicative of an underlying project. Resources from the stored team configuration, and corresponding capacities, are assigned to the tasks in the task dependency structure and the team configuration is updated, in memory, to reflect the assignments. A display is generated that shows the state of the underlying memory, and that is indicative of a remaining capacity and a consumed capacity. | 05-05-2016 |
20160132360 | Stream Schema Resolution and Stream Tuple Processing in a Distributed Stream-Processing System - A task worker running on a worker server receives a process specification over a network. The process specification specifies a task to be executed by the task worker. The executed task includes generating an output data object for an output data stream based in part on an input data object from an input data stream. The process specification is accessed to specify the required fields to be read from the input data object for executing the task and the fields in the output data object that will be written to during or subsequent to the executing of the task. The task worker executes the task and generates the output data object. The output data object is then transmitted to the output stream based on the stream configuration. | 05-12-2016 |
20160132361 | SYSTEM AND METHOD FOR TOPOLOGY-AWARE JOB SCHEDULING AND BACKFILLING IN AN HPC ENVIRONMENT - A method for job management in an HPC environment includes determining an unallocated subset from a plurality of HPC nodes, with each of the unallocated HPC nodes comprising an integrated fabric. An HPC job is selected from a job queue and executed using at least a portion of the unallocated subset of nodes. | 05-12-2016 |
20160132362 | AUTOMATIC ADMINISTRATION OF UNIX COMMANDS - Various techniques for automatically administering UNIX commands to target systems are disclosed. One method involves receiving information identifying a UNIX command and additional information identifying one or more target systems. The method then issues N instances of the UNIX command in parallel to the one or more target systems, where N is an integer greater than one. The N instances of the UNIX command are issued automatically, in response to receipt of the information and the additional information. In some situations, issuing the N instances of the UNIX command in parallel involves creating N threads, where each of the N threads is configured to issue a respective one of the N instances of the UNIX command to a respective one of the target systems. | 05-12-2016 |
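The one-thread-per-instance fan-out described in the entry above can be sketched briefly. The transport (e.g. ssh to each target) and all names are assumptions; the publication only specifies creating N threads, each issuing one instance of the command.

```python
# Minimal sketch of issuing N instances of a command in parallel, one
# thread per target system (20160132362 style). Running the command
# locally stands in for a remote transport, which is an assumption.

import subprocess
import threading

def issue_parallel(command, targets):
    """Run `command` once per target, each instance in its own thread."""
    results = {}

    def worker(target):
        # For a remote target this might instead be, hypothetically:
        #   subprocess.run(["ssh", target] + command, ...)
        proc = subprocess.run(command, capture_output=True, text=True)
        results[target] = proc.returncode

    threads = [threading.Thread(target=worker, args=(t,)) for t in targets]
    for t in threads:
        t.start()
    for t in threads:
        t.join()   # wait for all N instances to complete
    return results

print(issue_parallel(["echo", "uptime"], ["host1", "host2", "host3"]))
```

Because the N instances are issued automatically on receipt of the command and target list, the administrator supplies only two pieces of information, matching the abstract's description.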
20160139852 | CONTEXT AWARE DYNAMIC COMPOSITION OF MIGRATION PLANS TO CLOUD - Context aware dynamic composition of migration plans may be provided. A request for application or image migration may be received. Target machines and associated configuration may be identified. Resources and a schedule may be allocated. An appropriate tooling for each migration action may be selected. An artificial intelligence aspect of the migration planning process may continuously replan migration based on monitored changes in the context of source or target environment. | 05-19-2016 |
20160139883 | DISTRIBUTING RESOURCE REQUESTS IN A COMPUTING SYSTEM - In an embodiment, a method includes, in a hardware processor, producing, by a block of hardware logic resources, a constrained randomly generated or pseudo-randomly generated number (CRGN) based on a bit mask stored in a register memory. | 05-19-2016 |
20160139958 | ASSIGNING LEVELS OF POOLS OF RESOURCES TO A SUPER PROCESS HAVING SUB-PROCESSES - Provided are a computer program product, system, and method for assigning levels of pools of resources in an operating system to a super process having sub-processes. A plurality of first level pools of resources are reserved in the operating system for first level processes to perform a first level operation and invoke at least one second level process to perform a second level operation. A plurality of second level pools of resources are reserved in the operating system for second level processes. One of the second level pools of resources assigned to one of the second level processes is released and available to assign to another second level process when the second level process completes the second level operation for which it was invoked. | 05-19-2016 |
20160139963 | VIRTUAL COMPUTING POWER MANAGEMENT - As disclosed herein, a method, executed by a computer, includes comparing a current power consumption profile for a computing task with an historical power consumption profile, receiving a request for a computing resource, granting the request if the historical power consumption profile does not suggest a pending peak in the current power consumption profile or the historical power consumption profile indicates persistent consumption at a higher power level, and denying the request for the computing resource if the historical power consumption profile suggests a pending peak in the current power consumption profile and the historical power consumption profile indicates temporary consumption at the higher power level. Denying the request may include initiating an allocation timeout and subsequently ending the allocation timeout in response to a drop in a power consumption below a selected level. A computer system and computer program product corresponding to the method are also disclosed herein. | 05-19-2016 |
20160139964 | Energy Efficient Multi-Cluster System and Its Operations - A multi-cluster system having processor cores of different energy efficiency characteristics is configured to operate with high efficiency such that performance and power requirements can be satisfied. The system includes multiple processor cores in a hierarchy of groups. The hierarchy of groups includes: multiple level-1 groups, each level-1 group including one or more processor cores having identical energy efficiency characteristics, and each level-1 group configured to be assigned tasks by a level-1 scheduler; one or more level-2 groups, each level-2 group including respective level-1 groups, the processor cores in different level-1 groups of the same level-2 group having different energy efficiency characteristics, and each level-2 group configured to be assigned tasks by a respective level-2 scheduler; and a level-3 group including the one or more level-2 groups and configured to be assigned tasks by a level-3 scheduler. | 05-19-2016 |
20160147569 | DISTRIBUTED TECHNIQUE FOR ALLOCATING LONG-LIVED JOBS AMONG WORKER PROCESSES - A distributed computing system that executes a set of long-lived jobs is described. During operation, each worker process performs the following operations. First, the worker process identifies a set of jobs to be executed and a set of worker processes that can execute the set of jobs. Next, the worker process sorts the set of worker processes based on unique identifiers for the worker processes. Then, the worker process assigns jobs to each worker process in the set of worker processes, wherein approximately the same number of jobs is assigned to each worker process, and jobs are assigned to the worker processes in sorted order. While assigning jobs, the worker process uses an identifier for each worker process to seed a pseudorandom number generator, and then uses the pseudorandom number generator to select jobs for each worker process to execute. | 05-26-2016 |
20160147570 | COMPONENT SERVICES INTEGRATION WITH DYNAMIC CONSTRAINT PROVISIONING - Resource provisioning information links to resource provisioning information of at least one reusable component resource that satisfies at least a portion of user-specified resource development constraints of a new resource under development are identified within a resource provisioning-link registry. Using the identified resource provisioning information links, the resource provisioning information of the at least one reusable component resource is programmatically collected from at least one data provider repository that stores reusable resources and that publishes the resource provisioning information links to the resource provisioning-link registry. The programmatically-collected resource provisioning information of the at least one reusable component resource is analyzed. Based upon the analyzed programmatically-collected resource provisioning information of the at least one reusable component resource, a resource integration recommendation is provided that uses the at least one reusable component resource and that satisfies at least the portion of the user-specified resource development constraints of the new resource under development. | 05-26-2016 |
20160147571 | METHOD FOR OPTIMIZING THE PARALLEL PROCESSING OF DATA ON A HARDWARE PLATFORM - The invention relates to a method for optimizing the parallel processing of data on a hardware platform, the hardware platform comprising at least one computing unit comprising a plurality of processing units able to execute a plurality of executable tasks in parallel, the data to be processed forming a data set that can be broken down into data subsets, a same sequence of operations being performed on each data subset. | 05-26-2016 |
20160147572 | MODIFYING MEMORY SPACE ALLOCATION FOR INACTIVE TASKS - Provided are a computer program product, system, and method for modifying memory space allocation for inactive tasks. Information is maintained on computational resources consumed by tasks running in the computer system that are allocated memory space in the memory. The information on the computational resources consumed by the tasks is used to determine inactive tasks of the tasks. The allocation of the memory space allocated to at least one of the determined inactive tasks is modified. | 05-26-2016 |
20160147573 | COMPUTING SYSTEM WITH HETEROGENEOUS STORAGE AND PROCESS MECHANISM AND METHOD OF OPERATION THEREOF - A computing system includes: a monitor block configured to calculate a total access time based on a device access time, a traffic latency, traffic information, or a combination thereof; a name node block, coupled to the monitor block, configured to determine a data location of a data content; and a scheduler block, coupled to the name node block, configured to distribute a task assignment based on the total access time, the data location, device performance criteria, or a combination thereof for accessing the data content from a target device. | 05-26-2016 |
20160147574 | FACILITATING PROVISIONING IN A MIXED ENVIRONMENT OF LOCALES - Aspects capable of dynamically and flexibly supporting a plurality of locales upon provisioning are provided. An associated management server includes a storage table configured to store a plurality of logical device operations, a plurality of locales, and a plurality of workflows, wherein each resource server among all resource servers connected to the management server is associated with a different one of the plurality of locales. The management server further includes a provisioning circuit configured to dynamically determine, for a required logical device operation, a resource server among all of the resource servers connected to the management server by way of provisioning. The management server further includes a calling circuit configured to search the storage table using a locale among the plurality of locales that is associated with the dynamically determined resource server to select a workflow from the plurality of workflows for the required logical device operation. | 05-26-2016 |
20160147777 | METHODS AND APPARATUS OF USING CUSTOMIZABLE TEMPLATES IN PROCESS CONTROL SYSTEMS - Methods and apparatus of using customizable templates in process control systems are disclosed. An example method includes initializing a first process control device associated with a first protocol based on a template file and a first parameter definition file. The template file includes global variables and associated values. The first parameter definition file defines a relationship between the global variables and first local variables of at least one of the first process control device or the first protocol. The example method also includes initializing a second process control device associated with a second protocol based on the template file and a second parameter definition file. The second parameter definition file defines a relationship between the global variables and second local variables of at least one of the second process control device or the second protocol. The first protocol is different from the second protocol. | 05-26-2016 |
20160154639 | DETECTING DEPLOYMENT CONFLICTS IN HETEROGENEOUS ENVIRONMENTS | 06-02-2016 |
20160154678 | REVERTING TIGHTLY COUPLED THREADS IN AN OVER-SCHEDULED SYSTEM | 06-02-2016 |
20160154679 | METHOD AND APPARATUS FOR DETERMINING A WORK-GROUP SIZE | 06-02-2016 |
20160154680 | CALIBRATED TIMEOUT INTERVAL ON A CONFIGURATION VALUE, SHARED TIMER VALUE, AND SHARED CALIBRATION FACTOR | 06-02-2016 |
20160161982 | CALIBRATED TIMEOUT INTERVAL ON A CONFIGURATION VALUE, SHARED TIMER VALUE, AND SHARED CALIBRATION FACTOR - A processor-implemented method for implementing a shared counter architecture is provided. The method may include receiving, by a worker thread, an application request; recording, by a common timer thread, a shared timer value and acquiring, by the worker thread, the shared timer value. The method may further include recording, by the common timer thread, a shared calibration factor; acquiring, by the worker thread, a configuration value corresponding to the application request and generating, by the worker thread, a calibrated timeout interval for the application request based on the shared calibration factor, the shared timer value, and the configuration value. The method may further include registering, by the worker thread, the calibrated timeout interval for the application request on a current timeout list; determining, by the common timer thread, a timeout occurrence for the application request based on the registered calibrated timeout interval; and releasing resources based on the timeout occurrence. | 06-09-2016 |
20160162336 | CPU SCHEDULER CONFIGURED TO SUPPORT LATENCY SENSITIVE VIRTUAL MACHINES - A host computer has one or more physical central processing units (CPUs) that support the execution of a plurality of containers, where the containers each include one or more processes. Each process of a container is assigned to execute exclusively on a corresponding physical CPU when the corresponding container is determined to be latency sensitive. The assignment of a process to execute exclusively on a corresponding physical CPU includes the migration of tasks from the corresponding physical CPU to one or more other physical CPUs of the host system, and the directing of task and interrupt processing to the one or more other physical CPUs. Tasks of the process corresponding to the container are then executed on the corresponding physical CPU. | 06-09-2016 |
20160162337 | MULTIPLE CORE REAL-TIME TASK EXECUTION - A real-time task may initially be performed by a first thread that is executing on a first core of a multi-core processor. A second thread may be initiated to take over the performance of the real-time task on a second core of the multi-core processor while the first thread is performing the real-time task. The performance of the real-time task is then transferred from the first thread to the second thread with the execution of the second thread on the second core to perform the real-time task. | 06-09-2016 |
20160162340 | POWER EFFICIENT HYBRID SCOREBOARD METHOD - Described herein are technologies related to a method of enforcing thread dependencies using a hybrid scoreboard-based approach. | 06-09-2016 |
20160170780 | ISOLATING APPLICATIONS IN SERVER ENVIRONMENT | 06-16-2016 |
20160170800 | METHOD AND SYSTEM FOR DYNAMIC POOL REALLOCATION | 06-16-2016 |
20160170801 | METHOD AND SYSTEM FOR DYNAMIC POOL REALLOCATION | 06-16-2016 |
20160170803 | ELIMINATING EXECUTION OF JOBS-BASED OPERATIONAL COSTS OF RELATED REPORTS | 06-16-2016 |
20160170804 | ISOLATING APPLICATIONS IN J2EE SERVER ENVIRONMENT | 06-16-2016 |
20160170805 | DYNAMIC ASSOCIATION OF APPLICATION WORKLOAD TIERS TO INFRASTRUCTURE ELEMENTS IN A CLOUD COMPUTING ENVIRONMENT | 06-16-2016 |
20160170806 | SYSTEM AND METHOD OF PROVIDING A SELF-OPTIMIZING RESERVATION IN SPACE OF COMPUTE RESOURCES | 06-16-2016 |
20160170809 | EXECUTING A MULTICOMPONENT SOFTWARE APPLICATION ON A VIRTUALIZED COMPUTER PLATFORM | 06-16-2016 |
20160179484 | Code Generating Method, Compiler, Scheduling Method, Scheduling Apparatus and Scheduling System | 06-23-2016 |
20160179577 | Method of Managing the Operation of an Electronic System with a Guaranteed Lifetime | 06-23-2016 |
20160179579 | EFFICIENT VALIDATION OF RESOURCE ACCESS CONSISTENCY FOR A SET OF VIRTUAL DEVICES | 06-23-2016 |
20160179580 | RESOURCE MANAGEMENT BASED ON A PROCESS IDENTIFIER | 06-23-2016 |
20160179581 | CONTENT-AWARE TASK ASSIGNMENT IN DISTRIBUTED COMPUTING SYSTEMS USING DE-DUPLICATING CACHE | 06-23-2016 |
20160179584 | VIRTUAL SERVICE MIGRATION METHOD FOR ROUTING AND SWITCHING PLATFORM AND SCHEDULER | 06-23-2016 |
20160188367 | METHOD FOR SCHEDULING USER REQUEST IN DISTRIBUTED RESOURCE SYSTEM, AND APPARATUS - According to a method for scheduling a user request in a distributed resource system, an apparatus, and a system that are provided by embodiments of the present invention, in a T | 06-30-2016 |
20160188369 | Computing Resource Inventory System - Systems and methods of managing computing resources of a computing system are described. A computing resource list and computing resource information may be stored at a data store. The computing resource list may identify a set of computing resources of a computing system, and the computing resource information may respectively describe the computing resources. The computing resource list may be updated in response to a new computing resource being added to the computing system or in response to an existing computing resource being removed from the computing system. Evaluation tasks for the computing resources may be performed, and a resource evaluation report may be generated during performance of at least one of the evaluation tasks. | 06-30-2016 |
20160188370 | Interface for Orchestration and Analysis of a Computer Environment - A host server is configured to receive information related to metrics and configurations associated with computer resources of a computer infrastructure, derive and resolve the information into capacity, performance, reliability, and efficiency, as related to attributes associated with the computer resources, including compute attributes such as application, virtual machine (VM) attributes, storage attributes, and network attributes. The host server provides the metrics and attributes in a matrix configuration as a graphical user interface (GUI) on an output device, such as a display. The GUI is configured to provide a user with a single point of view into the computer infrastructure by converging application, compute, storage, and network attributes into capacity, performance, reliability, and efficiency concepts. With such a configuration, the GUI allows the end user to readily review the environments for potential issues in a time efficient manner, as well as solutions provided by the GUI. | 06-30-2016 |
20160188371 | APPLICATION PROGRAMMING INTERFACES FOR DATA PARALLEL COMPUTING ON MULTIPLE PROCESSORS - A method and an apparatus for a parallel computing program calling APIs (application programming interfaces) in a host processor to perform a data processing task in parallel among compute units are described. The compute units are coupled to the host processor including central processing units (CPUs) and graphic processing units (GPUs). A program object corresponding to a source code for the data processing task is generated in a memory coupled to the host processor according to the API calls. Executable codes for the compute units are generated from the program object according to the API calls to be loaded for concurrent execution among the compute units to perform the data processing task. | 06-30-2016 |
20160188374 | METHOD AND SYSTEM FOR APPLICATION PROFILING FOR PURPOSES OF DEFINING RESOURCE REQUIREMENTS - Disclosed are a method of and system for profiling a computer program. The method comprises the steps of using a utility application to execute the computer program; and on the basis of said execution of the computer program, identifying specific performance requirements of the computer program. A profile of the computer program is determined from said identified performance requirements; and based on said determined profile, resources for the computer program are selected from a grid of computer services. | 06-30-2016 |
20160188375 | ENERGY EFFICIENT SUPERCOMPUTER JOB ALLOCATION - A technique for defragmenting jobs on processor-based computing resources including: (i) determining a first defragmentation condition, which first defragmentation condition will be determined to exist when it is favorable under a first energy consideration to defragment the allocation of jobs as among a set of processor-based computing resources of a supercomputer (for example, a compute-card-based supercomputer); and (ii) on condition that the first defragmentation condition exists, defragmenting the jobs on the set of processor-based computing resources. | 06-30-2016 |
20160188377 | CLASSIFICATION BASED AUTOMATED INSTANCE MANAGEMENT - Systems, apparatuses, and methods for classification based automated instance management are disclosed. Classification based automated instance management may include automatically commissioning an application instance based on a plurality of classification metrics, and automatically monitoring the application instance based on the plurality of classification metrics. Automatically monitoring the application instance may include identifying a plurality of instance monitoring policies associated with the application instance based on the plurality of classification metrics. Automatically monitoring the application instance may include automatically suspending the application instance based on the plurality of instance monitoring policies and automatically decommissioning the application instance based on the plurality of instance monitoring policies. | 06-30-2016 |
20160188380 | PROGRESS METERS IN PARALLEL COMPUTING - Systems and methods may provide a set of cores capable of parallel execution of threads. Each of the cores may run code that is provided with a progress meter that calculates the amount of work remaining to be performed on threads as they run on their respective cores. The data may be collected continuously, and may be used to alter the frequency, speed or other operating characteristic of the cores as well as groups of cores. The progress meters may be annotated into existing code. | 06-30-2016 |
20160196165 | PROCESSOR POWER OPTIMIZATION WITH RESPONSE TIME ASSURANCE | 07-07-2016 |
20160196167 | Application Load and Type Adaptive Manycore Processor Architecture | 07-07-2016 |
20160196168 | VIRTUAL RESOURCE CONTROL SYSTEM AND VIRTUAL RESOURCE CONTROL METHOD | 07-07-2016 |
20160203022 | DYNAMIC SHARING OF UNUSED BANDWIDTH CAPACITY OF VIRTUALIZED INPUT/OUTPUT ADAPTERS | 07-14-2016 |
20160203023 | COMPUTER SYSTEM USING PARTIALLY FUNCTIONAL PROCESSOR CORE | 07-14-2016 |
20160203025 | METHODS AND SYSTEMS TO IDENTIFY AND MIGRATE THREADS AMONG SYSTEM NODES BASED ON SYSTEM PERFORMANCE METRICS | 07-14-2016 |
20160203026 | PROCESSING A HYBRID FLOW ASSOCIATED WITH A SERVICE CLASS | 07-14-2016 |
20160203028 | METHOD AND SYSTEM FOR MODELING AND ANALYZING COMPUTING RESOURCE REQUIREMENTS OF SOFTWARE APPLICATIONS IN A SHARED AND DISTRIBUTED COMPUTING ENVIRONMENT | 07-14-2016 |
20160203031 | FRAMEWORK TO IMPROVE PARALLEL JOB WORKFLOW | 07-14-2016 |
20160253212 | THREAD AND DATA ASSIGNMENT IN MULTI-CORE PROCESSORS | 09-01-2016 |
20160253213 | METHOD AND SYSTEM FOR DEDICATING PROCESSORS FOR DESIRED TASKS | 09-01-2016 |
20160253215 | RESOURCE CONSUMPTION OPTIMIZATION | 09-01-2016 |
20160253217 | METHOD OF SCHEDULING THREADS FOR EXECUTION ON MULTIPLE PROCESSORS WITHIN AN INFORMATION HANDLING SYSTEM | 09-01-2016 |
20160378553 | Resource Management Method and Device for Terminal System - The present document relates to a system resource management method and device for a terminal. The method includes: partitioning a memory chip of the terminal into a customized data partition and at least one operating system partition, the customized data partition being used for storing system characteristic resource data, and the operating system partition being used for storing system general function resource data; and respectively managing the resource data of the customized data partition and the at least one operating system partition, and sharing the resource data of the customized data partition among the at least one operating system partition. The present document shields customized data from the effects of system operation and updates, reduces the system maintenance complexity and operating cost of the terminal, and at the same time decreases the download traffic of update data. | 12-29-2016 |
20160378554 | Parallel and Distributed Computing Using Multiple Virtual Machines - Systems and techniques are described for using virtual machines to write parallel and distributed applications. One of the techniques includes receiving a job request, wherein the job request specifies a first job to be performed by a plurality of special purpose virtual machines, wherein the first job includes a plurality of tasks; selecting a parent special purpose virtual machine from a plurality of parent special purpose virtual machines to perform the first job; instantiating a plurality of child special purpose virtual machines from the selected parent special purpose virtual machine; partitioning the plurality of tasks among the plurality of child special purpose virtual machines by assigning one or more of the plurality of tasks to each of the child special purpose virtual machines; and performing the first job by causing each of the child special purpose virtual machines to execute the tasks assigned to the child special purpose virtual machine. | 12-29-2016 |
20160378555 | GENERATING TIMING SEQUENCE FOR ACTIVATING RESOURCES LINKED THROUGH TIME DEPENDENCY RELATIONSHIPS - A method, and associated computer program product and computer system. A Direct Acyclic Graph (DAG) includes nodes and directed edges. Each node represents a unique resource and is a predefined Recovery Time Objective (RTO) node or an undefined RTO node. Each directed edge directly connects two nodes and represents a time delay between the two nodes. The nodes are topologically sorted to order the nodes in a dependency sequence of ordered nodes. A corrected RTO is computed for each ordered node. | 12-29-2016 |
20160378559 | EXECUTING A FOREIGN PROGRAM ON A PARALLEL COMPUTING SYSTEM - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for a distributed parallel computing system to adapt a foreign program to execute on the distributed parallel computing system. The foreign program is a program written for a computing framework that is different from a computing framework of the parallel computing system. The distributed parallel computing system includes a master node computer and one or more worker node computers. A scheduler executing on the master node computer acts as an intermediary between the foreign program and the parallel computing system. The scheduler negotiates with a resource manager of the parallel computing system to acquire computing resources. The scheduler then allocates the computing resources to the worker node computers as containers. The foreign program executes in the containers on the worker node computers in parallel. | 12-29-2016 |
20160378560 | EXECUTING A FOREIGN PROGRAM ON A PARALLEL COMPUTING SYSTEM - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for a task centric resource scheduling framework. A scheduler executing on a master node computer of a distributed parallel computing system allocates computing resources of the parallel computing system to a program according to one or more policies associated with the program. Each policy includes a set of pre-determined computing resource constraints. Allocation of the computing resources includes performing multiple iterations of negotiation between the scheduler and a resource manager of the parallel computing system. In each iteration, a policy engine of the scheduler submits requests to get more resources from, or requests to release already acquired resources to, the resource manager. The policy engine generates the requests by balancing suggestions provided by analyzer components of the policy engine and a corresponding policy. The policy engine can then determine an allocation plan on how to allocate resources. | 12-29-2016 |
20160378567 | MOBILE DEVICE BASED WORKLOAD DISTRIBUTION - Mobile device based workload distribution may include determining whether a processing requirement for a workload exceeds an operational threshold of an associated mobile device, and detecting, in response to a determination that the processing requirement for the workload exceeds the operational threshold of the associated mobile device, a performance degradation of the associated mobile device. In response to the detected performance degradation of the associated mobile device, the workload may be divided into a plurality of workload portions. A workload portion of the plurality of workload portions may be distributed to a further mobile device for workload processing. Mobile device based workload distribution may further include receiving, from the further mobile device, a processed workload portion related to the distributed workload portion, and assembling the processed workload portion related to the distributed workload portion with a plurality of processed workload portions, for example, for rendering on the associated mobile device. | 12-29-2016 |
20160378569 | SYSTEM AND METHOD FOR INTELLIGENT TASK MANAGEMENT AND ROUTING - Systems and methods are shown for routing task objects to multiple agents that involve analyzing content of each task object in an input buffer to determine a classification relevant to the content of the task object that is added to task object metadata, which is placed in a second buffer. Objects in the second buffer are analyzed and the classification in the object metadata used to search workforce management data representing agent characteristics to identify agents who match the classification. A routing strategy is applied to the object to select an agent and the object is routed to the agent's workbin. Another aspect involves organizing workbin tasks objects by priority, according to recent system conditions excluding objects that cannot presently be processed based on a workflow strategy or status data and presenting remaining objects based on order of priority, or re-arranging objects between workbins based on recent status info. | 12-29-2016 |
20160378570 | Techniques for Offloading Computational Tasks between Nodes - Examples may include techniques to offload computational tasks between nodes. The computational tasks are offloaded based on computing resources hosted by a given node exceeding an energy state threshold and based on another node accepting the offloading of the computational task. The other node accepts the offload based on a determination that computing resources hosted by the other node will not exceed an energy state threshold for the other node when used to execute the offloaded computational task. | 12-29-2016 |
20160378571 | MANAGEMENT OF ASYNCHRONOUS AND SYNCHRONOUS RESOURCE REQUESTS - Managing requests for acquiring resources in a computing environment. A first request to acquire resources is received. Whether the resources have been pre-acquired is determined. If the resources have not been pre-acquired, a token registering interest of a first thread in the first request is subscribed to. If the acquisition of the resources is not successful, whether a prior synchronous request has been initiated by a thread for the first request is determined. If a prior synchronous request has not been initiated, a synchronous request is initiated to acquire the resources. If the resources have not been pre-acquired for a second received request, an interest of a second thread in the first request is registered using the token. If the acquisition of the resources is successful, a thread is notified of the successful acquisition, and the interest of the second thread in the first request is unregistered. | 12-29-2016 |
20170235600 | SYSTEM AND METHOD FOR RUNNING APPLICATION PROCESSES | 08-17-2017 |
20170235601 | DYNAMICALLY ADAPTIVE, RESOURCE AWARE SYSTEM AND METHOD FOR SCHEDULING | 08-17-2017 |
20170235606 | SYSTEM AND METHODS FOR IMPLEMENTING CONTROL OF USE OF SHARED RESOURCE IN A MULTI-TENANT SYSTEM | 08-17-2017 |
20170235607 | METHOD FOR OPERATING SEMICONDUCTOR DEVICE AND SEMICONDUCTOR SYSTEM | 08-17-2017 |
20170235608 | AUTOMATIC RESPONSE TO INEFFICIENT JOBS IN DATA PROCESSING CLUSTERS | 08-17-2017 |
20170235609 | METHODS, SYSTEMS, AND DEVICES FOR ADAPTIVE DATA RESOURCE ASSIGNMENT AND PLACEMENT IN DISTRIBUTED DATA STORAGE SYSTEMS | 08-17-2017 |
20170235611 | PUSH SIGNALING TO RUN JOBS ON AVAILABLE SERVERS | 08-17-2017 |
20170235614 | VIRTUALIZING SENSORS | 08-17-2017 |
20170235616 | DISTRIBUTED LOAD PROCESSING USING SAMPLED CLUSTERS OF LOCATION-BASED INTERNET OF THINGS DEVICES | 08-17-2017 |
20170237809 | DIRECT ACCESS STORAGE DEVICE ANALYZER | 08-17-2017 |
20180024537 | SOFTWARE DEFINED AUTOMATION SYSTEM AND ARCHITECTURE | 01-25-2018 |
20180024859 | Performance Provisioning Using Machine Learning Based Automated Workload Classification | 01-25-2018 |
20180024860 | Technologies for Assigning Workloads Based on Resource Utilization Phases | 01-25-2018 |
20180024861 | TECHNOLOGIES FOR MANAGING ALLOCATION OF ACCELERATOR RESOURCES | 01-25-2018 |
20180024862 | PARALLEL PROCESSING SYSTEM, METHOD, AND STORAGE MEDIUM | 01-25-2018 |
20180024868 | COMPUTER RESOURCE ALLOCATION TO WORKLOADS IN AN INFORMATION TECHNOLOGY ENVIRONMENT | 01-25-2018 |
20180024951 | HETEROGENEOUS MULTI-PROCESSOR DEVICE AND METHOD OF ENABLING COHERENT DATA ACCESS WITHIN A HETEROGENEOUS MULTI-PROCESSOR DEVICE | 01-25-2018 |
20190146813 | Method for Controlling Application and Terminal Device | 05-16-2019 |
20190146835 | IMPLEMENTING COGNITIVE DYNAMIC LOGICAL PROCESSOR OPTIMIZATION SERVICE | 05-16-2019 |
20190146838 | INDEPENDENT STORAGE AND PROCESSING OF DATA WITH CENTRALIZED EVENT CONTROL | 05-16-2019 |
20190146839 | DISTRIBUTED DATA PLATFORM RESOURCE ALLOCATOR | 05-16-2019 |
20190146840 | COMPUTING RESOURCE ALLOCATION BASED ON NUMBER OF ITEMS IN A QUEUE AND CONFIGURABLE LIST OF COMPUTING RESOURCE ALLOCATION STEPS | 05-16-2019 |
20190146842 | Method and Apparatus for Allocating Computing Resources of Processor | 05-16-2019 |
20190146843 | Method for Allocating Processor Resources and Mobile Terminal | 05-16-2019 |
20190146845 | Lock Allocation Method and Apparatus, and Computing Device | 05-16-2019 |
20190146846 | Method for Controlling Application and Related Devices | 05-16-2019 |
20190146847 | DYNAMIC DISTRIBUTED RESOURCE MANAGEMENT | 05-16-2019 |
20190146848 | Adaptive Resource Management In Distributed Computing Systems | 05-16-2019 |
20190146849 | SCALABLE CLOUD-BASED TIME SERIES ANALYSIS | 05-16-2019 |
20190146851 | METHOD, DEVICE, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM FOR CREATING VIRTUAL MACHINE | 05-16-2019 |
20190146859 | TIMEOUT PROCESSING FOR MESSAGES | 05-16-2019 |
20190146997 | GENERATION OF JOB FLOW OBJECTS IN FEDERATED AREAS FROM DATA STRUCTURE | 05-16-2019 |
20190146998 | GENERATION OF JOB FLOW OBJECTS IN FEDERATED AREAS FROM DATA STRUCTURE | 05-16-2019 |
20220137986 | APPLICATION-BASED DYNAMIC HETEROGENEOUS MANY-CORE SYSTEMS AND METHODS - A method for dynamically configuring multiple processors based on needs of applications includes receiving, from an application, an acceleration request message including a task to be accelerated. The method further includes determining a type of the task and searching a database of available accelerators to dynamically select a first accelerator based on the type of the task. The method further includes sending the acceleration request message to a first acceleration interface located at a configurable processing circuit. The first acceleration interface sends the acceleration request message to a first accelerator, and the first accelerator accelerates the task upon receipt of the acceleration request message. | 05-05-2022 |
20220138003 | AUTOMATIC LOCALIZATION OF ACCELERATION IN EDGE COMPUTING ENVIRONMENTS - Methods, apparatus, systems and machine-readable storage media of an edge computing device which is enabled to access and select the use of local or remote acceleration resources for edge computing processing is disclosed. In an example, an edge computing device obtains first telemetry information that indicates availability of local acceleration circuitry to execute a function, and obtains second telemetry that indicates availability of a remote acceleration function to execute the function. An estimated time (and cost or other identifiable or estimateable considerations) to execute the function at the respective location is identified. The use of the local acceleration circuitry or the remote acceleration resource is selected based on the estimated time and other appropriate factors in relation to a service level agreement. | 05-05-2022 |
20220138008 | METHODS AND APPARATUS TO MANAGE RESOURCES IN A HYBRID WORKLOAD DOMAIN - Methods and apparatus to manage resources in a hybrid workload domain are disclosed. An example apparatus includes a usage monitor to monitor resource utilization of a workload allocated within a hybrid workload domain, and an orchestrator to: determine a first type of the workload domain in the hybrid workload domain; in response to determining that under-utilized resources of the first type are not available, identify resources of a second type that are available; convert the resources from the first type to the second type; and allocate the converted resources to the workload. | 05-05-2022 |
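The orchestrator logic above (if no under-utilized resource of the first type is available, identify an available resource of a second type, convert it, and allocate it) can be sketched with a toy resource pool. The pool shape and function names are invented for illustration:

```python
def allocate(pool: dict, wanted: str) -> str:
    """Allocate a resource of `wanted` type, converting another type if needed."""
    if pool.get(wanted, 0) > 0:
        pool[wanted] -= 1          # an under-utilized resource of the first type exists
        return wanted
    # None available: identify a resource of a second type that is available.
    for other, count in pool.items():
        if other != wanted and count > 0:
            pool[other] -= 1       # take the second-type resource...
            return wanted          # ...and hand it over converted to the first type
    raise RuntimeError("no resources available in the hybrid workload domain")
```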
20220138009 | INFORMATION PROCESSING APPARATUS, METHOD OF CONTROLLING INFORMATION PROCESSING APPARATUS, AND PROGRAM FOR CONTROLLING INFORMATION PROCESSING APPARATUS - A method, an apparatus and a medium storing a program for controlling an information processing apparatus that manages a plurality of processing nodes each including a buffer and a processor that processes data held in the buffer are disclosed. The method includes predicting a boundary between processed data and unprocessed data in the buffer at a predicted reaching time at which a resource load of a certain processing node during data processing will reach a predetermined amount; and transferring, in reverse processing order toward the boundary, the unprocessed data to another processing node that will take over the data processing. | 05-05-2022 |
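A toy rendering of the hand-off described above: estimate where the processed/unprocessed boundary in a node's buffer will sit at the predicted reaching time, then transfer the unprocessed tail to the takeover node in reverse order toward that boundary. The constant processing rate is an assumption; the abstract does not specify how the prediction is made:

```python
def predict_boundary(processed: int, rate_per_s: float, seconds_ahead: float,
                     buffer_len: int) -> int:
    """Index of the first unprocessed item at the predicted reaching time,
    assuming a constant processing rate (illustrative simplification)."""
    return min(buffer_len, processed + int(rate_per_s * seconds_ahead))

def transfer_unprocessed(buffer: list, boundary: int) -> list:
    """Hand the unprocessed data to the takeover node, last item first,
    moving in reverse processing order toward the boundary."""
    transferred = []
    for i in range(len(buffer) - 1, boundary - 1, -1):
        transferred.append(buffer[i])
    return transferred
```

Transferring the tail first means the original node can keep processing forward while the takeover node fills in from the far end.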
20220138010 | QUIESCENT STATE-BASED RECLAIMING STRATEGY FOR PROGRESSIVE CHUNKED QUEUE - A system includes a memory for storing a plurality of memory chunks and a processor for executing a plurality of producer threads. A producer thread increases a producer sequence and determines (i) a first chunk identifier associated with the producer sequence of an identified memory chunk and (ii) a position from the producer sequence to offer an item. The producer thread determines a second chunk identifier of a last created/appended memory chunk and determines whether the second chunk identifier is valid (e.g., matches the first chunk identifier). The producer thread reads a current memory chunk and determines whether a third chunk identifier associated with the current memory chunk is valid (e.g., matches the first chunk identifier). The producer thread writes the item into the identified memory chunk at the position. | 05-05-2022 |
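The producer-side offer path above can be sketched as follows. This is a single-threaded stand-in: the claimed system uses atomic sequence increments and chunk-identifier validation across concurrent producer threads, which the sketch collapses into plain arithmetic. The chunk size and class names are assumptions:

```python
CHUNK_SIZE = 4  # items per memory chunk (assumed)

class ChunkedQueue:
    def __init__(self):
        self.producer_seq = -1
        self.chunks = {0: [None] * CHUNK_SIZE}  # chunk identifier -> backing array

    def offer(self, item):
        # 1) Increase the producer sequence to claim a slot.
        self.producer_seq += 1
        seq = self.producer_seq
        # 2) Derive the chunk identifier and the position inside that chunk
        #    from the claimed sequence number.
        chunk_id, pos = divmod(seq, CHUNK_SIZE)
        # 3) Validate the identifier against existing chunks; append a new
        #    memory chunk when the last one does not match.
        if chunk_id not in self.chunks:
            self.chunks[chunk_id] = [None] * CHUNK_SIZE
        # 4) Write the item into the identified chunk at the derived position.
        self.chunks[chunk_id][pos] = item
        return chunk_id, pos
```

In the concurrent version, step 3 is where the identifier checks in the abstract matter: a producer must confirm the chunk it is about to write actually corresponds to its claimed sequence before writing.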
20220138012 | Computing Resource Scheduling Method, Scheduler, Internet of Things System, and Computer Readable Medium - Various embodiments of the teachings herein include a resource scheduling method comprising: receiving data to be processed collected by a sensor in an Internet of Things system; determining a processing priority of the data to be processed; predicting, according to the determined processing priority, a computing resource amount and duration required for processing the data to be processed; and scheduling a computing resource of an edge computing device in the IoT system according to the predicted computing resource amount and duration to process the data to be processed. | 05-05-2022 |
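A minimal sketch of the claimed flow: classify incoming sensor data by processing priority, predict the compute amount and duration from the priority class, and schedule an edge device with enough spare capacity. The priority table, the toy classifier, and the cloud fallback are all invented for illustration:

```python
PRIORITY_PROFILE = {
    # priority -> (predicted compute units, predicted duration in seconds)
    "high":   (8, 2),
    "normal": (4, 5),
    "low":    (1, 10),
}

def schedule(data_type: str, devices: dict) -> str:
    """Pick an edge device with enough free compute for the data's priority."""
    # Toy priority classifier: alarms are urgent, everything else is normal.
    priority = "high" if data_type == "alarm" else "normal"
    needed, _duration = PRIORITY_PROFILE[priority]
    for name, free_units in devices.items():
        if free_units >= needed:
            return name
    return "cloud"  # no edge device has capacity; fall back (assumption)
```

The predicted duration (unused here) would feed a real scheduler's reservation window, i.e. how long the selected device's units stay committed.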
20220138013 | WORKLOAD COMPLIANCE GOVERNOR SYSTEM - A workload compliance governor system includes a management system coupled to a computing system. A workload compliance governor subsystem in the computing system receives a workload performance request associated with a workload, exchanges hardware compose communications with the management system to compose hardware components for the workload, and receives back an identification of hardware components. The workload compliance governor subsystem then determines that the identified hardware components satisfy hardware compliance requirements for the workload, and configures the identified hardware components in the computing system based on software compliance requirements for the workload in order to cause those identified hardware components to provide an operating system and at least one application that operate to perform the workload. | 05-05-2022 |
20220138016 | METHODS AND APPARATUS TO STORE AND ACCESS MULTI-DIMENSIONAL DATA - Methods, apparatus, systems and articles of manufacture to store and access multi-dimensional data are disclosed. An example apparatus includes a memory; a memory allocator to allocate part of the memory for storage of a multi-dimensional data object; and a storage element organizer to: separate the multi-dimensional data into storage elements; store the storage elements in the memory, the stored storage elements being selectively executable; store starting memory address locations for the storage elements in an array in the memory, the array to facilitate selectable access of data of the stored elements; store a pointer for the array into the memory. | 05-05-2022 |
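The layout above (separate the object into storage elements, store them in memory, store their starting addresses in an array, and store a pointer to that array) can be illustrated with a flat list standing in for allocated memory. Per-row elements and the function names are illustrative assumptions:

```python
def store(matrix, memory):
    """Append each row as a storage element; return the address-array pointer."""
    starts = []
    for row in matrix:
        starts.append(len(memory))  # starting memory address of this element
        memory.extend(row)          # store the storage element itself
    array_ptr = len(memory)         # pointer to the array of starting addresses
    memory.extend(starts)
    return array_ptr, len(starts)

def load_element(memory, array_ptr, index, elem_len):
    """Selectively access one storage element via the address array."""
    start = memory[array_ptr + index]
    return memory[start:start + elem_len]
```

The address array is what makes access selective: any element can be fetched without scanning the elements stored before it.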
20220138019 | METHOD AND SYSTEM FOR PERFORMING WORKLOADS IN A DATA CLUSTER - A method for performing workloads is performed by a recommendation engine. The method includes obtaining, by the recommendation engine, a workload; generating workload features associated with the workload; obtaining hardware specification information associated with hardware of data nodes of a data cluster; determining compliant hardware configurations of the data cluster using the workload features, the hardware specification information, and a first machine learning model; generating performance predictions associated with the compliant hardware configurations using the workload features, a portion of the hardware specification information associated with the compliant hardware configurations, and a second machine learning model; generating, using the performance predictions, a recommendation that specifies a hardware configuration of the compliant hardware configurations; sending the recommendation to the data cluster; and initiating the performance of the workload on the hardware configuration. | 05-05-2022 |
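The two-model pipeline above reduces to: filter configurations through the first model (compliance), score the survivors with the second model (performance prediction), and recommend the best. Representing the models as plain callables is an assumption; the abstract does not specify the model types:

```python
def recommend(configs, is_compliant, predict_perf):
    """Filter compliant hardware configurations, then pick the best predicted one.

    is_compliant stands in for the first machine learning model,
    predict_perf for the second; both are illustrative callables.
    """
    compliant = [c for c in configs if is_compliant(c)]
    if not compliant:
        raise LookupError("no compliant hardware configuration found")
    # Recommend the configuration with the highest predicted performance.
    return max(compliant, key=predict_perf)
```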
20220138020 | COMPUTING ARCHITECTURE FOR OPTIMALLY EXECUTING SERVICE REQUESTS BASED ON NODE ABILITY AND INTEREST CONFIGURATION - The present disclosure relates to a system for executing a plurality of service requests (SRs) from corresponding plurality of user computing devices, the system comprising a distributed compute (DC) that forms part of a distributed network, the DC having at least one processor that executes one or more routines stored in an operatively coupled memory to enable receipt of the plurality of service requests in a heterogeneous interaction pool, wherein the DC further comprises a system state manager (SSM) that, based on at least one common attribute of each SR in the interaction pool, identifies an appropriate node (N) from one or more available nodes that has an ability and the attribute-based interest configuration to execute the respective SR, and transmits the respective SR to the identified node (N) for execution. | 05-05-2022 |