Entries |
Document | Title | Date |
20080201712 | Method and System for Concurrent Message Processing - A method and system are provided for concurrent message processing. The system includes: an input queue capable of receiving multiple messages in a given order; an intermediary for processing the messages; and an output queue for releasing the messages from the intermediary. Means are provided for retrieving a message from an input queue for processing at the intermediary and starting a transaction under which the message is to be processed. The intermediate logic processes the transactions in parallel and a transaction management means ensures that the messages are released to the output queue in the order of the messages in the input queue. | 08-21-2008 |
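The ordering guarantee this abstract describes — messages processed in parallel but released to the output queue in input order — can be sketched with futures. This is an illustrative reading of the idea only, not the patented system; the function name and use of a thread pool are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def process_in_order(messages, handler, workers=4):
    """Handle messages concurrently, but release results in input order.

    Each message is processed in its own worker (standing in for the
    per-message "transaction"); the output side waits on the futures in
    submission order, so release order matches input-queue order even
    when later messages finish first.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(handler, m) for m in messages]  # parallel processing
        return [f.result() for f in futures]                   # ordered release
```

Whatever order the workers complete in, the result list preserves the input order, which is the essential property claimed.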
20080209421 | SYSTEM AND METHOD FOR SUSPENDING TRANSACTIONS BEING EXECUTED ON DATABASES - A database management system managing one or more databases to suspend access to at least one selected database by one or more processes or applications (e.g., message processing programs, batch messaging programs, etc.). In some instances, the one or more databases may include one or more IMS databases. Access to the at least one selected database may be suspended to enable one or more operations to be performed on the at least one selected database by the database management system and/or an outside entity (e.g., a user, an external application, etc.). For example, the one or more operations may include an imaging operation, a loading operation, an unloading operation, a start operation, a stop operation, and/or other operations. In some instances, access to the at least one selected database may be suspended without canceling transactions being executed by the one or more processes or applications on the selected at least one database. | 08-28-2008 |
20080222639 | Method and System Configured for Facilitating Management of International Trade Receivables Transactions - A receivables transaction management platform is configured for facilitating management of international trade receivables transactions. The platform includes a task manager layer and a platform functionality layer. The task manager layer is configured for facilitating management of transaction information workflow tasks and export receivables tasks. The platform functionality layer is accessible by at least a portion of the managers and is configured for enabling facilitation of the transaction information workflow tasks and the export receivables tasks. Managing the transaction information workflow tasks and export receivables tasks includes facilitating preparation of a document and data portfolio required for settlement of an international trade receivables transaction, facilitating electronic submission of the document and data portfolio to a designated recipient and facilitating acceptance of the document and data portfolio. The platform functional components are configured for enabling user workflow functionality, data mapping functionality, data analysis functionality, data storage functionality and third party access functionality. | 09-11-2008 |
20080235686 | Method and apparatus for improving thread posting efficiency in a multiprocessor data processing system - A computer implemented method, a data processing system, and computer usable program code for improving thread posting efficiency in a multiprocessor data processing system are provided. Aspects of the present invention first receive a set of threads from an application. The aspects of the present invention then group the set of threads with a plurality of processors based on a last execution of the set of threads on the plurality of processors to form a plurality of groups. The threads in each group in the plurality of groups are all last executed on a same processor. The aspects of the present invention then wake up the threads in the plurality of groups in any order. | 09-25-2008 |
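The grouping step in the abstract above — collecting threads by the processor they last ran on, so each group can be woken together on a warm cache — can be sketched as a simple partition. This is an illustrative interpretation, not the patented code; the mapping-based input format is an assumption.

```python
from collections import defaultdict

def group_by_last_cpu(last_cpu):
    """Partition thread IDs by the processor each thread last executed on.

    `last_cpu` maps thread id -> CPU id of its last execution. All threads
    in a returned group last ran on the same processor, so a wake-up pass
    may post each group together.
    """
    groups = defaultdict(list)
    for tid, cpu in sorted(last_cpu.items()):  # sorted only for deterministic output
        groups[cpu].append(tid)
    return dict(groups)
```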
20080244583 | CONFLICTING SUB-PROCESS IDENTIFICATION METHOD, APPARATUS AND COMPUTER PROGRAM - A technique for identifying conflicting sub-processes easily in a computer system that processes a plurality of transactions in parallel is provided. | 10-02-2008 |
20080250411 | RULE BASED ENGINE FOR VALIDATING FINANCIAL TRANSACTIONS - A method and system for checking whether customer orders for transactions of financial instruments conform to business logic rules. Executable rule files are created and stored in a repository. New executable rule files can be created by scripting the new business logic rules in a script file which is converted into a corresponding source code file written in a computer programming language. The source code file is compiled to create an individual executable rule file. A rule selection repository contains identification of groups of selected executable rule files. The invention determines the category of the customer order and reads, from the rule selection repository, a group of executable rule files that correspond to the identified category of the customer order. The selected executable rule files are executed to check the conformance of the customer order. Execution results are stored in a status repository for subsequent retrieval and analysis. | 10-09-2008 |
20080256541 | METHOD AND SYSTEM FOR OPTIMAL BATCHING IN A PRODUCTION ENVIRONMENT - A method for processing a plurality of jobs in a production environment may include receiving a plurality of jobs and receiving one or more instructions into a workflow management system to process the plurality of jobs. The one or more instructions may include a setup characteristic. The method may also include clustering, by the workflow management system, the plurality of jobs into super-groups based on the setup characteristic, determining, by the workflow management system, a processing sequence based on the clustering, and processing the jobs according to the determined processing sequence. | 10-16-2008 |
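The clustering-then-sequencing idea in this abstract can be sketched by grouping jobs on their setup characteristic and emitting each super-group contiguously, so the setup is performed once per group. A minimal sketch under assumed names; the real system's sequencing criteria are not specified here.

```python
from collections import defaultdict

def batch_sequence(jobs, setup_of):
    """Cluster jobs into super-groups by setup characteristic, then return a
    processing sequence that runs each super-group back to back.

    `setup_of` extracts the setup characteristic (e.g. paper size) from a job.
    """
    groups = defaultdict(list)
    for job in jobs:
        groups[setup_of(job)].append(job)
    # One contiguous run per setup avoids repeated changeovers.
    return [job for setup in sorted(groups) for job in groups[setup]]
```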
20080276239 | RECOVERY AND RESTART OF A BATCH APPLICATION - A method of operating a data processing system comprises executing a batch application, the executing comprising reading one or more inputs from one or more data files, performing updates on one or more records according to the or each input read from a data file, and issuing a syncpoint when said updates are completed. During the execution of the batch application, syncpoints are periodically issued and checkpoints are less frequently issued. Following detection of a failure of the batch application, the batch application is restarted with the last issued checkpoint, and the batch application is executed by reading one or more inputs from one or more data files, but not performing updates on said records, until the last issued syncpoint is reached. | 11-06-2008 |
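The restart protocol in this abstract — resume from the last checkpoint, re-read inputs without applying updates until the last syncpoint, then process normally — can be sketched as follows. Positions are plain input indices here, a deliberate simplification of the log positions a real system would use; the function name is hypothetical.

```python
def restart_batch(inputs, checkpoint_pos, syncpoint_pos, apply_update):
    """Replay a failed batch run.

    Inputs before the last checkpoint are skipped entirely (the restart
    point); inputs between the checkpoint and the last syncpoint are
    re-read but their updates are NOT re-applied (they were committed
    before the failure); normal processing resumes afterwards.
    """
    applied = []
    for pos, record in enumerate(inputs):
        if pos < checkpoint_pos:
            continue                      # before the restart point
        if pos < syncpoint_pos:
            continue                      # re-read only; update already committed
        applied.append(apply_update(record))
    return applied
```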
20080282245 | Media Operational Queue Management in Storage Systems - A method for media operational queue management in disk storage systems evaluates a plurality of pending storage operations requiring a destage storage operation. A first set of the plurality of pending storage operations is organized in a first array queue grouping (AQG). The AQG is structured such that all of the storage operations are completed within a predefined latency period. A computer-implemented method manages a plurality of pending storage operations in a disk storage system. A pending operation queue is examined to determine a plurality of read and write operations for a first array. A first set of the plurality of read and write operations is grouped into a first array queue grouping (AQG). The first set of the plurality of read and write operations is sent to a redundant array of independent disks (RAID) controller adapter for processing. | 11-13-2008 |
20080295098 | System Load Based Dynamic Segmentation for Network Interface Cards | 11-27-2008 |
20080301682 | Inserting New Transactions Into a Transaction Stream - In an embodiment, a selection of an original transaction is received. In response to the selection of the original transaction, a call stack of the application that sends the original transaction during a learn mode of the application is saved. A specification of a new transaction and a location of the new transaction with respect to the original transaction in a transaction stream is received. During a production mode of the application, the original transaction is received from the application. A determination is made that the call stack of the application during the production mode matches the saved call stack of the application during the learn mode. In response to the determination, the new transaction is inserted at the location into a transaction stream that is sent to a database. | 12-04-2008 |
20080307417 | Document registration system, information processing apparatus, and computer usable medium therefor - A document registration system for registering a plurality of electronic documents is provided. The document registration system includes an information processing apparatus having a display unit and a storage unit. The information processing apparatus is provided with a registration unit, which can be operated to perform a registration process to register the electronic documents in an interactive processing mode, wherein the electronic documents are registered manually, and in a batch processing mode, wherein the electronic documents are registered automatically in a batch, and a first switching unit to mutually switch activation of the interactive processing mode and the batch processing mode. | 12-11-2008 |
20080307418 | Enabling and Disabling Byte Code Inserted Probes Based on Transaction Monitoring Tokens - A method of enabling transaction probes used to monitor a transaction or modify a primary application handling the transaction. The method begins with retrieving a token associated with the transaction. The token contains information regarding which transaction probes from a plurality of transaction probes will be enabled with respect to the transaction. The token is then read to determine the set of transaction probes from the plurality of transaction probes that will be enabled. The determined set of transaction probes is then enabled. | 12-11-2008 |
20080320476 | VARIOUS METHODS AND APPARATUS TO SUPPORT OUTSTANDING REQUESTS TO MULTIPLE TARGETS WHILE MAINTAINING TRANSACTION ORDERING - A method, apparatus, and system are described, which generally relate to an integrated circuit having an interconnect that implements internal controls. The interconnect in an integrated circuit communicates transactions between initiator Intellectual Property (IP) cores and target IP cores coupled to the interconnect. The interconnect implements logic configured to support multiple transactions issued from a first initiator IP core to the multiple target IP cores while maintaining an expected execution order within the transactions. The logic supports a second transaction to be issued from the first initiator IP core to a second target IP core before a first transaction issued from the same first initiator IP core to a first target IP core has completed while ensuring that the first transaction completes before the second transaction and while ensuring an expected execution order within the first transaction and second transaction are maintained. The logic does not include any reorder buffering. | 12-25-2008 |
20090007118 | Native Virtualization on a Partially Trusted Adapter Using PCI Host Bus, Device, and Function Number for Identification - A mechanism that allows a single physical I/O adapter, such as a PCI, PCI-X, or PCI-E adapter, to perform I/O transactions using the PCI host bus, device, and function numbers to validate that an I/O transaction originated from the proper host is provided. Additionally, a method for facilitating identification of a transaction source partition is provided. An input/output transaction that is directed to a physical adapter is originated from a system image of a plurality of system images. The host data processing system adds an identifier of the system image to the input/output transaction. The input/output transaction is then conveyed to the physical adapter for processing of the input/output transaction. | 01-01-2009 |
20090024997 | Batch processing apparatus - There are provided a batch processing apparatus and a batch processing method capable of significantly reducing the burden on a system designer, a system administrator, and an operator operating the system, as well as significantly reducing the development cost. The batch processing apparatus acquires, from a predetermined repository in which it is stored and registered in advance, metadata defined as information on at least a data item name, input, processing content, and output; inputs input data according to a declaration process of the acquired metadata; creates output data by processing the input data; and outputs the output data. When the metadata changes, the batch processing apparatus changes all of the output data related to that metadata accordingly. | 01-22-2009 |
20090024998 | INITIATION OF BATCH JOBS IN MESSAGE QUEUING INFORMATION SYSTEMS - A method, system, and computer program product for initiating batch jobs in a message queuing information system are provided. The method, system, and computer program product provide for monitoring a message queue in the message queuing information system, detecting a predetermined condition in the message queue, determining whether a member name is associated with the predetermined condition, determining whether a server is available responsive to a member name being associated with the predetermined condition, and sending the member name to the server for the server to attach a batch job to load or unload one or more messages in the message queue based on information included in the member name responsive to a server being available. | 01-22-2009 |
20090031308 | Method And Apparatus For Executing Multiple Simulations on a Supercomputer - A supercomputer processing system is provided that is configured to execute a plurality of simulations through transaction processing. The supercomputer processing system includes a supercomputer configured to execute a first simulation of the plurality of simulations and generate an output based upon execution of the first simulation, and a transaction hub. The transaction hub includes a relational database configured to store the output of the first simulation, and an application server having a service-oriented architecture (SOA) that supports an event triggering service. The event triggering service is configured to detect the output of the first simulation and automatically trigger the supercomputer to execute a second simulation of the plurality of simulations using the output of the first simulation stored in the relational database. | 01-29-2009 |
20090031309 | System and Method for Split Hardware Transactions - A split hardware transaction may split an atomic block of code to be executed using multiple hardware transactions, while logically taking effect as a single atomic transaction. A split hardware transaction may use software to combine the multiple hardware transactions into one logically atomic operation. In some embodiments, a split hardware transaction may allow execution of atomic blocks including non-hardware-transactionable (NHT) operations without resorting to exclusively software transactions. A split hardware transaction may maintain a thread-local buffer that logs all memory accesses performed by the split hardware transaction. A split hardware transaction may use a hardware transaction to copy values read from shared memory locations into a local memory buffer. To execute a non-hardware-transactionable operation, the split hardware transaction may commit the active hardware transaction, execute the non-hardware-transactionable operation, and then initiate a new hardware transaction to execute the rest of the atomic block. | 01-29-2009 |
20090031310 | System and Method for Executing Nested Atomic Blocks Using Split Hardware Transactions - Split hardware transaction techniques may support execution of serial and parallel nesting of code within an atomic block to an arbitrary nesting depth. An atomic block including child code sequences nested within a parent code sequence may be executed using separate hardware transactions for each child, but the execution of the parent code sequence, the child code sequences, and other code within the atomic block may appear to have been executed as a single transaction. If a child transaction fails, it may be retried without retrying the parent code sequence or other child code sequences. Before a child transaction is executed, a determination of memory consistency may be made. If a memory inconsistency is detected, the child transaction may be retried or control may be returned to its parent. Memory inconsistencies between parallel child transactions may be resolved by serializing their execution before retrying at least one of them. | 01-29-2009 |
20090031311 | PROCESSING TECHNIQUES FOR SERVERS HANDLING CLIENT/SERVER TRAFFIC AND COMMUNICATIONS - The present invention relates to a system for handling client/server traffic and communications pertaining to the delivery of hypertext information to a client. The system includes a central server which processes a request for a web page from a client. The central server is in communication with a number of processing/storage entities, such as an annotation means, a cache, and a number of servers which provide identification information. The system operates by receiving a request for a web page from a client. The cache is then examined to determine whether information for the requested web page is available. If such information is available, it is forwarded promptly to the client for display. Otherwise, the central server retrieves the relevant information for the requested web page from the pertinent server. The relevant information is then processed by the annotation means to generate additional relevant computer information that can be incorporated to create an annotated version of the requested web page which includes additional displayable hypertext information. The central server then relays the additional relevant computer information to the client so as to allow the annotated version of the requested web page to be displayed. In addition, the central server can update the cache with information from the annotated version. The central server can also interact with different servers to collect and maintain statistical usage information. In handling its communications with various processing/storage entities, the operating system running behind the central server utilizes a pool of persistent threads and an independent task queue to improve the efficiency of the central server. A task needs to have a thread assigned to it before the task can be executed. The pool of threads is continually maintained and monitored by the operating system.
Whenever a thread is available, the operating system identifies the next executable task in the task queue and assigns the available thread to such task so as to allow it to be executed. Upon conclusion of the task execution, the assigned thread is released back into the thread pool. An additional I/O queue for specifically handling input/output tasks can also be used to further improve the efficiency of the central server. | 01-29-2009 |
20090037913 | METHODS AND SYSTEMS FOR COORDINATED TRANSACTIONS - Automated techniques are disclosed for coordinating request or transaction processing in a data processing system. For example, a technique for handling requests in a data processing system comprises the following steps. A compound request comprising at least two individual requests of different types is received. An individual request r | 02-05-2009 |
20090037914 | AUTOMATIC CONFIGURATION OF ROBOTIC TRANSACTION PLAYBACK THROUGH ANALYSIS OF PREVIOUSLY COLLECTED TRAFFIC PATTERNS - A system and method which accesses or otherwise receives collected performance data for at least one server application, where the server application is capable of performing a plurality of transactions with client devices and the client devices are geographically dispersed from the server in known geographical locales; which automatically determines from the performance data which of the transactions are utilized by users of the client devices; which selects utilized transactions according to at least one pre-determined selection criterion; which automatically generates a transaction playback script for each of the selected transactions, substituting test information in place of user-supplied or user-unique information in the transactions; which designates each script for execution from a geographical locale corresponding to the locale of the clients which execute said utilized transactions; which deploys the playback scripts to robotic agents geographically co-located with client devices according to the locale designation; and which executes the playback scripts from the robotic agents in order to exercise the server application across similar network topologies and under realistic conditions. | 02-05-2009 |
20090037915 | Staging block-based transactions - In one embodiment, the present invention includes a method for converting a write request from a file system transaction to a transaction record, forwarding the transaction record to a non-volatile storage for storage, where the transaction record has a different protocol than the file system transaction, and later forwarding it to the target storage. Other embodiments are described and claimed. | 02-05-2009 |
20090049444 | Service request execution architecture for a communications service provider - A service request execution architecture promotes acceptance and use of self-service provisioning by consumers, leading to increased revenue and cost savings for the service provider as consumers order additional services. The architecture greatly reduces the technical burden of managing exceptions that occur while processing requests for services. The architecture accelerates the process of fulfilling requests for services by efficiently and effectively reducing the system resources needed to process exceptions by eliminating redundant exceptions corresponding to related service requests. | 02-19-2009 |
20090055824 | TASK INITIATOR AND METHOD FOR INITIATING TASKS FOR A VEHICLE INFORMATION SYSTEM - Information about a device may be emotively conveyed to a user of the device. Input indicative of an operating state of the device may be received. The input may be transformed into data representing a simulated emotional state. Data representing an avatar that expresses the simulated emotional state may be generated and displayed. A query from the user regarding the simulated emotional state expressed by the avatar may be received. The query may be responded to. | 02-26-2009 |
20090064147 | TRANSACTION AGGREGATION TO INCREASE TRANSACTION PROCESSING THROUGHPUT - Provided are techniques for increasing transaction processing throughput. A transaction item with a message identifier and a session identifier is obtained. The transaction item is added to the earliest aggregated transaction in a list of aggregated transactions in which no other transaction item has the same session identifier. A first aggregated transaction in the list of aggregated transactions that has met execution criteria is executed. In response to determining that the aggregated transaction is not committing, the aggregated transaction is broken up into multiple smaller aggregated transactions, and a target size of each aggregated transaction is adjusted based on measurements of system throughput. | 03-05-2009 |
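The placement rule in this abstract — add each item to the earliest aggregated transaction containing no other item with the same session identifier — can be sketched directly. This covers only the placement step (not execution, break-up, or size tuning), and the list-of-lists representation is an assumption.

```python
def add_to_aggregates(aggregates, message_id, session_id):
    """Add (message_id, session_id) to the earliest aggregated transaction
    that has no item with the same session identifier; open a new
    aggregate when every existing one already contains that session."""
    for agg in aggregates:
        if all(sid != session_id for _, sid in agg):
            agg.append((message_id, session_id))
            return aggregates
    aggregates.append([(message_id, session_id)])
    return aggregates
```

Two items from the same session never share an aggregate, which preserves per-session ordering when aggregates commit as units.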
20090064148 | Linking Transactions with Separate Systems - Methods and apparatuses enable linking stateful transactions with multiple separate systems. The first and second stateful transactions are associated with a transaction identifier. Real time data from each of the multiple systems is concurrently presented within a single operation context to provide a transparent user experience. Context data may be passed from one system to another to provide a context in which operations in the separate systems can be linked. | 03-05-2009 |
20090064149 | Latency coverage and adoption to multiprocessor test generator template creation - A multi-core multi-node processor system has a plurality of multiprocessor nodes, each including a plurality of microprocessor cores. The plurality of microprocessor nodes and cores are connected and form a transactional communication network. The multi-core multi-node processor system has further one or more buffer units collecting transaction data relating to transactions sent from one core to another core. An agent is included which calculates latency data from the collected transaction data, processes the calculated latency data to gather transaction latency coverage data, and creates random test generator templates from the gathered transaction latency coverage data. The transaction latency coverage data indicates at least the latencies of the transactions detected during collection of the transaction data having a pre-determined latency, and includes, for example, four components for transaction type latency, transaction sequence latency, transaction overlap latency, and packet distance latency. Thus, random test generator templates may be created using latency coverage. | 03-05-2009 |
20090064150 | Process Manager - A process manager ( | 03-05-2009 |
20090077554 | APPARATUS, SYSTEM, AND METHOD FOR DYNAMIC ADDRESS TRACKING - An apparatus, system, and method are disclosed for dynamic address tracking. A token module creates a token for a job that accesses data in a storage system comprising a plurality of storage devices. The token comprises a job name. The job is a batch job. A storage module stores location information for the data accessed by the job in a token table. The location information is indexed by the token. In addition, the location information includes an input/output device name, an address space, a data set name, and a storage device name. A communication module receives a diagnostic command comprising the job name. The token module reconstructs the token using the job name. The storage module retrieves the location information indexed by the token in response to the diagnostic command. | 03-19-2009 |
20090077555 | TECHNIQUES FOR IMPLEMENTING SEPARATION OF DUTIES USING PRIME NUMBERS - A technique for implementing separation of duties for transactions includes determining a current task assignment number of an entity. The technique also includes determining whether the entity can perform a new task based upon the current task assignment number and a task transaction number (which is based on at least one prime number) assigned to the new task. | 03-19-2009 |
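One plausible reading of the prime-number scheme above (the abstract does not spell out the arithmetic): each duty class maps to a prime, conflicting tasks share a prime factor, and an entity may take a new task only when its current task assignment number is coprime with the new task's transaction number. Both function names are hypothetical.

```python
from math import gcd

def may_perform(assignment_number, task_number):
    """Separation-of-duties check: permitted only if the entity's current
    assignment number shares no prime factor with the task's number."""
    return gcd(assignment_number, task_number) == 1

def record_task(assignment_number, task_number):
    """Fold a performed task into the entity's assignment number by
    multiplication, accumulating its prime factors."""
    return assignment_number * task_number
```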
20090077556 | IMAGE MEDIA MODIFIER - A method and apparatus for back-end processing a recordable media production job after it has been generated and sent to a recordable media production system is described. The method intercepts the image file generation at a low level within the recordable media production system and allows for addition, deletion, and modification of the underlying data files and/or modification of the production job itself under control of an external user-defined process, such as an application, DLL, script, or plug-in. This interception of the image file generation occurs before the final image is assembled and handed off to the media recorder/producer to be written to the recordable media, and is invoked at multiple stages of reading the production job edit list, allowing changes to occur at each stage of the imaging or pre-mastering (file system creation) process. | 03-19-2009 |
20090083739 | NETWORK RESOURCE ACCESS CONTROL METHODS AND SYSTEMS USING TRANSACTIONAL ARTIFACTS - Methods and systems are provided for use with digital data processing systems to control or otherwise limit access to networked resources based, at least in part, on transactional artifacts and/or derived artifacts. | 03-26-2009 |
20090106758 | File system reliability using journaling on a storage medium - Improving file system reliability in storage mediums after a data-corrupting event using file system journaling is described. In one embodiment, a method includes scanning beyond an active-transactions region within the file system journal to locate additional valid transactions for replay, bringing the storage medium into a consistent state; the scanning is performed until an invalid transaction is reached. | 04-23-2009 |
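The scan described in this abstract — continue past the active region of the journal, collecting further valid transactions for replay, and stop at the first invalid one — can be sketched as a bounded loop. A minimal sketch under assumed names; real journals use on-disk records and checksums rather than a Python list.

```python
def extra_replayable(journal, active_end, is_valid):
    """Scan beyond the active-transactions region of a journal, returning
    additional valid transactions to replay; stop at the first invalid
    transaction encountered."""
    extra = []
    for txn in journal[active_end:]:
        if not is_valid(txn):
            break                 # invalid record terminates the scan
        extra.append(txn)
    return extra
```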
20090113430 | HARDWARE DEVICE INTERFACE SUPPORTING TRANSACTION AUTHENTICATION - A hardware device interface supporting transaction authentication is described herein. At least some illustrative embodiments include a device, including an interconnect interface, and processing logic (coupled to the bus interface) that provides access to a plurality of functions of the device through the interconnect interface. A first transaction received by the device, and associated with a function of the plurality of functions, causes a request identifier within the first transaction to be assigned to the function. Access to the function is denied if a request identifier of a second transaction, subsequent to the first transaction, does not match the request identifier assigned to the function. | 04-30-2009 |
20090113431 | METHOD FOR DETERMINING PARTICIPATION IN A DISTRIBUTED TRANSACTION - A method and system for determining whether a plurality of participants who are participating in a distributed transaction have registered their intention to commit their part of the transaction with a transaction manager, the method comprising the steps of: receiving a message from a participant, the message comprising a character sequence identifying the participant and the part of the transaction which the participant is processing; analyzing the character sequence to determine whether the character sequence further comprises an identifier for identifying whether a subsequent message is to be received by a second participant; and in dependence on the identifier identifying that there are no further subsequent messages to be received, informing each of the participants to commit their part of the transaction. | 04-30-2009 |
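The character-sequence analysis above — each participant's message identifies the participant and its part of the transaction, and a marker indicates no further messages will follow — can be sketched with a simple parser. The `participant:part[:LAST]` format is a hypothetical stand-in; the patent does not specify the sequence's layout.

```python
def check_registrations(messages, terminator="LAST"):
    """Parse 'participant:part[:LAST]' registration messages.

    Returns (commit, registered): once a message carries the terminator
    (no subsequent messages expected), all registered participants may be
    informed to commit their parts of the transaction.
    """
    registered = []
    for msg in messages:
        fields = msg.split(":")
        registered.append((fields[0], fields[1]))
        if terminator in fields[2:]:
            return True, registered
    return False, registered
```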
20090119667 | METHOD AND APPARATUS FOR IMPLEMENTING TRANSACTION MEMORY - A method and apparatus for implementing transactional memory (TM). The method includes: allocating a hardware-based transaction footprint recorder to the transaction, for recording footprints of the transaction when a transaction is begun; determining that the transaction is to be switched out; and switching out the transaction, where the footprints of the switched-out transaction are still kept in the hardware-based transaction footprint recorder. According to the present invention, transaction switching is supported by TM, and the cost of conflict detection between an active transaction and a switched-out transaction is greatly reduced since the footprints of the switched-out transaction are still kept in the hardware-based transaction footprint recorder. | 05-07-2009 |
20090125906 | METHODS AND APPARATUS TO EXECUTE AN AUXILIARY RECIPE AND A BATCH RECIPE ASSOCIATED WITH A PROCESS CONTROL SYSTEM - Example methods and apparatus to execute an auxiliary recipe and a batch recipe are disclosed. A disclosed example method involves executing a first recipe and, before completion of execution of the first recipe, receiving an auxiliary recipe. The example method also involves determining whether the first recipe has reached an entry point at which the auxiliary recipe can be executed. The auxiliary recipe is then executed in response to determining that the first recipe has reached the entry point. | 05-14-2009 |
20090125907 | SYSTEM AND METHOD FOR THREAD HANDLING IN MULTITHREADED PARALLEL COMPUTING OF NESTED THREADS - An Explicit Multi-Threading (XMT) system and method is provided for processing multiple spawned threads associated with SPAWN-type commands of an XMT program. The method includes executing a plurality of child threads by a plurality of TCUs including a first TCU executing a child thread which is allocated to it; completing execution of the child thread by the first TCU; announcing that the first TCU is available to execute another child thread; executing by a second TCU a parent child thread that includes a nested spawn-type command for spawning additional child threads of the plurality of child threads, wherein the parent child thread is related in a parent-child relationship to the child threads that are spawned in conjunction with the nested spawn-type command; assigning a thread ID (TID) to each child thread, wherein the TID is unique with respect to the other TIDs; and allocating a new child thread to the first TCU. | 05-14-2009 |
20090158280 | Automated Execution of Business Processes Using Dual Element Events - Systems and methods for providing interaction of multiple business process events by using management and transactional events, where the management event accepts initial transaction information, maintains state information, and initiates one or more of the transactional events. One of the transactional events receives initial transactional information and state information from the management event, performs a transaction based upon the initial transactional information and the state information, and provides resulting transactional information to the management event. The management event then completes execution of the business process based upon the resulting transactional information. | 06-18-2009 |
20090158281 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - A disclosed information processing apparatus includes data processing components configured to process data; a workflow execution unit configured to request the data processing components to process the data according to a workflow defining processes to be executed to process the data and the order of the processes; and a processing component selection unit. The processing component selection unit is configured to receive a request to execute an undefined process not defined in the workflow from the workflow execution unit, the request including identification information for identifying the undefined process; to search a list for the identification information, the list associating the identification information with one of the data processing components for executing the undefined process; and if the identification information is found in the list, to request the workflow execution unit to request the one of the data processing components associated with the identification information to process the data. | 06-18-2009 |
20090164998 | Management of speculative transactions - Circuitry for receiving transaction requests from a plurality of masters, and the masters themselves, are disclosed. The circuitry comprises: an input port for receiving said transaction requests, at least one of said transaction requests received comprising an indicator indicating whether said transaction is a speculative transaction; an output port for outputting a response to the master from which said transaction request was received; and transaction control circuitry; wherein said transaction control circuitry is responsive to a speculative transaction request to determine a state of at least a portion of a data processing apparatus within which said circuitry is operating, and in response to said state being a predetermined state said transaction control circuitry generates a transaction cancel indicator and outputs said transaction cancel indicator as said response, said transaction cancel indicator indicating to said master that said speculative transaction will not be performed. | 06-25-2009 |
20090164999 | Job execution system, portable terminal apparatus, job execution apparatus, job data transmission and receiving methods, and recording medium - A job execution system has a portable terminal apparatus and a job execution apparatus capable of being interconnected. Job data stored in a storage of the portable terminal apparatus is automatically transmitted to the job execution apparatus, if establishment of a connection between the portable terminal apparatus and the job execution apparatus is detected on the portable terminal apparatus, or alternatively, if establishment of a connection between the portable terminal apparatus and the job execution apparatus is detected on the job execution apparatus and then a request for the job data is transmitted to the portable terminal apparatus from the job execution apparatus. | 06-25-2009 |
20090172673 | METHOD AND SYSTEM FOR MANAGING TRANSACTIONS - A method and system for managing transactions is provided. A transaction is initiated on a first data by a first entity with the first data being comprised in a basis memory. A change in the first data is moved as a second data to a transaction memory. The second data is read from the transaction memory if a request for reading the first data is received from the first entity. The first data is read from the basis memory if the request for reading the first data is received from a second entity. The write access of the second entity to the first data is locked. | 07-02-2009 |
20090172674 | MANAGING THE COMPUTER COLLECTION OF INFORMATION IN AN INFORMATION TECHNOLOGY ENVIRONMENT - The collection of information in an Information Technology environment is dynamically managed. Processing associated with a batch of requests executed to obtain information is adjusted in real-time based on whether responses to the requests executed within an allotted time frame were received. The adjustments may include adjusting the time allotted to execute a batch of requests, adjusting the number of requests in a batch, and/or adjusting the execution priority of the requests within a batch. | 07-02-2009 |
20090172675 | Re-Entrant Atomic Signaling - Systems for context switching a requestor engine during an atomic process without corrupting the atomic process. Typically an atomic process cannot be interrupted prior to completion; if it is interrupted, the process will terminate abnormally, resulting in a corrupted transaction. Systems that allow for a controlled interruption of an atomic process, without corruption, with subsequent context switching are presented. The system consists of a context-switchable requestor engine, a context switch controller, a shared resource synchronizer, and a shared resource system. The system may also contain multiple local and remote context-switchable requestor engines as well as multiple local and remote shared resource systems. A method for context switching a requestor engine during an atomic process without corrupting the atomic process is also presented. | 07-02-2009 |
20090172676 | Conditional batch buffer execution - A batch computer or batch processor may implement conditional execution at the command level of the batch processor or higher. Conditional execution may involve execution of one batch buffer depending on the results achieved upon execution by another batch buffer. | 07-02-2009 |
20090172677 | Efficient State Management System - The present invention provides an efficient state management system for a complex ASIC, and applications thereof. In an embodiment, a computer-based system executes state-dependent processes. The computer-based system includes a command processor (CP) and a plurality of processing blocks. The CP receives commands in a command stream and manages a global state responsive to global context events in the command stream. The plurality of processing blocks receive the commands in the command stream and manage respective block states responsive to block context events in the command stream. Each respective processing block executes a process on data in a data stream based on the global state and the block state of the respective processing block. | 07-02-2009 |
20090172678 | Method And System For Controlling The Functionality Of A Transaction Device - A method and system for controlling the functionality of a transaction device are provided. The method includes providing a computing device for accessing an account corresponding to the transaction device. The computing device generates a list of a plurality of transaction functions associated with the transaction device. The method includes providing an option to disable and enable one or more of the transaction functions in response to a user input. In response to the user input disabling one of the transaction functions, an instruction is generated preventing the transaction device from being used for the disabled transaction function. | 07-02-2009 |
20090178042 | Managing A Workload In A Database - Described herein is a workload manager for managing a workload in a database that includes: an admission controller operating to divide the workload into a plurality of batches, with each batch having at least one workload process to be performed in the database, and each batch having a memory requirement based on the available memory for processing workloads in the database; a scheduler operating to assign a unique priority to each of the at least one workload process in each of the plurality of batches, the unique priority provides an order in which each workload process is executed in the database; and an execution manager operating to execute the at least one workload process in each of the plurality of batches in accordance with the unique priority assigned to each workload process. | 07-09-2009 |
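The batch division and unique-priority assignment described above might be sketched like this. It is a simplified illustration under assumed inputs: the greedy memory split in `make_batches` and the process tuples are inventions for the example, not taken from the patent.

```python
def make_batches(processes, memory_budget):
    """Greedily split (name, memory) workload processes into batches
    whose total memory requirement fits the available budget."""
    batches, current, used = [], [], 0
    for name, mem in processes:
        if current and used + mem > memory_budget:
            batches.append(current)      # close the full batch
            current, used = [], 0
        current.append((name, mem))
        used += mem
    if current:
        batches.append(current)
    return batches

procs = [("load", 40), ("index", 30), ("report", 50), ("purge", 20)]
batches = make_batches(procs, 70)

# A unique priority per process: its global execution order across batches.
priorities = {name: i for i, (name, _) in
              enumerate(p for b in batches for p in b)}
```

An execution manager would then run each batch's processes in ascending priority order, as the abstract's scheduler/execution-manager split suggests.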
20090187906 | SEMI-ORDERED TRANSACTIONS - Embodiments of the present invention provide a system that facilitates transactional execution in a processor. The system starts by executing program code for a thread in a processor. Upon detecting a predetermined indicator, the system starts a transaction for a section of the program code for the thread. When starting the transaction, the system executes a checkpoint instruction. If the checkpoint instruction is a WEAK_CHECKPOINT instruction, the system executes a semi-ordered transaction. During the semi-ordered transaction, the system preserves code atomicity but not memory atomicity. Otherwise, the system executes a regular transaction. During the regular transaction, the system preserves both code atomicity and memory atomicity. | 07-23-2009 |
20090187907 | RECORDING MEDIUM IN WHICH DISTRIBUTED PROCESSING PROGRAM IS STORED, DISTRIBUTED PROCESSING APPARATUS, AND DISTRIBUTED PROCESSING METHOD - A master calculator assigns a series of processing groups to a communicable worker calculator. The master receives information about an execution time and a waiting time from the worker calculator for the series of processing groups. The master acquires the time elapsed between transmitting a processing group to the worker calculator and receiving the execution result of the processing group from the worker calculator. The master calculates the communication time required for communication with the worker calculator on the basis of the information received and the elapsed time acquired. The master calculates the number of processings to be assigned to the worker calculator on the basis of the communication time calculated. The master generates a processing group to be assigned to the worker calculator on the basis of the number of processings calculated, and transmits the generated processing group to the worker calculator. | 07-23-2009 |
20090193420 | METHOD AND SYSTEM FOR BATCH PROCESSING FORM DATA - The input and batch processing of data for insertion in a database are described. In one aspect of the invention, processing input data includes receiving data for insertion into a database, the data including data fields holding data entries. At least one of the data fields is determined to be a standard field having a standard data entry, and at least one different data field is determined to have been designated a batch mode field, where each batch mode field has a plurality of associated batch mode data entries. A data record is created for each batch mode data entry of the batch mode field, where each data record includes a different batch mode data entry, and each data record includes a copy of the standard data entry. | 07-30-2009 |
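The record expansion described above — one record per batch-mode entry, each carrying a copy of the standard entries — can be sketched as follows. The field names, and the convention that a list value marks a batch mode field, are assumptions made for the example.

```python
def expand_batch_records(fields):
    """Create one data record per batch-mode entry, copying the
    standard entries into every record."""
    standard = {k: v for k, v in fields.items() if not isinstance(v, list)}
    batch = {k: v for k, v in fields.items() if isinstance(v, list)}
    records = []
    for field, entries in batch.items():
        for entry in entries:
            record = dict(standard)   # copy of the standard data entries
            record[field] = entry     # one batch-mode entry per record
            records.append(record)
    return records

rows = expand_batch_records({"project": "alpha", "owner": ["ann", "bob"]})
```

Here `owner` is the batch mode field, so two records are produced, each repeating the standard `project` entry.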
20090193421 | Method For Determining The Impact Of Resource Consumption Of Batch Jobs Within A Target Processing Environment - Exemplary embodiments of the present invention provide a solution that comprises the capability to dispatch jobs to target system according to the declared resource consumption by providing a way for automatically calculating the resource consumption at a target processing system. The algorithmic solution provided can also be utilized by standalone reporting tools to calculate resource consumption offline and show resource impact based upon database query results in the event that data samples are available. The solution provided by exemplary embodiments of the present invention is obtained by reducing the resource consumption problem to an optimization problem involving a set of linear equations. | 07-30-2009 |
20090193422 | UNIVERSAL SERIAL BUS DRIVING DEVICE AND METHOD - A universal serial bus (USB) driving device electrically coupled to a data receiver is configured for driving a USB to forward data requests from the data receiver to a data transmitter for processing the data requests. The USB driving device may preset a maximum active transaction number, initialize an active transaction number, and determine if the active transaction number is less than the maximum active transaction number. The USB driving device may drive the USB to forward a data request from the data receiver to the data transmitter if the active transaction number is less than the maximum active transaction number and increase the active transaction number after the USB driving device forwards a data request to the data transmitter. A USB driving method is also provided. | 07-30-2009 |
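The counter-gated forwarding in this abstract — preset a maximum active transaction number, forward only while the active count is below it, and increment on each forward — reduces to a small amount of bookkeeping. The sketch below is illustrative only; the class and method names are assumptions, and real USB driving involves far more than this.

```python
class TransactionGate:
    """Caps the number of in-flight (active) forwarded requests."""

    def __init__(self, max_active):
        self.max_active = max_active   # preset maximum active transactions
        self.active = 0                # initialized active transaction count
        self.forwarded = []

    def forward(self, request):
        if self.active < self.max_active:
            self.forwarded.append(request)
            self.active += 1           # incremented after forwarding
            return True
        return False                   # caller must wait for a completion

    def complete(self):
        self.active -= 1               # a transaction finished downstream

gate = TransactionGate(max_active=2)
results = [gate.forward(r) for r in ("r1", "r2", "r3")]  # third is refused
gate.complete()                        # one slot frees up
late = gate.forward("r4")              # now accepted
```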
20090199187 | CONCURRENT EXECUTION OF MULTIPLE PRIMITIVE COMMANDS IN COMMAND LINE INTERFACE - A method to concurrently execute multiple primitive commands in a command line interface (CLI) is provided. Each of a plurality of signal parameters is designated for each of a plurality of primitive commands. The plurality of primitive commands is encapsulated into a header CLI command. The CLI command is executed. | 08-06-2009 |
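The encapsulation step described above — designating a signal parameter per primitive command and wrapping the primitives into one header CLI command that is executed as a unit — might look like the following. The header format and field names are assumptions for the example, not the patent's encoding.

```python
def encapsulate(primitives):
    """Wrap primitive commands and their designated signal parameters
    into a single header command (format is illustrative)."""
    return {"type": "header", "count": len(primitives), "commands": primitives}

def execute(header_cmd):
    """Execute every encapsulated primitive in one pass, returning a
    rendered invocation per primitive."""
    return [f"{c['name']}({c['signal']})" for c in header_cmd["commands"]]

hdr = encapsulate([{"name": "reset", "signal": "sig0"},
                   {"name": "status", "signal": "sig1"}])
out = execute(hdr)
```

Submitting `hdr` once stands in for the abstract's single CLI command that carries several primitives.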
20090199188 | INFORMATION PROCESSING SYSTEM, COMPUTER READABLE RECORDING MEDIUM, INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD AND COMPUTER DATA SIGNAL - An information processing system includes: an administrator command restricting execution unit that executes an administrator command with a restriction, when a user not having administrative authority requests execution of the administrator command that can be executed by an administrator having the administrative authority; an execution history memory that stores the execution history of the administrator command executed by the administrator command restricting execution unit; and a state changing unit that, upon receipt of an acceptance of the execution history, puts the result of execution of the administrator command shown in the execution history, as executed by the administrator command restricting execution unit, into the state that would be observed if the administrator command shown in the execution history were executed without the restriction. | 08-06-2009 |
20090204969 | TRANSACTIONAL MEMORY WITH DYNAMIC SEPARATION - Strong semantics are provided to programs that are correctly synchronized in their use of transactions by using dynamic separation of objects that are accessed in transactions from those accessed outside transactions. At run-time, operations are performed to identify transitions between these protected and unprotected modes of access. Dynamic separation permits a range of hardware-based and software-based implementations which allow non-conflicting transactions to execute and commit in parallel. A run-time checking tool, analogous to a data-race detector, may be provided to test dynamic separation of transacted data and non-transacted data. Dynamic separation may be used in an asynchronous I/O library. | 08-13-2009 |
20090217272 | Method and Computer Program Product for Batch Processing - A method and computer program product for batch processing, the method includes: receiving a representation of a batch job that comprises a business logic portion and a non business logic portion; generating in real time business logic batch transactions in response to the representation of the batch job; and executing business logic batch transactions and online transactions; wherein the executing of business logic batch transactions is responsive to resource information and timing information. | 08-27-2009 |
20090217273 | CONTROLLING INTERFERENCE IN SHARED MEMORY SYSTEMS USING PARALLELISM-AWARE BATCH SCHEDULING - A “request scheduler” provides techniques for batching and scheduling buffered thread requests for access to shared memory in a general-purpose computer system. Thread-fairness is provided while preventing short- and long-term thread starvation by using “request batching.” Batching periodically groups outstanding requests from a memory request buffer into larger units termed “batches” that have higher priority than all other buffered requests. Each “batch” may include some maximum number of requests for each bank of the shared memory and for some or all concurrent threads. Further, average thread stall times are reduced by using computed thread rankings in scheduling request servicing from the shared memory. In various embodiments, requests from higher ranked threads are prioritized over requests from lower ranked threads. In various embodiments, a parallelism-aware memory access scheduling policy improves intra-thread bank-level parallelism. Further, rank-based request scheduling may be performed with or without batching. | 08-27-2009 |
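The "request batching" idea above — periodically group the oldest buffered requests, up to some maximum per memory bank, into a high-priority batch — can be sketched as follows. This is a simplified illustration of the grouping step only (no ranking or bank-level parallelism), and the request representation is an assumption.

```python
from collections import defaultdict

def form_batch(buffered, per_bank_cap):
    """Take up to per_bank_cap oldest requests per bank into a batch
    that would be serviced ahead of the remaining requests."""
    taken = defaultdict(int)
    batch, rest = [], []
    for req in buffered:               # buffered is oldest-first
        bank = req["bank"]
        if taken[bank] < per_bank_cap:
            taken[bank] += 1
            batch.append(req)
        else:
            rest.append(req)           # stays buffered for a later batch
    return batch, rest

reqs = [{"bank": 0, "thread": "a"}, {"bank": 0, "thread": "b"},
        {"bank": 1, "thread": "a"}, {"bank": 0, "thread": "c"}]
batch, rest = form_batch(reqs, per_bank_cap=2)
```

With a cap of two per bank, the third bank-0 request is deferred, which is what bounds how long any single thread can monopolize a bank.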
20090217274 | APPARATUS AND METHOD FOR LOG BASED REPLICATION OF DISTRIBUTED TRANSACTIONS USING GLOBALLY ACKNOWLEDGED COMMITS - A computer readable storage medium includes executable instructions to read source node transaction logs to capture transaction data, including local transaction data, global transaction identifiers and participating node data. The global transaction identifiers and participating node data are stored in target node queues. The target node queues are accessed to form global transaction data. Target tables are constructed based upon the local transaction data and the global transaction data. | 08-27-2009 |
20090222821 | Non-Saturating Fairness Protocol and Method for NACKing Systems - Processing transaction requests in a shared memory multi-processor computer network is described. A transaction request is received at a servicing agent from a requesting agent. The transaction request includes a request priority associated with a transaction urgency generated by the requesting agent. The servicing agent provides an assigned priority to the transaction request based on the request priority, and then compares the assigned priority to an existing service level at the servicing agent to determine whether to complete or reject the transaction request. A reply message from the servicing agent to the requesting agent is generated to indicate whether the transaction request was completed or rejected, and to provide reply fairness state data for rejected transaction requests. | 09-03-2009 |
20090222822 | Nested Queued Transaction Manager - A method and apparatus that manages transactions during a data migration. The transfer of data from an old database to a new database is structured as a set of small transactions. The transactions can be structured in a hierarchy of dependent transactions such that the transactions are nested or similarly hierarchical. A migration manager includes a set of transaction management methods or processes that enable the processing of the nested transactions thereby providing a higher level of granularity in transaction size and providing the ability to rollback small individual transactions as well as affected related transactions. The transaction management methods and processes manage a set of queues that are utilized by the migration manager to generate and execute the nested transactions. | 09-03-2009 |
20090222823 | QUEUED TRANSACTION PROCESSING - A method, system, and computer program product for processing a transaction between a client and an application server asynchronously in a distributed transaction processing environment having at least one transaction queue manager. An application request is received from a client to initiate a transaction. The request is placed in a transaction request queue by the transaction queue manager. The request is processed at the application server asynchronously relative to the receipt of the request. A response to the request is determined, and the response is placed in a transaction response queue for retrieval by the client. | 09-03-2009 |
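The queued, asynchronous flow this abstract describes — request placed in a request queue, processed by the server independently of its receipt, response placed in a response queue for later retrieval — maps naturally onto two queues and a worker thread. The sketch below is a minimal stand-in, not the patented system; the uppercase "processing" and the `None` shutdown sentinel are conveniences invented for the example.

```python
import queue
import threading

request_q = queue.Queue()    # filled on the client's behalf by a queue manager
response_q = queue.Queue()   # drained by the client whenever it chooses

def application_server():
    """Processes requests asynchronously relative to their receipt."""
    while True:
        req = request_q.get()
        if req is None:          # assumed shutdown sentinel for the sketch
            break
        response_q.put({"id": req["id"], "result": req["payload"].upper()})

worker = threading.Thread(target=application_server)
worker.start()
request_q.put({"id": 1, "payload": "hello"})   # client's application request
request_q.put(None)
worker.join()
reply = response_q.get()                        # client retrieves later
```

The client never blocks on the server: it enqueues the request and picks up the matching response (correlated here by `id`) at its convenience.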
20090222824 | Distributed transactions on mobile phones - A message is received by a mobile phone via a messaging service provided by a mobile network operator, wherein the messaging service is supported by the mobile phone. It is determined whether the message is associated with a distributed transaction. The message is forwarded to a resource manager resident on the mobile phone if the message is associated with the distributed transaction. The resource manager performs an action upon receiving the message based on contents of the message, wherein the action is associated with the distributed transaction. | 09-03-2009 |
20090235258 | Multi-Thread Peripheral Processing Using Dedicated Peripheral Bus - One embodiment of the present invention performs peripheral operations in a multi-thread processor. A peripheral bus is coupled to a peripheral unit to transfer peripheral information including a command message specifying a peripheral operation. A processing slice is coupled to the peripheral bus to execute a plurality of threads. The plurality of threads includes a first thread sending the command message to the peripheral unit. | 09-17-2009 |
20090241117 | METHOD FOR INTEGRATING FLOW ORCHESTRATION AND SCHEDULING FOR A BATCH OF WORKFLOWS - Techniques for executing a batch of one or more workflows on one or more domains are provided. The techniques include receiving a request for workflow execution, sending at least one of one or more individual jobs in each workflow and dependency information to a scheduler, computing, by the scheduler, one or more outputs, wherein the one or more outputs are based on one or more performance objectives, and integrating orchestration of one or more workflows and scheduling of at least one of one or more jobs and one or more data transfers, wherein the integrating is used to execute a batch of one or more workflows based on at least one of one or more outputs of the scheduler, static information and run-time information. | 09-24-2009 |
20090241118 | SYSTEM AND METHOD FOR PROCESSING INTERFACE REQUESTS IN BATCH - A batch messaging management system configured to process incoming request messages and provide reply messages in an efficient manner is disclosed. Instead of treating individual requests as individual transactions, the system reduces processing overhead within a mainframe computing environment by storing requests within a queue, spawning batch jobs according to the queue and processing multiple transactions using batch job processing. | 09-24-2009 |
20090249342 | SYSTEMS AND METHODS FOR TRANSACTION QUEUE ANALYSIS - A method and system for determining a wait time for a transaction queue is disclosed. In the method, video data related to a first transaction queue is received. The video data is processed to determine a number of items presented by a first entity for a transaction in the first transaction queue. A total transaction time is estimated for the first entity based on the number of items presented by the first entity and a transaction time for each of the number of items. A wait time for the first transaction queue is determined based on the estimated total transaction time for the first entity. If the wait time for the first transaction queue is greater than a first threshold, then the availability of a second transaction queue is indicated to a second entity. | 10-01-2009 |
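The wait-time estimate above is simple arithmetic once the video analysis has produced an item count per entity: total transaction time is items multiplied by a per-item transaction time, and the queue's wait time is the sum over waiting entities. The function and constants below are illustrative assumptions, not values from the patent.

```python
def estimated_wait(items_per_entity, seconds_per_item, threshold):
    """Estimate the queue's wait time from per-entity item counts and
    report whether a second queue should be signaled."""
    wait = sum(n * seconds_per_item for n in items_per_entity)
    return wait, wait > threshold

# Three entities waiting with 12, 5, and 30 items; 4 seconds per item.
wait, open_second_queue = estimated_wait([12, 5, 30], 4.0, threshold=120.0)
```

Since the 188-second estimate exceeds the 120-second threshold, the availability of a second transaction queue would be indicated.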
20090254905 | FACILITATING TRANSACTIONAL EXECUTION IN A PROCESSOR THAT SUPPORTS SIMULTANEOUS SPECULATIVE THREADING - Embodiments of the present invention provide a system that executes a transaction on a simultaneous speculative threading (SST) processor. In these embodiments, the processor includes a primary strand and a subordinate strand. Upon encountering a transaction with the primary strand while executing instructions non-transactionally, the processor checkpoints the primary strand and executes the transaction with the primary strand while continuing to non-transactionally execute deferred instructions with the subordinate strand. When the subordinate strand non-transactionally accesses a cache line during the transaction, the processor updates a record for the cache line to indicate the first strand ID. When the primary strand transactionally accesses a cache line during the transaction, the processor updates a record for the cache line to indicate a second strand ID. | 10-08-2009 |
20090254906 | METHOD AND APPARATUS FOR ENABLING ENTERPRISE PROJECT MANAGEMENT WITH SERVICE ORIENTED RESOURCE AND USING A PROCESS PROFILING FRAMEWORK - A service-oriented architecture for enterprise project management integrates business processes, human resources and project management within an enterprise or across the value chain network. A representation having direction and attributes is provided to show the dependencies between a business value layer and a project-portfolio layer, and between the project-portfolio layer and resources. The representation is mapped to a Web Services representation in UDDI, Web Services interfaces, and Web Services based business processes through rope hyper-linking. | 10-08-2009 |
20090260011 | COMMAND LINE TRANSACTIONS - A computer system with a command shell that supports execution of commands within transactions. The command shell responds to commands that start, complete or undo transactions. To support transactions, the command shell may maintain and provide transaction state information. The command shell may interact with a transaction manager that interfaces with resource managers that process transacted instructions within transacted task modules to commit or roll back transacted instructions from those task modules based on transaction state information maintained by the shell. Parameters associated with commands can control behavior in association with transaction process, including supporting nesting transactions and non-nested transactions and bypassing transacted processing in some instances of a command. | 10-15-2009 |
20090265710 | Mechanism to Enable and Ensure Failover Integrity and High Availability of Batch Processing - A method, system and computer program product for managing a batch processing job is presented. The method includes partitioning a batch processing job for execution by a cluster of computers. One of the computers from the cluster of computers is designated as a primary command server that oversees and coordinates execution of the batch processing job. Stored in an object data grid structure in the primary command server is an alarm setpoint, boundaries, waiting batch processes and executing batch process states. The object data grid structure is replicated and stored as a replicated object grid structure in a failover command server. If the primary command server fails, the failover command server freezes all of the currently executing batch processes, interrogates processing states of the cluster of computers, and restarts execution of the batch processes in the cluster of computers in accordance with the processing states of the cluster of computers. | 10-22-2009 |
20090276777 | Multiple Programs for Efficient State Transitions on Multi-Threaded Processors - A system and method to optimize processor performance and minimize average thread latency by selectively loading a cache when a program state, the resources required for execution of a program, or the program itself changes, is described. An embodiment of the invention supports a "cache priming program" that is selectively executed for the first thread/program/sub-routine of each process. Such a program is optimized for situations when instructions and other program data are not yet resident in the cache(s), and/or whenever the resources required for program execution, or the program itself, change. By pre-loading the cache with two resources required for two instructions for only a first thread, average thread latency is reduced because the resources are already present in the cache. Since such a mechanism is carried out only for one thread in a program cycle, the pitfalls of a conventional general pre-fetch scheme, which involves parsing the program in advance to determine which resources and instructions will be needed at a later time, are avoided. | 11-05-2009 |
20090282409 | METHOD, SYSTEM AND PROGRAM PRODUCT FOR GROUPING RELATED PROGRAM SEQUENCES - The invention resides in a method, system and program product for grouping related program sequences for performing a task. The method includes establishing, using a first code for grouping, one or more groups that can be formed between one or more related group-elements obtained from a plurality of groupable program flow documents, and executing, using a group program sequence engine, the groupable program flow documents, wherein each group-element considered an ancestor group-element of a group established and validated by the first code is executed before executing a related group-element obtained from the group, and wherein the related group-element of the group is executed only once during execution of the groupable program flow documents for performing the task. In an embodiment, the establishing step includes identifying a name attribute specified in the one or more related group-elements for establishing the one or more groups. | 11-12-2009 |
20090282410 | Systems and Methods for Supporting Software Transactional Memory Using Inconsistency-Aware Compilers and Libraries - Systems and methods to reduce overhead associated with read set consistency validation in software transactional memory implementations are disclosed. These systems and methods may employ an inconsistency-aware compiler-library technique, in which an inconsistency-aware compiler communicates to various inconsistency-aware library functions knowledge about whether a given transaction has read consistent values to date. The inconsistency-aware library functions may exploit this information to avoid the need to validate the transaction, or portions thereof. If read set values are known to be consistent prior to the function call, the compiler may pass a parameter value to the function indicating as much. Otherwise, it may pass a value indicating that the read set values may be inconsistent. An inconsistency-aware function may determine that it will not perform a dangerous action, even though its parameters may not be consistent. Otherwise, the inconsistency-aware function may invoke a validation operation, or may perform other error avoidance operations. | 11-12-2009 |
20090300622 | DISTRIBUTED TRANSACTION PROCESSING SYSTEM - An infrastructure and method for processing a transaction using a plurality of target systems. A method is disclosed including: generating a request from a source system, wherein the request includes an initial identifier and a counter value; submitting the request to at least two target systems; processing the request at a first target system and ignoring the request at a second target system based on the initial identifier; submitting a resubmitted request to the at least two target systems if a timely response is not received by the source system, wherein the resubmitted request includes an incremented counter value; and processing the resubmitted request by only one of the first and second target systems based on the incremented counter value. | 12-03-2009 |
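The counter-based takeover above — exactly one target processes each (identifier, counter) submission, and the incremented counter on resubmission shifts work to a different target — can be sketched as follows. The modulo rule mapping counter values to targets is an assumption made so the example is deterministic; the patent only requires that precisely one target acts per counter value.

```python
class TargetSystem:
    """Processes a request only when the (assumed) counter rule says
    it is this system's turn."""

    def __init__(self, index, num_targets):
        self.index = index
        self.num_targets = num_targets
        self.processed = []

    def handle(self, request_id, counter):
        # Assumed rule: counter value selects the responsible target.
        if counter % self.num_targets == self.index:
            self.processed.append((request_id, counter))

targets = [TargetSystem(0, 2), TargetSystem(1, 2)]
for t in targets:
    t.handle("req-7", 0)   # initial submission: only target 0 acts
for t in targets:
    t.handle("req-7", 1)   # resubmission after timeout: target 1 takes over
```

Each submission is broadcast to both targets, yet the counter guarantees it is processed exactly once.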
20090307695 | APPARATUS, AND ASSOCIATED METHOD, FOR HANDLING CONTENT PURSUANT TO TRANSFER BETWEEN ENTERPRISE CONTENT MANAGEMENT REPOSITORIES - An apparatus, and an associated method, for facilitating bulk transfer of large volumes of data-center, ECM repository-stored content. Multiple, simultaneous threads or tasks are concurrently run both to import and to export content, as desired. A controller controls the running of the tasks and is connected to a thread container that runs the tasks by way of a TCP/IP socket or other suitable communication connection. | 12-10-2009 |
20090313628 | DYNAMICALLY BATCHING REMOTE OBJECT MODEL COMMANDS - A client-server architecture provides mechanisms to assist in minimizing round trips between a client and server. The architecture exposes an object model for client use that is structured similarly to the server based object model. The client batches commands and then determines when to execute the batched commands on the server. Proxy objects act as proxies for objects and serve as a way to suggest additional data retrieval operations for objects which have not been retrieved. Conditional logic and exceptions may be handled on the server without requiring additional roundtrips between the client and server. | 12-17-2009 |
20090328043 | INFRASTRUCTURE OF DATA SUMMARIZATION INCLUDING LIGHT PROGRAMS AND HELPER STEPS - A method of summarizing data includes providing a multi-method summarization program including instructions for summarizing data for a transaction processing system. At least one functional aspect of the transaction processing system for which a summarization of a subset of the data is desired is determined. The functional subset is exposed to a user as a light summarization program. The dependencies of the functional subset can be enforced at runtime, allowing packaging flexibility. A method for efficient parallel processing involving requests for help that are not necessarily filled is also provided. | 12-31-2009 |
20090328044 | Transfer of Event Logs for Replication of Executing Programs - A mechanism for replicating programs executing on a computer system having a first storage means is provided. The mechanism identifies the events corresponding to requests from one executing program, which may be different from the executing program to be replicated, which are non-deterministic, and identifies the ‘Non Abortable Events’ (NAEs), which irremediably change the state of the external world and need to be reproduced in the replay of the programs. These events are immediately transferred for replay, and the executing program is blocked until the transfer is acknowledged. The other non-deterministic events are logged and sent to the executing program, which remains blocked only if the log is full and/or if a timer between two NAEs expires; in this case a log transfer to the standby machine is performed to prepare replication before the executing program is unblocked. | 12-31-2009 |
20100023945 | EARLY ISSUE OF TRANSACTION ID - Early issue of transaction ID is disclosed. An apparatus comprising a decoder to generate a first node ID indicative of the destination of a cache transaction from a caching agent, transaction ID allocation logic coupled to and operating in parallel with the decoder to select a transaction ID (TID) for the transaction based on the first node ID, and a packet creation unit to create a packet that includes the transaction, the first node ID, the TID, and a second node ID corresponding to the requestor. | 01-28-2010 |
20100042998 | ONLINE BATCH EXECUTION - Online batch processing. A job request is received from a user for processing. The job request includes a job configuration and a plurality of operations to process the data. The job configuration is extracted from the job request and stored in a configuration cache. A metadata configuration code is extracted from the job configuration and stored in a code cache. A runtime configuration code is extracted from the job configuration and stored in an instance cache. This allows information to be obtained from the configuration cache, the code cache and the instance cache for processing subsequent job requests with a similar job configuration and plurality of operations. The data is fetched from at least one of the job request and an external storage device. The plurality of operations is executed on the data to generate a result. The result is provided to the user through at least one of an output stream and the external storage device. | 02-18-2010 |
20100042999 | TRANSACTIONAL QUALITY OF SERVICE IN EVENT STREAM PROCESSING MIDDLEWARE - Computer implemented method, system and computer usable program code for achieving transactional quality of service in a transactional object store system. A transaction is received from a client and is executed, wherein the transaction comprises reading a read-only derived object, or reading or writing another object, and ends with a decision to request committing the transaction or a decision to request aborting the transaction. Responsive to a decision to request committing the transaction, wherein the transaction comprises writing a publishing object, events are delivered to event stream processing queries, and are executed in parallel with execution of the transaction. Responsive to a decision to request committing a transaction that comprises reading a read-only derived object, a validation is performed to determine whether the transaction can proceed to be committed, whether the transaction should abort, or whether the validation should delay waiting for one or more event stream processing queries to complete. | 02-18-2010 |
20100058344 | ACCELERATING A QUIESCENCE PROCESS OF TRANSACTIONAL MEMORY - A method to perform validation of a read set of a transaction is presented. In one embodiment, the method compares a read signature of a transaction to a plurality of write signatures associated with a plurality of transactions. The method determines based on the result of comparison, whether to update a local value of the transaction to a commit value of another transaction from the plurality of the transactions. | 03-04-2010 |
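A minimal sketch of the signature comparison described in the abstract above, assuming Bloom-filter-style hashed bit vectors (the application does not specify the signature representation; `make_signature` and the 64-bit width are illustrative assumptions):

```python
def make_signature(addresses, bits=64):
    """Hash a set of memory addresses into a fixed-width bit signature."""
    sig = 0
    for addr in addresses:
        sig |= 1 << (hash(addr) % bits)
    return sig

def may_conflict(read_sig, write_sig):
    """Signatures are conservative: a shared bit means a *possible* conflict."""
    return (read_sig & write_sig) != 0

def needs_update(read_sig, write_sigs):
    """True if any committed writer's signature may overlap this read set,
    in which case the reader must take the slow path (validate/update)."""
    return any(may_conflict(read_sig, w) for w in write_sigs)
```

Because the comparison is conservative, a clear result (no shared bits) lets the transaction skip validation entirely, which is the quiescence acceleration the abstract describes.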
20100058345 | AUTOMATIC AND DYNAMIC DETECTION OF ANOMALOUS TRANSACTIONS - Anomalous transactions are identified and reported. Transactions are monitored from the server at which they are performed. A baseline is dynamically determined for transaction performance based on recent performance data for the transaction. The more recent performance data may be given a greater weight than less recent performance data. Anomalous transactions are then identified based on comparing the actual transaction performance to the baseline for the transaction. An agent installed on an application server performing the transaction receives monitoring data, determines baseline data, and identifies anomalous transactions. For each anomalous transaction, transaction performance data and other data is reported. | 03-04-2010 |
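The recency-weighted baseline in the abstract above can be sketched with an exponentially weighted moving average. The weight `alpha` and the anomaly threshold are illustrative assumptions; the application does not fix a particular weighting scheme:

```python
def update_baseline(baseline, sample, alpha=0.5):
    """Exponentially weighted average: recent samples carry weight alpha,
    so older data decays geometrically."""
    if baseline is None:
        return sample
    return alpha * sample + (1 - alpha) * baseline

def is_anomalous(baseline, sample, threshold=2.0):
    """Flag a transaction whose measured performance exceeds the
    dynamically maintained baseline by more than a factor."""
    return baseline is not None and sample > threshold * baseline
```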
20100070974 | SUPPORT APPARATUS FOR INFORMATION PROCESSING APPARATUS, SUPPORT METHOD AND COMPUTER PROGRAM - A support apparatus that supports an information processing apparatus is provided. The support apparatus comprises: a storage unit configured to associate and store settings of an executed job, a leakage amount of a memory leak, and a peak amount of memory; an acquisition unit configured to acquire a job group and settings for executing each job; a prediction unit configured to compare the settings stored in the storage unit with the settings acquired by the acquisition unit, and predict a leakage amount and a peak amount when the job is executed by the information processing apparatus; and a determination unit configured to determine whether there is a job in the job group in which a total value of the predicted peak amount of the job and the predicted leakage amount of jobs executed preceding the job exceeds a memory capacity of the information processing apparatus. | 03-18-2010 |
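The determination unit's check can be sketched as follows: leaked memory accumulates across preceding jobs, and a job is flagged when its predicted peak plus the accumulated leakage exceeds capacity. The dictionary shape of a job record is an illustrative assumption:

```python
def find_overflow_job(jobs, capacity):
    """Return the index of the first job whose predicted peak, added to the
    leakage accumulated by the jobs run before it, exceeds memory capacity;
    None if the whole job group fits."""
    leaked = 0
    for i, job in enumerate(jobs):
        if leaked + job["peak"] > capacity:
            return i
        leaked += job["leak"]   # this job's leak persists for later jobs
    return None
```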
20100077398 | Using Idempotent Operations to Improve Transaction Performance - An apparatus is provided for optimizing a transaction comprising an initial sequence of computer operations. The apparatus includes a processing unit which identifies one or more idempotent operations comprised within the initial sequence, and which reorders the initial sequence to form a reordered sequence comprising a first sub-sequence of the computer operations followed by a second sub-sequence of the computer operations, the second sub-sequence comprising only the one or more idempotent operations. | 03-25-2010 |
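A minimal sketch of the reordering described in the abstract above, implemented as a stable partition (one simple way to make the second sub-sequence contain only idempotent operations; the application's actual reordering criteria may be richer):

```python
def reorder(ops, idempotent):
    """Stable partition: non-idempotent operations first, idempotent
    operations last, preserving relative order within each sub-sequence."""
    first = [op for op in ops if op not in idempotent]
    second = [op for op in ops if op in idempotent]
    return first + second
```

Pushing idempotent operations to the tail means that a retry after a late failure can safely re-execute the tail without corrupting state.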
20100083255 | NOTIFICATION BATCHING BASED ON USER STATE - Batching messages such as notifications intended for a user to preserve battery life on a computing device associated with the user. A server such as a proxy server receives the messages from one or more service providers. The proxy server maintains a state of the user. If the state indicates that the user is idle, the messages are stored at the proxy server unless the messages correspond to activating messages. The activating messages are sent to the user upon receipt. The stored messages are sent when the state changes to an active state or when a defined duration of time elapses. In some embodiments, the messages are presence notifications in an instant messaging session on a mobile computing device. By reducing the frequency of sent notifications, the battery life of the mobile computing device is preserved. | 04-01-2010 |
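The idle/active batching described in the abstract above can be sketched as a small proxy-side state machine (a hypothetical illustration; class and method names are assumptions). Activating messages bypass the queue; everything else is held until the user becomes active:

```python
class NotificationProxy:
    """Holds messages for an idle user; flushes them on transition to active."""
    def __init__(self):
        self.user_active = True
        self.queue = []        # messages held while the user is idle
        self.delivered = []    # messages sent to the device

    def receive(self, message, activating=False):
        # Activating messages are sent immediately even when the user is idle.
        if self.user_active or activating:
            self.delivered.append(message)
        else:
            self.queue.append(message)

    def set_active(self, active):
        self.user_active = active
        if active:
            self.delivered.extend(self.queue)  # flush batched messages
            self.queue.clear()
```

Fewer radio wake-ups per notification is what preserves the device's battery life.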
20100083256 | TEMPORAL BATCHING OF I/O JOBS - Batching techniques are provided to maximize the throughput of a hardware device based on the saturation point of the hardware device. A balancer can determine the saturation point of the hardware device and determine the estimated time cost for IO jobs pending in the hardware device. A comparison can be made and if the estimated time cost total is lower than the saturation point one or more IO jobs can be sent to the hardware device. | 04-01-2010 |
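The balancer's comparison can be sketched as follows: IO jobs are dispatched while the device's estimated in-flight time cost stays below its saturation point. Representing each job by its estimated cost alone is a simplifying assumption for the sketch:

```python
def dispatch(pending, in_flight_cost, saturation_point):
    """Send IO jobs to the device while its estimated queued time cost is
    below the saturation point; return (jobs sent, jobs still pending)."""
    sent = []
    cost = in_flight_cost
    for job_cost in pending:
        if cost >= saturation_point:
            break            # device is saturated; hold the rest back
        sent.append(job_cost)
        cost += job_cost
    return sent, pending[len(sent):]
```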
20100083257 | ARRAY OBJECT CONCURRENCY IN STM - A software transactional memory system is provided that creates an array of transactional locks for each array object that is accessed by transactions. The system divides the array object into non-overlapping portions and associates each portion with a different transactional lock. The system acquires transactional locks for transactions that access corresponding portions of the array object. By doing so, different portions of the array object can be accessed by different transactions concurrently. The system may use a shared shadow or undo copy for accesses to the array object. | 04-01-2010 |
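The per-portion transactional locks described in the abstract above amount to lock striping. A minimal sketch with plain mutexes standing in for transactional locks (the STM-specific lock semantics are omitted):

```python
import threading

class StripedArray:
    """Array divided into non-overlapping stripes, each guarded by its own
    lock, so accesses to different stripes can proceed concurrently."""
    def __init__(self, size, stripe_size):
        self.data = [0] * size
        self.stripe_size = stripe_size
        nstripes = (size + stripe_size - 1) // stripe_size  # ceiling division
        self.locks = [threading.Lock() for _ in range(nstripes)]

    def lock_for(self, index):
        return self.locks[index // self.stripe_size]

    def write(self, index, value):
        with self.lock_for(index):
            self.data[index] = value
```

Two transactions touching indexes in different stripes acquire different locks and never contend.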
20100088702 | CHECKING TRANSACTIONAL MEMORY IMPLEMENTATIONS - A transactional memory implementation is tested using an automatically generated test program and a locking memory model implementation which defines atomicity semantics. Schedules of the test program specify different interleavings of read operations and write operations of the test program threads. Executing the schedules under the locking memory model implementation provides legal final states of the shared variable(s). Executing the schedules under the transactional memory implementation produces candidate final states of the shared variable(s). If the candidate final states are also legal final states, then the transactional memory implementation passes the test. | 04-08-2010 |
20100088703 | Multi-core system with central transaction control - There is provided a multi-core system that includes a lower sub-system including a first processor and a number of slave processing cores. Each of the slave processing cores can be a coprocessor or a digital signal processor. The first processor is configured to control processing on the slave processing cores and includes a system dispatcher configured to control transactions for execution on the slave processing cores. The system dispatcher is configured to generate the transactions to be executed on the slave processing cores. The first processor can include a number of hardware drivers for receiving the transactions from the system dispatcher and providing the transactions to the slave processing cores for execution. The multi-core system can further include an upper sub-system in communication with the lower sub-system and including a second processor configured to provide protocol processing. | 04-08-2010 |
20100100882 | INFORMATION PROCESSING APPARATUS AND CONTROL METHOD THEREOF - When a plurality of objects are subjected to batch processing by an object selection unit and a batch processing execution unit, if an input is made to an object included in the plurality of objects, an information processing apparatus controls the processing execution unit so as to execute processing on the object based on the input, thereby executing processing that moves all of the selected plurality of objects simultaneously with processing that moves an arbitrary object separately from the other selected objects. | 04-22-2010 |
20100115519 | METHOD AND SYSTEM FOR SCHEDULING IMAGE ACQUISITION EVENTS BASED ON DYNAMIC PROGRAMMING - A method and system for scheduling events into a set of opportunities is presented. The method includes 1) dividing a path of an image acquisition device so that there is at least a first portion and a second portion at any given moment, wherein each of the first portion and the second portion includes at least one state and the first portion includes a null state in which no image is taken; 2) combining each state in the first portion with at least one state in the second portion one by one to generate a series of updated sequences; and 3) selecting at least one of the updated sequences based on a merit value associated with each of the updated sequences. The invention uses only two groups out of all the relevant opportunities for most calculations, and is especially applicable to situations like satellite pass scheduling. | 05-06-2010 |
20100115520 | COMPUTER SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR MANAGING BATCH JOB - A computer system for managing batch jobs is described. The computer system includes a storage unit for storing at least one job template, and an execution unit for creating or updating a job net definition following a condition defined in the at least one job template, creating or updating a job net, or executing a discovery of a job conflict using at least one attribute or relationship in a set of data including at least one predetermined attribute of a configuration item, and a relationship between the configuration item and another configuration item, the set of data being stored in a repository and updatable through a discovery for detecting information about a configuration item. The present invention further provides a method and computer program product for managing batch jobs. | 05-06-2010 |
20100122254 | BATCH AND APPLICATION SCHEDULER INTERFACE LAYER IN A MULTIPROCESSOR COMPUTING ENVIRONMENT - A multiprocessor computer system batch system interface between an application level placement scheduler and one or more batch systems comprises a predefined protocol operable to convey processing node resource request and availability data between the application level placement scheduler and the one or more batch systems. | 05-13-2010 |
20100131953 | Method and System for Hardware Feedback in Transactional Memory - Multi-threaded, transactional memory systems may allow concurrent execution of critical sections as speculative transactions. These transactions may abort due to contention among threads. Hardware feedback mechanisms may detect information about aborts and provide that information to software, hardware, or hybrid software/hardware contention management mechanisms. For example, they may detect occurrences of transactional aborts or conditions that may result in transactional aborts, and may update local readable registers or other storage entities (e.g., performance counters) with relevant contention information. This information may include identifying data (e.g., information outlining abort relationships between the processor and other specific physical or logical processors) and/or tallied data (e.g., values of event counters reflecting the number of aborted attempts by the current thread or the resources consumed by those attempts). This contention information may be accessible by contention management mechanisms to inform contention management decisions (e.g. whether to revert transactions to mutual exclusion, delay retries, etc.). | 05-27-2010 |
20100138836 | System and Method for Reducing Serialization in Transactional Memory Using Gang Release of Blocked Threads - Transactional Lock Elision (TLE) may allow multiple threads to concurrently execute critical sections as speculative transactions. Transactions may abort due to various reasons. To avoid starvation, transactions may revert to execution using mutual exclusion when transactional execution fails. Because threads may revert to mutual exclusion in response to the mutual exclusion of other threads, a positive feedback loop may form in times of high congestion, causing a “lemming effect”. To regain the benefits of concurrent transactional execution, the system may allow one or more threads awaiting a given lock to be released from the wait queue and instead attempt transactional execution. A gang release may allow a subset of waiting threads to be released simultaneously. The subset may be chosen dependent on the number of waiting threads, historical abort relationships between threads, analysis of transactions of each thread, sensitivity of each thread to abort, and/or other thread-local or global criteria. | 06-03-2010 |
20100146509 | SELECTION OF TRANSACTION MANAGERS BASED ON TRANSACTION METADATA - One or more transaction managers are automatically selected from a plurality of transaction managers for use in processing a transaction. The selection is based on types of resources used by the transaction and supported resource types of the transaction managers. The selection of the one or more transaction managers enables less than all of the transaction managers of an application server to be used in transaction commit processing, thereby improving performance. | 06-10-2010 |
20100146510 | Automated Scheduling of Mass Data Run Objects - Techniques are described in which indication of a computer application to be configured for use in a particular business enterprise is received. A mass data run object is identified. The mass data run object defines a computer operation to be performed by the computer application to transform business transaction data as part of a business process. The mass data run object identifies i) selection parameters to select business transaction data to be transformed by the computer operation defined by the mass data run object and ii) instructions, that when executed, perform the computer operation to transform the selected business transaction data. A mass data run object instance corresponding to the identified mass data run object is generated and scheduled for execution. | 06-10-2010 |
20100153952 | METHODS, SYSTEMS, AND COMPUTER PROGRAM PRODUCTS FOR MANAGING BATCH OPERATIONS IN AN ENTERPRISE DATA INTEGRATION PLATFORM ENVIRONMENT - Methods, systems, and computer program products for managing batch operations are provided. A method includes defining a window of time in which a batch will run by entering a batch identifier into a batch table, wherein the batch identifier specifies a primary key of the batch table and is configured as a foreign key to a batch schedule table. The time is entered into the batch schedule table. The method further includes entering extract-transform-load (ETL) information into the batch table. The ETL information includes a workflow identifier, a parameter file identifier, and a location in which the workflow resides. The method includes retrieving the workflow from memory via the workflow identifier and location, retrieving the parameter file, and processing the batch according to the process, workflow, and parameter file. | 06-17-2010 |
20100153953 | UNIFIED OPTIMISTIC AND PESSIMISTIC CONCURRENCY CONTROL FOR A SOFTWARE TRANSACTIONAL MEMORY (STM) SYSTEM - A method and apparatus for unified concurrency control in a Software Transactional Memory (STM) is herein described. A transaction record associated with a memory address referenced by a transactional memory access operation includes optimistic and pessimistic concurrency control fields. Access barriers and other transactional operations/functions are utilized to maintain both fields of the transaction record, appropriately. Consequently, concurrent execution of optimistic and pessimistic transactions is enabled. | 06-17-2010 |
20100162245 | RUNTIME TASK WITH INHERITED DEPENDENCIES FOR BATCH PROCESSING - A batch job processing architecture that dynamically creates runtime tasks for batch job execution and to optimize parallelism. The task creation can be based on the amount of processing power available locally or across batch servers. The work can be allocated across multiple threads in multiple batch server instances as there are available. A master task splits the items to be processed into smaller parts and creates a runtime task for each. The batch server picks up and executes as many runtime tasks as the server is configured to handle. The runtime tasks can be run in parallel to maximize hardware utilization. Scalability is provided by splitting runtime task execution across available batch server instances, and also across machines. During runtime task creation, all dependency and batch group information is propagated from the master task to all runtime tasks. Dependencies and batch group configuration are honored by the batch engine. | 06-24-2010 |
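The master task's splitting step in the abstract above can be sketched as follows. The chunking rule and the fields propagated to each runtime task (`batch_group`, `depends_on`) are illustrative assumptions standing in for the dependency and batch group information the abstract mentions:

```python
def split_into_runtime_tasks(items, chunk, master):
    """Master task: partition the items to be processed into runtime tasks
    of at most `chunk` items each, copying the master task's dependency and
    batch-group configuration to every runtime task."""
    tasks = []
    for i in range(0, len(items), chunk):
        task = dict(master)              # inherit dependencies / batch group
        task["items"] = items[i:i + chunk]
        tasks.append(task)
    return tasks
```

Each runtime task is then independently schedulable, so as many run in parallel as the batch server instances are configured to handle.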
20100162246 | USE OF ANALYTICS TO SCHEDULE, SUBMIT OR MONITOR PROCESSES OR REPORTS - Embodiments of the invention provide for executing a batch process on a repository of information. According to one embodiment, executing a batch process can comprise presenting one or more aspects of records of the repository and receiving a selection of a criteria for at least one aspect of the records. Records matching the selected criteria can be identified and a summary of the information can be presented. The batch process can comprise one of a plurality of batch processes. In such a case, a selection of the batch process can be received and parameters of the batch process can be populated with the selected criteria. The batch process can then be executed with the parameters. For example, executing the batch process can comprise generating a report based on the parameters and the records of the repository. | 06-24-2010 |
20100162247 | METHODS AND SYSTEMS FOR TRANSACTIONAL NESTED PARALLELISM - Methods and systems for executing nested concurrent threads of a transaction are presented. In one embodiment, in response to executing a parent transaction, a first group of one or more concurrent threads including a first thread is created. The first thread is associated with a transactional descriptor comprising a pointer to the parent transaction. | 06-24-2010 |
20100162248 | COMPLEX DEPENDENCY GRAPH WITH BOTTOM-UP CONSTRAINT MATCHING FOR BATCH PROCESSING - Architecture that includes a batch framework engine incorporated into the server and that supports a rich set of dependencies between tasks in a single batch job. A bottom-up approach is employed where analysis is performed to determine whether a task can run based on the parent tasks. The framework runs batch jobs without the need of a client, and provides the ability to create dependencies between tasks, which allow the execution of tasks in parallel or in sequence. Using an AND/OR relationship engine, a task can require that all parent tasks (logical AND) meet requirements to run, or that only one parent (logical OR) is required to meet its requirements in order to run. Clean-up or non-important tasks can have a flag set where even if such tasks fail when executing, the batch job will ignore these tasks when defining the final status of the job. | 06-24-2010 |
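The AND/OR constraint check in the abstract above can be sketched as follows (the dictionary encoding of the dependency graph and status values is an illustrative assumption):

```python
def can_run(task, deps, status):
    """Bottom-up check: deps maps task -> (mode, [parents]), where mode is
    'all' (logical AND: every parent must be done) or 'any' (logical OR:
    one done parent suffices). status maps task -> 'done' | 'failed' | None."""
    mode, parents = deps.get(task, ("all", []))
    if not parents:
        return True                       # no parents: always eligible
    met = [status.get(p) == "done" for p in parents]
    return all(met) if mode == "all" else any(met)
```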
20100162249 | OPTIMIZING QUIESCENCE IN A SOFTWARE TRANSACTIONAL MEMORY (STM) SYSTEM - A method and apparatus for optimizing quiescence in a transactional memory system is herein described. Non-ordering transactions, such as read-only transactions, transactions that do not access non-transactional data, and write-buffering hardware transactions, are identified. Quiescence in weak atomicity software transactional memory (STM) systems is optimized through selective application of quiescence. As a result, transactions may be decoupled from dependency on quiescing/waiting on previous non-ordering transaction to increase parallelization and reduce inefficiency based on serialization of transactions. | 06-24-2010 |
20100162250 | OPTIMIZATION FOR SAFE ELIMINATION OF WEAK ATOMICITY OVERHEAD - A method and apparatus for optimizing weak atomicity overhead is herein described. A state table is maintained either during static or dynamic compilation of code to track data non-transactionally accessed. Within execution of a transaction, such as at transactional memory accesses or within a commit function, it is determined if data associated with memory access within the transaction is to be conflictingly accessed outside the transaction from the state table. If the data is not accessed outside the transaction, then the transaction potentially commits without weak atomicity safety mechanisms, such as privatization. Furthermore, even if data is accessed outside the transaction, optimized safety mechanisms may be performed to ensure isolation between the potentially conflicting accesses, while eliding the mechanisms for data not accessed outside the transaction. | 06-24-2010 |
20100169886 | DISTRIBUTED MEMORY SYNCHRONIZED PROCESSING ARCHITECTURE - A data processing system comprises a plurality of processors, where each processor is coupled to a respective dedicated memory. The data processing system also comprises a voter module that is disposed between the plurality of processors and one or more peripheral devices such as a network interface, output device, input device, or the like. Each processor provides an I/O transaction to the voter module and the voter module determines whether a majority (or predominate) transaction is present among the I/O transactions received from each of the processors. If a majority transaction is present, the voter module releases the majority transaction to the peripheral. However, if no majority transaction is determined, the system outputs a no majority transaction signal (or raises an exception). Also, a processor error signal (or exception) is output for any processor providing an I/O transaction not corresponding to the majority transaction. The error signal may also optionally prompt the recovery of any or all processors with methods such as but not limited to reboot/reset based upon predetermined or emergent criteria. | 07-01-2010 |
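The voter module's decision can be sketched as a strict majority vote over the processors' I/O transactions (representing a transaction as a comparable value is a simplifying assumption):

```python
from collections import Counter

def vote(transactions):
    """Return (majority_transaction, indexes_of_disagreeing_processors),
    or (None, []) when no strict majority exists (the no-majority case)."""
    counts = Counter(transactions)
    winner, n = counts.most_common(1)[0]
    if n <= len(transactions) // 2:
        return None, []              # no majority: signal/raise upstream
    errs = [i for i, t in enumerate(transactions) if t != winner]
    return winner, errs
```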
20100186014 | DATA MOVER FOR COMPUTER SYSTEM - In a computer system with a disk array that has physical storage devices arranged as logical storage units and is capable of carrying out hardware storage operations on a per logical storage unit basis, data movement operations can be carried out on a per-file basis. A data mover software component for use in a computer or storage system enables cloning and initialization of data to provide high data throughput without moving the data between the kernel and application levels. | 07-22-2010 |
20100186015 | METHOD AND APPARATUS FOR IMPLEMENTING A TRANSACTIONAL STORE SYSTEM USING A HELPER THREAD - A method, apparatus, and computer readable article of manufacture for executing a transaction by a processor apparatus that includes a plurality of hardware threads. The method includes the steps of: creating a main software thread for executing the transaction; creating a helper software thread for executing a barrier function; executing the main software thread and the helper software thread using the plurality of hardware threads; deciding whether the execution of the barrier function is required; executing the barrier function by the helper software thread; and returning to the main software thread. The step of executing the barrier function includes: stalling the main software thread; activating the helper software thread; and exiting the helper software thread in response to completion of the execution. | 07-22-2010 |
20100211952 | BUSINESS EVENT PROCESSING - Techniques for business event processing are presented. Producer services produce events that are managed and distributed by a transport service. Consumer services acquire events from the transport service and perform actions in response to the events. The production, distribution, and processing of the events and actions may be asynchronously and concurrently performed. | 08-19-2010 |
20100269114 | INSTANT MESSENGER AND METHOD FOR DISPATCHING TASK WITH INSTANT MESSENGER - Embodiments of the present invention provide an Instant Messenger (IM) and a method for dispatching tasks by the IM. The method includes: presetting task information in a start-up program configuration table, and dispatching, by the IM, tasks in batches according to the task information in the start-up program configuration table. Preferably, the task information includes the execution delay information and priority information of the tasks. The IM includes a logging-on flow management module and a task dispatching management module. The logging-on flow management module is adapted to store the start-up program configuration table, which is configured with the task information. The task dispatching management module is adapted to dispatch the tasks in batches according to the task information in the start-up program configuration table. With embodiments of the invention, the start-up delay of the IM may be reduced. | 10-21-2010 |
20100275207 | GATHERING STATISTICS IN A PROCESS WITHOUT SYNCHRONIZATION - Each processing resource in a scheduler of a process executing on a computer system maintains counts of the number of tasks that arrive at the processing resource and the number of tasks that complete on the processing resource. The counts are maintained in storage that is only writeable by the corresponding processing resource. The scheduler collects and sums the counts from each processing resource and provides statistics based on the summed counts and previous summed counts to a resource manager in response to a request from the resource manager. The scheduler does not reset the counts when the counts are collected and stores copies of the summed counts for use with the next request from the resource manager. The counts may be maintained without synchronization and with thread safety to minimize the impact of gathering statistics on the application. | 10-28-2010 |
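The non-resetting counter scheme in the abstract above can be sketched as follows: each processing resource owns write access to its own counts, and the scheduler reports deltas by keeping the previous sums rather than resetting anything. Class and field names are illustrative assumptions:

```python
class ResourceStats:
    """Counters writable only by their owning processing resource, so no
    synchronization is needed on the write path."""
    def __init__(self):
        self.arrived = 0
        self.completed = 0

class Scheduler:
    def __init__(self, n_resources):
        self.resources = [ResourceStats() for _ in range(n_resources)]
        self.prev_arrived = 0
        self.prev_completed = 0

    def collect(self):
        """Sum all counts without resetting them; report the deltas since
        the previous collection (the stored previous sums)."""
        arrived = sum(r.arrived for r in self.resources)
        completed = sum(r.completed for r in self.resources)
        delta = (arrived - self.prev_arrived, completed - self.prev_completed)
        self.prev_arrived, self.prev_completed = arrived, completed
        return delta
```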
20100287553 | SYSTEM, METHOD, AND SOFTWARE FOR CONTROLLED INTERRUPTION OF BATCH JOB PROCESSING - This disclosure provides various embodiments of software, systems, and techniques for controlled interruption of batch job processing. In one instance, a tangible computer readable medium stores instructions for managing batch jobs, where the instructions are operable when executed by a processor to identify an interruption event associated with a batch job queue. The instructions trigger an interruption of an executing batch job within the job queue such that the executed portion of the job is marked by a restart point embedded within the executable code. The instructions then restart the interrupted batch job at the restart point. | 11-11-2010 |
20100287554 | PROCESSING SERIALIZED TRANSACTIONS IN PARALLEL WHILE PRESERVING TRANSACTION INTEGRITY - A method, system, and apparatus are disclosed for processing serialized transactions in parallel while preserving transaction integrity. The method includes receiving a transaction comprising at least two keys and accessing a serialization-independent key (“SI-Key”) and a serialization-dependent key (“SD-Key”) from the transaction. A value for the SI-Key identifies the transaction as independent of transactions having a different value for the SI-Key. Furthermore, a value for the SD-Key governs a transaction execution order for each transaction having a SI-Key value that matches the SI-Key value associated with the SD-Key value. The method also includes assigning the transaction to an execution group based on a value for the SI-Key. The method also includes scheduling the one or more transactions in the execution group in an order defined by the SD-Key. The execution group may execute in parallel with one or more additional execution groups. | 11-11-2010 |
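The SI-Key/SD-Key scheme in the abstract above can be sketched as follows: transactions are grouped by SI-Key into execution groups that may run in parallel, and within each group the SD-Key fixes the execution order. The dictionary shape of a transaction is an illustrative assumption:

```python
from collections import defaultdict

def schedule(transactions):
    """Group transactions by serialization-independent key (groups can
    execute in parallel); within a group, order by the
    serialization-dependent key to preserve transaction integrity."""
    groups = defaultdict(list)
    for txn in transactions:
        groups[txn["si_key"]].append(txn)
    return {k: sorted(v, key=lambda t: t["sd_key"]) for k, v in groups.items()}
```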
20100306776 | DATA CENTER BATCH JOB QUALITY OF SERVICE CONTROL - A machine-controlled method can include determining an extended interval quality of service (QoS) specification for a batch job and determining a remaining data center resource requirement for the batch job based on the extended interval QoS specification. The machine-controlled method can also include determining an immediate QoS specification for the batch job based on the remaining data center resource requirement. | 12-02-2010 |
20100325630 | PARALLEL NESTED TRANSACTIONS - A system for managing transactions, including a first reference cell associated with a starting value for a first variable, a first thread having an outer atomic transaction including a first instruction to write a first value to the first variable, a second thread, executing in parallel with the first thread, having an inner atomic transaction including a second instruction to write a second value to the first variable, where the inner atomic transaction is nested within the outer atomic transaction, a first value node created by the outer atomic transaction and storing the first value in response to execution of the first instruction, and a second value node created by the inner atomic transaction, storing the second value in response to execution of the second instruction, and having a previous node pointer referencing the first value node. | 12-23-2010 |
20100333093 | FACILITATING TRANSACTIONAL EXECUTION THROUGH FEEDBACK ABOUT MISSPECULATION - One embodiment provides a system that facilitates the execution of a transaction for a program in a hardware-supported transactional memory system. During operation, the system records a misspeculation indicator of the transaction during execution of the transaction using hardware transactional memory mechanisms. Next, the system detects a transaction failure associated with the transaction. Finally, the system provides the recorded misspeculation indicator to the program to facilitate a response to the transaction failure by the program. | 12-30-2010 |
20110016470 | Transactional Conflict Resolution Based on Locality - Mechanisms are provided for handling conflicts in a transactional memory system. The mechanisms execute threads in a data processing system in a first conflict resolution mode of operation in which threads execute conflicting transactional blocks speculatively. The mechanisms determine, for a transactional block, if the first conflict resolution mode of operation is to be transitioned to a second conflict resolution mode of operation in which threads accessing conflicting transactional blocks are executed serially and non-speculatively. Moreover, the mechanisms execute a thread that accesses the transactional block using the second conflict resolution mode of operation in response to the determination indicating that the first conflict resolution mode of operation is to be transitioned to the second conflict resolution mode of operation. | 01-20-2011 |
20110023037 | APPLICATION SELECTION OF MEMORY REQUEST SCHEDULING - The present disclosure generally describes systems, methods and devices for operating a computer system with memory based scheduling. The computer system may include one or more of an application program and a memory controller in communication with memory banks. The memory controller may include a scheduler for scheduling requests. The application program may select a scheduling algorithm for scheduling requests from a plurality of scheduling algorithms. The application program may instruct the scheduler to schedule requests using the selected scheduling algorithm. | 01-27-2011 |
20110023038 | BATCH SCHEDULING WITH SEGREGATION - In accordance with the disclosed subject matter there are described techniques for segregating requests issued by threads running in a computer system. | 01-27-2011 |
20110035748 | DATA PROCESSING METHOD, DATA PROCESSING PROGRAM, AND DATA PROCESSING SYSTEM - An execution system executes an update batch according to an update batch execution request from a terminal device and gives a batch execution command to each standby system. Each system stores the content of updated data in its update buffer; and subject to termination of the update batch by each system, the post-update data content is reflected in a database. While the above processing is performed, the execution system and the standby systems accept a reference request from the terminal device; and in a case of “batch not executed” or “batch in execution”, each system searches the database and then returns the pre-update data content to the terminal device; and in a case of “update content being reflected”, each system searches the database or the update buffer and then returns the post-update data content to the terminal device. | 02-10-2011 |
20110055834 | Enrollment Processing - A system for enrollment processing optimization for controlling batch job processing traffic transmitted to a mainframe computer includes an enrollment data input operations system operatively coupled to the mainframe computer and configured to provide a universal front end for data entry of enrollment information. Enrollment records based on the enrollment information are then created. A database system stores the enrollment records, and a workflow application module operatively coupled to the database system is configured to manage processing of the enrollment records and manage transmission of the enrollment records to the mainframe computer for batch processing. A batch throttling control module operatively coupled to the workflow application module and to the mainframe computer controls the rate and the number of enrollment records transmitted by the workflow application module to the mainframe computer for batch processing. | 03-03-2011 |
20110055835 | AIDING RESOLUTION OF A TRANSACTION - A method for aiding resolution of a transaction for use with a transactional processing system comprising a transaction coordinator and a plurality of grouped and inter-connected resource managers, the method comprising the steps of: in response to a communications failure between the transaction coordinator and a first resource manager causing a transaction to have an in-doubt state, connecting, by the transaction coordinator, to a second resource manager; in response to the connecting step, sending by the transaction coordinator to the second resource manager, a resolve request comprising a resolution for the in-doubt transaction; in response to the resolve request, obtaining at the first resource manager, by the second resource manager, a lock to data associated with the in-doubt transaction; and in response to the obtaining step, determining, by the second resource manager, whether the transaction is associated with the first resource manager. | 03-03-2011 |
20110055836 | METHOD AND DEVICE FOR REDUCING POWER CONSUMPTION IN APPLICATION SPECIFIC INSTRUCTION SET PROCESSORS - A method and device for converting first program code into second program code, such that the second program code has an improved execution on a targeted programmable platform, is disclosed. In one aspect, the method includes grouping operations on data for joint execution on a functional unit of the targeted platform, scheduling operations on data in time, and assigning operations to an appropriate functional unit of the targeted platform. Detailed word length information, rather than the typically used approximations like powers of two, may be used in at least one of the grouping, scheduling or assigning operations. | 03-03-2011 |
20110055837 | HYBRID HARDWARE AND SOFTWARE IMPLEMENTATION OF TRANSACTIONAL MEMORY ACCESS - Embodiments of the invention relate a hybrid hardware and software implementation of transactional memory accesses in a computer system. A processor including a transactional cache and a regular cache is utilized in a computer system that includes a policy manager to select one of a first mode (a hardware mode) or a second mode (a software mode) to implement transactional memory accesses. In the hardware mode the transactional cache is utilized to perform read and write memory operations and in the software mode the regular cache is utilized to perform read and write memory operations. | 03-03-2011 |
20110067028 | DISTRIBUTED SERVICE POINT TRANSACTION SYSTEM - A device for processing electronic transactions is disclosed. The device includes a processor configured to receive, from a client processing device, a request for information to complete an electronic transaction by a user at an access device affiliated with an educational institution. The processor is further configured to transmit, to the client processing device, a response to the request, the response configured to be transmitted by the client processing device to the access device. The request for information is triggered at the access device by an identification carrier. The response to the request includes at least one of a permission or denial whether to provide, to the user, access to an educational space or item, access to electronic educational information, or determining at least one of the price and availability of an educational item to the user. A client-side device is also disclosed. Methods and machine-readable mediums are also disclosed. | 03-17-2011 |
20110078685 | SYSTEMS AND METHODS FOR MULTI-LEG TRANSACTION PROCESSING - Embodiments of the invention broadly contemplate systems, methods and arrangements for processing multi-leg transactions. Embodiments of the invention process multi-leg transactions while allowing later arrived orders to get processed during the time when an earlier, tradable multi-leg transaction is pending using a look-ahead mechanism without violating any relevant timing or exchange rules. | 03-31-2011 |
20110078686 | METHODS AND SYSTEMS FOR HIGHLY AVAILABLE COORDINATED TRANSACTION PROCESSING - Embodiments of the invention provide a coordinated transaction processing system capable of providing primary-primary high availability as well as minimal response time to queries via utilization of a virtual reply system between partner nodes. One or more global queues ensure peer nodes are synchronized. | 03-31-2011 |
20110078687 | SYSTEM AND METHOD FOR SUPPORTING RESOURCE ENLISTMENT SYNCHRONIZATION - A system uses a transaction manager for supporting resource enlistment synchronization on an application server with a plurality of threads. The system also includes a plurality of wrapper objects, each of which wraps a resource object associated with the application server. Upon receiving a request from a thread to enlist a resource object in a transaction, the transaction manager first checks with the wrapper object that wraps the resource object to see if a lock is being held on the resource object by another thread in another transaction. If there is a lock, the transaction manager allows the thread to wait and signals it once the lock is freed by the other thread. Otherwise, the transaction manager grants a lock to the thread and holds the lock until an owner of the thread delists the resource object. | 03-31-2011 |
20110093854 | SYSTEM COMPRISING A PLURALITY OF PROCESSING UNITS MAKING IT POSSIBLE TO EXECUTE TASKS IN PARALLEL, BY MIXING THE MODE OF EXECUTION OF CONTROL TYPE AND THE MODE OF EXECUTION OF DATA FLOW TYPE - A system including a plurality of processing units for executing tasks in parallel and a communication network. The processing units are organized into clusters of units, each cluster comprising a local memory. The system includes means for statically allocating tasks to each cluster of units, so that a task of an application is processed by the same cluster of units from one execution to another. Each cluster includes cluster management means for allocating tasks to each of its processing units and space in the local memory for executing them, so that a given task of an application may not be processed by the same processing unit from one execution to another. The cluster management means includes means for managing the tasks, means for managing the processing units, means for managing the local memory and means for managing the communications involving its processing units. The management means operate simultaneously and cooperatively. | 04-21-2011 |
20110093855 | MULTI-THREAD REPLICATION ACROSS A NETWORK - A replicated set of data is processed by receiving at a target, from one of a plurality of replication processing threads, a received batch of one or more non-synchronization tasks. It is determined that the received batch comprises a next batch to be performed at the target and the non-synchronization tasks included in the batch are performed in a task order. | 04-21-2011 |
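The ordering rule in the replication abstract above — batches may be delivered by any of several threads, but the target only performs a batch when it is the next one due, with tasks kept in task order — can be sketched like this. The sequence-number field and class name are assumptions for illustration.

```python
class ReplicationTarget:
    """Applies replication batches strictly in batch order even when
    delivery threads hand them over out of order; tasks within a batch
    are performed in their stated task order."""
    def __init__(self):
        self.next_seq = 0
        self.pending = {}     # batches that arrived ahead of their turn
        self.applied = []     # flat log of performed tasks

    def receive(self, seq, tasks):
        self.pending[seq] = tasks
        # drain every batch that has become the next one to perform
        while self.next_seq in self.pending:
            for task in self.pending.pop(self.next_seq):
                self.applied.append(task)
            self.next_seq += 1

target = ReplicationTarget()
target.receive(1, ["c", "d"])   # early: held until batch 0 arrives
target.receive(0, ["a", "b"])   # applies batch 0, then the held batch 1
```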
20110131579 | BATCH JOB MULTIPLEX PROCESSING METHOD - A batch job multiplex processing method which solves the problem that a system which performs multiplex processing including parallel processing on plural nodes cannot cope with a sudden increase in the volume of data to be batch-processed using a predetermined value of multiplicity, for example, in securities trading in which the number of transactions may suddenly increase on a particular day. The method dynamically determines the value of multiplicity of processing including parallel processing in execution of a batch job on plural nodes. More specifically, in the method, multiplicity is determined depending on the node status (node performance and workload) and the status of an input file for the batch job. | 06-02-2011 |
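A heuristic of the kind the multiplex-processing abstract describes — choosing the multiplicity from the input file's volume and the nodes' spare capacity rather than a fixed value — might look like the following. The formula and parameter names are illustrative assumptions, not the patented method.

```python
import math

def choose_multiplicity(input_records, records_per_worker, nodes):
    """Pick a degree of parallelism for a batch job from the input
    volume and per-node idle capacity; `nodes` maps node name to its
    number of idle worker slots."""
    wanted = math.ceil(input_records / records_per_worker)
    available = sum(nodes.values())   # node status caps the multiplicity
    return max(1, min(wanted, available))

# A sudden spike in transactions raises the chosen multiplicity,
# capped by what the nodes can actually absorb.
quiet = choose_multiplicity(1_000, 500, {"n1": 4, "n2": 4})
spike = choose_multiplicity(100_000, 500, {"n1": 4, "n2": 4})
```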
20110154341 | SYSTEM AND METHOD FOR A TASK MANAGEMENT LIBRARY TO EXECUTE MAP-REDUCE APPLICATIONS IN A MAP-REDUCE FRAMEWORK - An improved system and method for a task management library to execute map-reduce applications is provided. A map-reduce application may be operably coupled to a task manager library and a map-reduce library on a client device. The task manager library may include a wrapper application programming interface that provides application programming interfaces invoked by a wrapper to parse data input values of the map-reduce application. The task manager library may also include a configurator that extracts data and parameters of the map-reduce application from a configuration file to configure the map-reduce application for execution, a scheduler that determines an execution plan based on input and output data dependencies of mappers and reducers, a launcher that iteratively launches the mappers and reducers according to the execution plan, and a task executor that requests the map-reduce library to invoke execution of mappers on mapper servers and reducers on reducer servers. | 06-23-2011 |
20110154342 | METHOD AND APPARATUS FOR PROVIDING REMINDERS - A method and computing device for providing task reminder data associated with event data stored in a database is provided. The computing device comprises a processing unit interconnected with a memory device. A list of tasks associated with the event data is received, each respective task in the list of tasks associated with task data. Respective reminder times for each task are determined at the processing unit, such that a display device can be controlled to provide respective representations of the task data, in association with the event data, at respective times substantially similar to each respective reminder time. The list of tasks is stored in the database in association with the event data. Input data is received, indicative that at least one of a start time and an end time of an event associated with the event data has changed to a respective new start time and new end time. For each task in the list of tasks, a given respective reminder time is changed to a new given respective reminder time based on at least one of the new start time and the new end time when the given respective reminder time comprises a time relative to at least one of the start time and the end time. | 06-23-2011 |
20110161959 | Batch Job Flow Management - Systems and methods for improved batch flow management are described. At least some embodiments include a computer system for managing a job flow including a memory storing a plurality of batch queue jobs grouped into Services each including a job and a predecessor job. A time difference is the difference between a scheduled job start time and an estimated predecessor job end time. Jobs with a preceding time gap include jobs immediately preceded only by non-zero time differences. The job start depends upon the predecessor job completion. The computer system further includes a processing unit that identifies jobs preceded by a time gap, selects one of the Services, and traverses in reverse chronological order a critical path of dependent jobs within the Service until a latest job with a preceding time gap is identified or at least those jobs along the critical path preceded by another job are traversed. | 06-30-2011 |
20110173619 | Apparatus and method for optimized application of batched data to a database - A computer readable medium storing executable instructions includes executable instructions to: receive a continuous stream of database transactions; form batches of database transactions from the continuous stream of database transactions; combine batches of database transactions with similar operations to form submission groups; identify dependencies between submission groups to designate priority submission groups; and apply priority submission groups to a database target substantially synchronously with the receipt of the continuous stream of database transactions. | 07-14-2011 |
20110185359 | Determining A Conflict in Accessing Shared Resources Using a Reduced Number of Cycles - Illustrated is a system and method for identifying a potential conflict, using a conflict determination engine, between a first transaction and a second transaction stored in a conflict hash map, the potential conflict based upon a potential accessing of a shared resource common to both the first transaction and the second transaction. The system and method further includes determining an actual conflict, using the conflict determination engine to access the combination of the conflict hash map and the read set hash map, between the first transaction and the second transaction, where a time stamp value of only selected shared locations has changed relative to a previous time stamp value, the time stamp value stored in the read set hash map and accessed using the first transaction. | 07-28-2011 |
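The two-stage check above — a cheap potential-conflict test against a conflict map, then an actual-conflict test that consults recorded time stamps only for the shared locations — can be sketched as follows. The dict-based maps and field layout are assumptions for illustration.

```python
def potential_conflict(conflict_map, txn_a, txn_b):
    """Two transactions potentially conflict when the conflict map
    shows a shared location touched by both."""
    return bool(conflict_map[txn_a] & conflict_map[txn_b])

def actual_conflict(conflict_map, read_set, current_stamps, txn_a, txn_b):
    """A potential conflict is real only if the time stamp of some
    shared location has changed relative to the value recorded in
    txn_a's read set (only shared locations are checked)."""
    shared = conflict_map[txn_a] & conflict_map[txn_b]
    return any(current_stamps[loc] != read_set[txn_a][loc] for loc in shared)

conflict_map = {"t1": {"x", "y"}, "t2": {"y", "z"}}
read_set = {"t1": {"x": 5, "y": 9}}
stamps_unchanged = {"x": 5, "y": 9, "z": 1}
stamps_changed = {"x": 5, "y": 10, "z": 1}   # only shared loc "y" moved
```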
20110185360 | MULTIPROCESSING TRANSACTION RECOVERY MANAGER - A multiprocessing transaction recovery manager, operable with a transactional application manager and a resource manager, comprises a threadsafety indicator for receiving and storing positive and non-positive threadsafety data of at least one transactional component managed by one of the transactional application manager and the resource manager; a commit protocol component for performing commit processing for the at least one transactional component; and a thread selector responsive to positive threadsafety data for selecting a single thread for the commit processing to be performed by the commit protocol component. The thread selector is further operable to select plural threads for the commit processing to be performed by the commit protocol component responsive to non-positive threadsafety data. | 07-28-2011 |
20110197194 | TRANSACTION-INITIATED BATCH PROCESSING - A system and method is provided for initiating batch processing on a computer system from a terminal. The method generates a message from the terminal, where the message defines a transaction to be performed on a computer system. The transaction schedules and runs a program that extracts data from the message. The message is then transmitted to the computer system. The data is then used to generate batch job control language and a batch job is run on the computer system. The output of the batch job is then routed back to the terminal. | 08-11-2011 |
20110209151 | AUTOMATIC SUSPEND AND RESUME IN HARDWARE TRANSACTIONAL MEMORY - An apparatus and method is disclosed for a computer processor configured to access a memory shared by a plurality of processing cores and to execute a plurality of memory access operations in a transactional mode as a single atomic transaction and to suspend the transactional mode in response to determining an implicit suspend condition, such as a program control transfer. As part of executing the transaction, the processor marks data accessed by the speculative memory access operations as being speculative data. In response to determining a suspend condition (including by detecting a control transfer in an executing thread) the processor suspends the transactional mode of execution, which includes setting a suspend flag and suspending marking speculative data. If the processor later detects a resumption condition (e.g., a return control transfer corresponding to a return from the control transfer), the processor is configured to resume the marking of speculative data. | 08-25-2011 |
20110225586 | Intelligent Transaction Merging - An apparatus and methods are disclosed for intelligently determining when to merge transactions to backup storage. In particular, in accordance with the illustrative embodiment, queued transactions may be merged based on a variety of criteria, including, but not limited to, one or more of the following: the number of queued transactions; the rate of growth of the number of queued transactions; the calendrical time; estimates of the time required to execute the individual transactions; a measure of importance of the individual transactions; the transaction types of the individual transactions; a measure of importance of one or more data updated by the individual transactions; a measure of availability of one or more resources; a current estimate of the time penalty associated with shadowing a page of memory; and the probability of rollback for the individual transactions, and for the merged transaction. | 09-15-2011 |
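A decision rule combining a few of the criteria the merging abstract lists (queue depth, growth rate, estimated execution times) might be sketched as below. The thresholds and the specific rule are illustrative assumptions; the abstract names many more criteria than this.

```python
def should_merge(queue_len, growth_rate, est_costs,
                 max_queue=100, max_growth=5.0, cheap_total=0.050):
    """Decide whether to merge queued backup transactions into one,
    using stand-ins for three of the abstract's criteria."""
    if queue_len > max_queue:
        return True                       # backlog already too deep
    if growth_rate > max_growth:
        return True                       # backlog growing too fast
    return sum(est_costs) > cheap_total   # one merged write is cheaper

merged = should_merge(queue_len=12, growth_rate=8.0, est_costs=[0.002] * 12)
kept_separate = should_merge(queue_len=3, growth_rate=0.5, est_costs=[0.003] * 3)
```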
20110231848 | FORECASTING SYSTEMS AND METHODS - Improved methods and systems are provided for asynchronously updating forecast rollup numbers. The asynchronicity is achieved by decoupling the source data change from further manipulations of the source data, for example in calculating and updating forecast rollup numbers by user role hierarchy, layer by layer. An event message queue implementation can be used for asynchronous processing. The process works by dequeuing a batch of event messages and then deduplicating and sorting them before applying forecast logic. Forecast numbers are updated based on target data and then rolled up the user role levels by aggregating forecast numbers for all subordinate forecast data entries. | 09-22-2011 |
20110246993 | System and Method for Executing a Transaction Using Parallel Co-Transactions - The transactional memory system described herein may implement parallel co-transactions that access a shared memory such that at most one of the co-transactions in a set will succeed and all others will fail (e.g., be aborted). Co-transactions may improve the performance of programs that use transactional memory by attempting to perform the same high-level operation using multiple algorithmic approaches, transactional memory implementations and/or speculation options in parallel, and allowing only the first to complete to commit its results. If none of the co-transactions succeeds, one or more may be retried, possibly using a different approach and/or transactional memory implementation. The at-most-one property may be managed through the use of a shared “done” flag. Conflicts between co-transactions in a set and accesses made by transactions or activities outside the set may be managed using lazy write ownership acquisition and/or a priority-based approach. Each co-transaction may execute on a different processor resource. | 10-06-2011 |
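The at-most-one property above — several approaches to one operation race, and only the first to complete commits — can be sketched with a shared "done" flag, which the abstract itself mentions. Here a Python lock stands in for an atomic compare-and-swap; names are illustrative.

```python
import threading

class CoTransactionSet:
    """At most one co-transaction in the set commits; the rest fail.
    The shared 'done' flag is the only coordination point."""
    def __init__(self):
        self._lock = threading.Lock()
        self.done = False
        self.result = None

    def try_commit(self, name, value):
        with self._lock:          # stand-in for an atomic test-and-set
            if self.done:
                return False      # a sibling co-transaction already won
            self.done = True
            self.result = (name, value)
            return True

def run_approaches(approaches):
    """Run alternative implementations of one high-level operation in
    parallel; only the first to finish publishes its result."""
    txset = CoTransactionSet()
    threads = [threading.Thread(target=lambda n=n, f=f: txset.try_commit(n, f()))
               for n, f in approaches]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return txset.result

result = run_approaches([("fast", lambda: 42), ("slow", lambda: 42)])
```

Which approach wins is nondeterministic, but exactly one commits; the others observe `done` and abort.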
20110252426 | PROCESSING BATCH TRANSACTIONS - A batch data stream, which comprises inputs to a serial batch application program, is received. Batch code from the serial batch application program is translated into parallel code that is executable in parallel by multiple execution units. Checkpoints are applied to the batch data stream that has been received, and data between the checkpoints defines multiple threads. The multiple threads are stored in an input queue that feeds data inputs to multiple execution units. The parallel code is then executed in the multiple execution units by using the multiple threads as inputs. | 10-13-2011 |
20110258630 | METHODS AND SYSTEMS FOR BATCH PROCESSING IN AN ON-DEMAND SERVICE ENVIRONMENT - In accordance with embodiments disclosed herein, there are provided mechanisms and methods for batch processing in an on-demand service environment. For example, in one embodiment, mechanisms include receiving a processing request for a multi-tenant database, in which the processing request specifies processing logic and a processing target group within the multi-tenant database. Such an embodiment further includes dividing or chunking the processing target group into a plurality of processing target sub-groups, queuing the processing request with a batch processing queue for the multi-tenant database among a plurality of previously queued processing requests, and releasing each of the plurality of processing target sub-groups for processing in the multi-tenant database via the processing logic at one or more times specified by the batch processing queue. | 10-20-2011 |
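The divide-and-queue flow above — chunking a processing target group into sub-groups, queuing them, and releasing each sub-group to the processing logic — can be sketched as follows. The function names and chunk size are illustrative assumptions.

```python
from collections import deque

def chunk(target_group, size):
    """Divide a processing target group into sub-groups of at most
    `size` records, as in the abstract's 'dividing or chunking' step."""
    return [target_group[i:i + size] for i in range(0, len(target_group), size)]

def process_in_batches(records, size, logic):
    """Queue the sub-groups and release them one at a time, applying
    the caller-supplied processing logic to each record."""
    queue = deque(chunk(records, size))
    results = []
    while queue:
        results.extend(logic(rec) for rec in queue.popleft())
    return results

out = process_in_batches(list(range(7)), 3, lambda r: r * 2)
```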
20110271282 | Multi-Threaded Sort of Data Items in Spreadsheet Tables - To sort data items in a spreadsheet table, data items in the spreadsheet table are divided into a plurality of blocks. Multiple threads are used to sort the data items in the blocks. After the data items in the blocks are sorted, multiple merge threads are used to generate a final result block. The final result block contains each of the data items in the spreadsheet table. Each of the merge threads is a thread that merges two source blocks to generate a result block. Each of the source blocks is either one of the sorted blocks or one of the result blocks generated by another one of the merge threads. A sorted version of the spreadsheet table is then displayed. The data items in the sorted version of the spreadsheet table are ordered according to an order of the data items in the final result block. | 11-03-2011 |
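The sort-then-merge pipeline above can be sketched compactly. One simplification is labeled up front: the abstract describes a tree of pairwise merge threads, while this sketch uses a single k-way merge for the final result block.

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

def parallel_sort(items, blocks=4):
    """Split the data items into blocks, sort the blocks in worker
    threads, then merge the sorted blocks into one final result block
    (a single k-way merge rather than pairwise merge threads)."""
    size = max(1, -(-len(items) // blocks))   # ceiling division
    parts = [items[i:i + size] for i in range(0, len(items), size)]
    with ThreadPoolExecutor() as pool:
        sorted_parts = list(pool.map(sorted, parts))
    return list(heapq.merge(*sorted_parts))

data = [9, 1, 7, 3, 8, 2, 6, 4, 5, 0]
result = parallel_sort(data)
```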
20110283283 | DETERMINING MULTIPROGRAMMING LEVELS - A method of managing the execution of a workload of transactions of different transaction types on a computer system. Each transaction type may have a different resource requirement. The method may include intermittently, during execution of the workload, determining the performance of each transaction type. A determination may be made of whether there is an overloaded transaction type, in which performance is degraded with an increase in the number of transactions of that type. If there is an overloaded transaction type, the number of transactions of at least one transaction type may be changed. | 11-17-2011 |
20110296419 | EVENT-BASED COORDINATION OF PROCESS-ORIENTED COMPOSITE APPLICATIONS - A process model specified using, for example, UML activity diagrams can be translated into an event-based model that can be executed on top of a coordination middleware. For example, a process model may be encoded as a collection of coordinating objects that interact with each other through a coordination middleware including a shared memory space. This approach is suitable for undertaking post-deployment adaptation of process-oriented composite applications. In particular, new control dependencies can be encoded by dropping new (or enabling existing) coordinating objects into the space and/or disabling existing ones. | 12-01-2011 |
20110307893 | ROLE-BASED AUTOMATION SCRIPTS - A computer performs an action called for by a script. The computer determines how to perform the action based in part on a role template not included in the script and based in part on a role-template extension included in the script. | 12-15-2011 |
20120005680 | PROCESSING A BATCHED UNIT OF WORK - A batched unit of work is associated with a plurality of messages for use with a data store. A backout count is associated with the number of times that work associated with the batched unit of work is backed out, and a backout threshold is associated with the backout count. A commit count is associated with committing the batched unit of work in response to successful commits for a predefined number of the plurality of messages. A checker checks whether the backout count is greater than zero and less than the backout threshold. An override component, responsive to the backout count being greater than zero and less than the backout threshold, overrides the commit count and commits the batched unit of work for a subset of the plurality of messages. | 01-05-2012 |
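The checker-and-override decision above reduces to a small rule, sketched here with assumed names and return values: when some work has been backed out but fewer times than the threshold, the override commits the batch for the subset of messages that succeeded; otherwise the ordinary commit-count rule applies.

```python
def decide_commit(backout_count, backout_threshold,
                  committed_messages, commit_count):
    """Return 'commit-subset' when the override applies (backout count
    is greater than zero and less than the threshold); otherwise fall
    back to the normal commit-count rule."""
    if 0 < backout_count < backout_threshold:
        return "commit-subset"    # override the commit count
    if committed_messages >= commit_count:
        return "commit-batch"     # predefined number of commits reached
    return "wait"

overridden = decide_commit(backout_count=2, backout_threshold=5,
                           committed_messages=10, commit_count=50)
normal = decide_commit(backout_count=0, backout_threshold=5,
                       committed_messages=50, commit_count=50)
```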
20120030678 | Method and Apparatus for Tracking Documents - A method and apparatus are provided for tracking documents. The documents are tracked by simultaneously monitoring each document's electronic processing status and physical location. Determinations are made whether specific combinations of electronic processing states and physical locations are valid and whether specific movements of documents are permitted. Invalid combinations or movements are reported to a reporting station. The preparation of batches of documents prior to scanning may be monitored and operator metrics related to the batch prep process may be tracked. Exception documents rejected during document processing may be monitored to enable retrieval of such documents. | 02-02-2012 |
20120030679 | Resource Allocator With Knowledge-Based Optimization - An automated resource allocation technique for scheduling a batch computer job in a multi-computer system environment. According to example embodiments, resource allocation processing may be performed when receiving a batch computer job that needs to be run by a software application executable on more than one computing system in the multi-computer system environment. The job may be submitted for pre-processing analysis by the software application. A pre-processing analysis result comprising job evaluation information may be received from the software application and the result may be evaluated to select a computing system in the multi-computer system environment that is capable of executing the application to run the job. The job may be submitted to the selected computing system to have the software application run the job to completion. | 02-02-2012 |
20120042314 | METHOD AND DEVICE ENABLING THE EXECUTION OF HETEROGENEOUS TRANSACTION COMPONENTS - The invention relates in particular to the execution of at least one transaction in a transaction processing system comprising a transaction-oriented monitor. | 02-16-2012 |
20120072915 | Shared Request Grouping in a Computing System - A queuing module is configured to determine the presence of at least one shared request in a request queue, and in the event at least one shared request is determined to be present in the queue: determine the presence of a waiting exclusive request located in the queue after the at least one shared request, and in the event a waiting exclusive request is determined to be located in the queue after the at least one shared request: determine whether grouping a new shared request with the at least one shared request violates a deferral limit of the waiting exclusive request; and, in the event grouping the new shared request with the at least one shared request does not violate the deferral limit of the waiting exclusive request, group the new shared request with the at least one shared request. | 03-22-2012 |
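The grouping rule above can be sketched with one assumed reading of the deferral limit: it bounds how many shared requests may be grouped ahead of a waiting exclusive request. The queue encoding ("S" for shared, "X" for exclusive) and that reading are illustrative assumptions.

```python
def try_group_shared(queue, deferral_limit):
    """Decide whether a new shared request may join the shared group at
    the head of the queue without pushing the first waiting exclusive
    request past its deferral limit."""
    shared_run = 0
    for req in queue:
        if req == "S":
            shared_run += 1
        elif req == "X":
            # grouping one more shared request defers this exclusive
            # request further; refuse when that would break its limit
            return shared_run + 1 <= deferral_limit
    return True   # no waiting exclusive request: grouping is always safe

grouped = try_group_shared(["S", "S", "X"], deferral_limit=4)
refused = try_group_shared(["S", "S", "X"], deferral_limit=2)
```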
20120102493 | ORDERED SCHEDULING OF SUSPENDED PROCESSES BASED ON RESUMPTION EVENTS - A method includes receiving a plurality of resumption events associated with a plurality of suspended processes. Each resumption event is associated with a suspended process. Each resumption event also includes an execution time and a resumption time window. The method includes determining resumption deadlines for the suspended processes and determining a resumption order based on the resumption deadlines. The resumption deadline for a suspended process is based on the execution time and the resumption time window of the corresponding resumption event. The suspended processes are scheduled for execution in accordance with the resumption order. | 04-26-2012 |
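The deadline computation above is stated abstractly (a deadline "based on the execution time and the resumption time window"); one plausible reading, used in this sketch, is that a process must resume early enough to finish its execution inside its window. That formula and the field names are assumptions.

```python
import heapq

def resumption_order(events):
    """Order suspended processes by resumption deadline, computed here
    as window end minus execution time (latest safe resumption point)."""
    heap = []
    for ev in events:
        deadline = ev["window_end"] - ev["execution_time"]
        heapq.heappush(heap, (deadline, ev["process"]))
    # pop in deadline order: the schedule for resuming the processes
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

order = resumption_order([
    {"process": "p1", "execution_time": 5, "window_end": 100},
    {"process": "p2", "execution_time": 30, "window_end": 100},
    {"process": "p3", "execution_time": 1, "window_end": 50},
])
```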
20120110582 | REAL-TIME COMPUTING RESOURCE MONITORING - Techniques used to enhance the execution of long-running or complex software application instances and jobs on computing systems are disclosed herein. In one embodiment, a real time, self-predicting job resource monitor is employed to predict inadequate system resources on the computing system and failure of a job execution on the computing system. This monitor may not only determine if inadequate resources exist prior to execution of the job, but may also detect in real time if inadequate resources will be encountered during the execution of the job for cases where resource availability has unexpectedly decreased. If a resource deficiency is predicted on the executing computer system, the system may pause the job and automatically take corrective action or alert a user. The job may resume after the resource deficiency is met. Additional embodiments also integrate this resource monitoring capability with the adaptive selection of a computer system or application execution environment based on resource capability predictions and benchmarks. | 05-03-2012 |
20120131582 | System and Method for Real-Time Batch Account Processing - The present disclosure discloses a technique for real-time batch account processing. In one aspect, a method includes: (1) receiving, by an account processing center, a marked request for batch processing; (2) caching the marked request; (3) pre-processing sub-requests of a type relating to an account that are in the marked request, including merging operations of a type for processing for the account; and (4) processing the marked request, including the pre-processed sub-requests, to provide a processing result to a corresponding client. The request for batch processing can be directly submitted at the client or submitted by a client through an interface that is provided to the client for submitting a request including the request for batch processing. When submitting the request for batch processing, the client can wait for the processing result online and obtain the processing result in real time. Further, when receiving the request for batch processing, the account processing center can pre-process it, e.g., by merging operations for the same account, and thus increase the efficiency of batch processing. | 05-24-2012 |
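The pre-processing step above (merging same-type operations on the same account) can be sketched as follows. The tuple layout and the merge-by-summing rule are illustrative assumptions; the abstract does not fix a data model.

```python
def merge_account_ops(sub_requests):
    """Merge sub-requests that apply the same operation type to the
    same account, summing their amounts, so the batch executes one
    operation per (account, type) pair instead of many."""
    merged = {}
    for account, op, amount in sub_requests:
        merged[(account, op)] = merged.get((account, op), 0) + amount
    return [(a, op, amt) for (a, op), amt in merged.items()]
```

Two credits of 10 and 5 to the same account, for instance, collapse into a single credit of 15 before the batch runs.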
20120137297 | MODIFYING SCHEDULED EXECUTION OF OBJECT MODIFICATION METHODS ASSOCIATED WITH DATABASE OBJECTS - An original schedule module configured to receive an original schedule configured to trigger execution of a first original batch of entries including a set of object modification methods and a corresponding set of database objects before triggering execution of a second original batch of entries including a set of object modification methods and a corresponding set of database objects. An analysis module can be configured to determine logic for execution of each entry from the first original batch of entries based on the original schedule. A schedule generator can be configured to define, based on the logic for execution and based on the original schedule, a modified schedule configured to trigger parallel execution of a first modified batch of entries including less than all of the first original batch of entries, and a second modified batch of entries including less than all of the second original batch of entries. | 05-31-2012 |
20120151488 | Measuring Transaction Performance Across Application Asynchronous Flows - A mechanism modifies a deployment descriptor of each application component including at least one producer application component or consumer application component, by adding, for each producer application component or consumer application component, an application component identifier, a producer or consumer type, and a recipient identifier of a recipient the application component uses. Responsive to determining a match exists and the given application component is of producer type, the application server virtual machine logs an identifier of a recipient containing a message sent by the given application component, a correlation identifier of the given application component, and an execution start time. Responsive to determining a match exists and the given application component is of consumer type, the application server virtual machine logs an identifier of the recipient resource containing a message processed by the given application component, a correlation identifier of the given application component, and an execution end time. | 06-14-2012 |
20120159492 | REDUCING PROCESSING OVERHEAD AND STORAGE COST BY BATCHING TASK RECORDS AND CONVERTING TO AUDIT RECORDS - Systems, methods and articles of manufacture are disclosed for processing documents for electronic discovery. A request may be received to perform a task on documents, each document having a distinct document identifier. A task record may be generated to represent the requested task. The task record may include information specific to the request task. However, the task record need not include any document identifiers. At least one batch record may be generated that includes the document identifier for each of the documents. The task record may be associated with the at least one batch record. The requested task may be performed according to the task record and the at least one batch record. An audit record may be generated for the performed task. The audit record may be associated with the at least one batch record. | 06-21-2012 |
20120167097 | ADAPTIVE CHANNEL FOR ALGORITHMS WITH DIFFERENT LATENCY AND PERFORMANCE POINTS - A method for processing requests in a channel can include receiving a first request in the channel, running calculations on the first request in a processing time T | 06-28-2012 |
20120167098 | Distributed Transaction Management Using Optimization Of Local Transactions - A computer-implemented method, a computer program product, and a system are provided. A transaction master for each of a plurality of transactions of a database is provided. Each transaction master is configured to communicate with at least one transaction slave to manage execution of a transaction in the plurality of transactions. A transaction token that specifies data to be visible for the transaction on the database is generated. The transaction token includes a transaction identifier for identifying whether the transaction is a committed transaction or an uncommitted transaction. The transaction master is configured to update the transaction token after execution of the transaction. A determination whether the transaction can be executed on the at least one transaction slave without accessing data specified by the transaction token is made. The transaction is executed on the at least one transaction slave using a transaction token stored at the at least one transaction slave. | 06-28-2012 |
20120167099 | Intelligent Retry Method Using Remote Shell - Method for issuing and monitoring a remote batch job, method for processing a batch job, and system for processing a remote batch job. The method for issuing and monitoring a remote batch job includes formatting a command to be sent to a remote server to include a sequence identification composed of an issuing server identification and a time stamp, forwarding the command from the issuing server to the remote server for processing, and determining success or failure of the processing of the command at the remote server. When the failure of the processing of the command at the remote server is determined, the method further includes instructing the remote server to retry the command processing. | 06-28-2012 |
20120174109 | PROCESSING A BATCHED UNIT OF WORK - A batched unit of work is associated with a plurality of messages for use with a data store. A backout count tracks the number of times that work associated with the batched unit of work has been backed out. A backout threshold is associated with the backout count. A commit count is associated with committing the batched unit of work in response to successful commits for a predefined number of the plurality of messages. A checker checks whether the backout count is greater than zero and less than the backout threshold. An override component, responsive to the backout count being greater than zero and less than the backout threshold, overrides the commit count and commits the batched unit of work for a subset of the plurality of messages. | 07-05-2012 |
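The override rule described above can be sketched directly: try each message, count failures as the backout count, and commit the successful subset when some but fewer than the threshold failed. The function shape and the `handler` callback are illustrative assumptions.

```python
def process_batch(messages, handler, backout_threshold):
    """Process a batched unit of work. `handler` returns True when a
    message commits successfully. If the backout count is greater than
    zero but below backout_threshold, override the commit count and
    commit the subset of messages that succeeded."""
    committed, backout_count = [], 0
    for msg in messages:
        if handler(msg):
            committed.append(msg)
        else:
            backout_count += 1
    if backout_count == 0:
        return committed          # normal commit of the full batch
    if backout_count < backout_threshold:
        return committed          # override: commit the good subset
    return []                     # too many failures: back the batch out
```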
20120180053 | CALL STACK AGGREGATION AND DISPLAY - A call stack aggregation mechanism aggregates call stacks from multiple threads of execution and displays the aggregated call stack to a user in a manner that visually distinguishes between the different call stacks in the aggregated call stack. The multiple threads of execution may be on the same computer system or on separate computer systems. | 07-12-2012 |
20120192187 | Customizing Automated Process Management - Embodiments of an event-driven process management and automation system are disclosed. Such system may be particularly appropriate for a multi-tenant environment so that a single process handling flow may be generated for a given process. Because in a multi-tenant environment many different entities may desire to customize or optimize this process handling flow for their particular usage, modifications to the process flow may be easily handled by a non-technical user to realize process modification without incurring additional development costs. Using a multi-level hierarchical inheritance model in accordance with an embodiment of the present invention, a process may be standardized, with focused customization available on a macro and/or micro level. | 07-26-2012 |
20120192188 | Resource Allocator With Knowledge-Based Optimization - An automated resource allocation technique for scheduling a batch computer job in a multi-computer system environment. According to example embodiments, resource allocation processing may be performed when receiving a batch computer job that needs to be run by a software application executable on more than one computing system in the multi-computer system environment. The job may be submitted for pre-processing analysis by the software application. A pre-processing analysis result comprising job evaluation information may be received from the software application and the result may be evaluated to select a computing system in the multi-computer system environment that is capable of executing the application to run the job. The job may be submitted to the selected computing system to have the software application run the job to completion. | 07-26-2012 |
20120198456 | REDUCING THE NUMBER OF OPERATIONS PERFORMED BY A PERSISTENCE MANAGER AGAINST A PERSISTENT STORE OF DATA ITEMS - Method, apparatus, and computer program product for reducing the number of operations performed by a persistence manager against a persistent store of data items. A plurality of requests from an application are received. Each request is mapped into a transaction for performance against the persistent store, each transaction having at least one operation. Transactions are accumulated and preprocessed to reduce the number of operations for performance against the persistent store. | 08-02-2012 |
20120204180 | MANAGING JOB EXECUTION - A method, system or computer usable program product for managing jobs scheduled for execution on a target system in which some jobs may spawn additional jobs scheduled for execution on the target system including intercepting jobs scheduled for execution in the target system, determining whether there is resource sufficiency in the target system for executing jobs, responsive to an affirmative determination of resource sufficiency, releasing previously intercepted jobs for execution in the target system, computing a limit of a number of jobs which can be concurrently scheduled by an external system to the target system, and transmitting the computed limit to the external system. | 08-09-2012 |
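The interception-and-release cycle described above can be sketched as follows. Treating "resource sufficiency" as a count of free execution slots is a deliberate simplification, and all names are illustrative; the abstract leaves the resource model unspecified.

```python
def admit_jobs(intercepted, free_slots):
    """Release previously intercepted jobs while resources suffice, and
    compute the remaining concurrency limit to report back to the
    external scheduler."""
    released = intercepted[:free_slots]
    still_held = intercepted[free_slots:]
    limit_for_external = max(0, free_slots - len(released))
    return released, still_held, limit_for_external
```

With three intercepted jobs and two free slots, two jobs are released, one stays held, and the external system is told it may schedule nothing further.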
20120222032 | MONITORING REAL-TIME COMPUTING RESOURCES - Techniques used to enhance the execution of long-running or complex software application instances and jobs on computing systems. In one embodiment, inadequate system resources and failure of a job execution on the computing system may be predicted. A determination may be made as to whether inadequate resources exist prior to execution of the job, and resource requirements may be monitored to detect in real time if inadequate resources will be encountered during the job execution for cases where, for example, resource availability has unexpectedly decreased. If a resource deficiency is predicted on the executing computer system, the job may be paused and corrective action may be taken or a user may be alerted. The job may resume after the resource deficiency is met. Additional embodiments may integrate resource monitoring with the adaptive selection of a computer system or application execution environment based on resource capability predictions and benchmarks. | 08-30-2012 |
20120246651 | SYSTEM AND METHOD FOR SUPPORTING BATCH JOB MANAGEMENT IN A DISTRIBUTED TRANSACTION SYSTEM - A system and method can support batch job management in a distributed system using a queue system with a plurality of queues and one or more job management servers. The queue system can represent a life cycle for executing a job by a job execution component, with each queue in the queue system adapted to receive one or more messages that represent a job status in the life cycle for executing the job. The one or more job management servers in the distributed system can direct the job execution component to execute the job, with each job management server monitoring one or more queues in the queue system, and performing at least one operation on the one or more messages in the queue system corresponding to a change of a job status for executing the job. | 09-27-2012 |
20120272246 | DYNAMICALLY SCALABLE PER-CPU COUNTERS - Embodiments include a multiprocessing method including obtaining a local count of a processor event at each of a plurality of processors in a multiprocessor system. A total count of the processor event is dynamically updated to include the local count at each processor having reached an associated batch size. The batch size associated with one or more of the processors is dynamically varied according to the value of the total count. | 10-25-2012 |
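The counter scheme above can be illustrated with a small sketch: each CPU accumulates a local count and folds it into the global total only when the local count reaches its batch size, and the batch size grows with the total to reduce contention. The specific growth rule (`1 + total // 100`) is an illustrative assumption, not the patent's formula.

```python
class ScalableCounter:
    """Per-CPU event counter with a dynamically varied batch size."""

    def __init__(self, ncpus):
        self.local = [0] * ncpus   # per-CPU local counts
        self.total = 0             # global total, updated in batches

    def _batch_size(self):
        # Batch size grows as the total grows (illustrative policy).
        return 1 + self.total // 100

    def inc(self, cpu):
        self.local[cpu] += 1
        if self.local[cpu] >= self._batch_size():
            self.total += self.local[cpu]   # fold local count into total
            self.local[cpu] = 0
```

While the total is small the batch size is 1 and every increment is visible immediately; at a total of 200 each CPU batches three increments before touching the shared total.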
20120284719 | DISTRIBUTED MULTI-PHASE BATCH JOB PROCESSING - A distributed job-processing environment including a server, or servers, capable of receiving and processing user-submitted job queries for data sets on backend storage servers. The server identifies computational tasks to be completed on the job as well as a time frame to complete some of the computational tasks. Computational tasks may include, without limitation, preprocessing, parsing, importing, verifying dependencies, retrieving relevant metadata, checking syntax and semantics, optimizing, compiling, and running. The server performs the computational tasks, and once the time frame expires, a message is transmitted to the user indicating which tasks have been completed. The rest of the computational tasks are subsequently performed, and eventually, job results are transmitted to the user. | 11-08-2012 |
20120284720 | HARDWARE ASSISTED SCHEDULING IN COMPUTER SYSTEM - Apparatus and methods for hardware assisted scheduling of software tasks in a computer system are disclosed. For example, a computer system comprises a first pool for maintaining a set of executable software threads, a first scheduler, a second pool for maintaining a set of active software threads, and a second scheduler. The first scheduler assigns a subset of the set of executable software threads to the set of active software threads and the second scheduler dispatches one or more threads from the set of active software threads to a set of hardware threads for execution. In one embodiment, the first scheduler is implemented as part of the operating system of the computer system, and the second scheduler is implemented in hardware. | 11-08-2012 |
20120284721 | SYSTEMS AND METHOD FOR DYNAMICALLY THROTTLING TRANSACTIONAL WORKLOADS | 11-08-2012 |
20120284722 | METHOD FOR DYNAMICALLY THROTTLING TRANSACTIONAL WORKLOADS | 11-08-2012 |
20120284723 | TRANSACTIONAL UPDATING IN DYNAMIC DISTRIBUTED WORKLOADS - A workload manager is operable with a distributed transaction processor having a plurality of processing regions and comprises: a transaction initiator region for initiating a transaction; a transaction router component for routing an initiated transaction to one of the plurality of processing regions; an affinity controller component for restricting transaction routing operations to maintain affinities; the affinity controller component characterised in comprising a unit of work affinity component operable with a resource manager at the one of the plurality of processing regions to activate an affinity responsive to completion of a recoverable data operation at the one of the plurality of processing regions. | 11-08-2012 |
20120311588 | FAULT TOLERANT BATCH PROCESSING - Among other aspects disclosed are a method and system for processing a batch of input data in a fault tolerant manner. The method includes reading a batch of input data including a plurality of records from one or more data sources and passing the batch through a dataflow graph. The dataflow graph includes two or more nodes representing components connected by links representing flows of data between the components. At least one but fewer than all of the components includes a checkpoint process for an action performed for each of multiple units of work associated with one or more of the records. The checkpoint process includes opening a checkpoint buffer stored in non-volatile memory at the start of processing for the batch. | 12-06-2012 |
20130007750 | TRANSACTION AGGREGATION TO INCREASE TRANSACTION PROCESSING THROUGHPUT - Provided are techniques for increasing transaction processing throughput. A transaction item with a message identifier and a session identifier is obtained. The transaction item is added to the earliest aggregated transaction in a list of aggregated transactions in which no other transaction item has the same session identifier. A first aggregated transaction in the list of aggregated transactions that has met execution criteria is executed. In response to determining that the aggregated transaction is not committing, the aggregated transaction is broken up into multiple smaller aggregated transactions and a target size of each aggregated transaction is adjusted based on measurements of system throughput. | 01-03-2013 |
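The placement rule in this abstract — add the item to the earliest aggregate holding no item with the same session identifier — can be sketched as follows. The list-of-lists layout and tuple shape are illustrative assumptions.

```python
def add_to_aggregate(aggregates, item):
    """Place a (message_id, session_id) transaction item into the
    earliest aggregated transaction containing no item with the same
    session id, creating a new aggregate at the tail if none qualifies."""
    _msg_id, session_id = item
    for agg in aggregates:
        if all(s != session_id for _, s in agg):
            agg.append(item)
            return
    aggregates.append([item])
```

Two items sharing a session id thus always land in different aggregates, so their session ordering is preserved when the aggregates execute.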
20130024863 | SYSTEM AND METHOD FOR PROVIDING DYNAMIC TRANSACTION OPTIMIZATIONS - A system and method for providing dynamic transaction optimizations, such as dynamic XA transaction optimizations. In accordance with an embodiment, the system enables monitoring of transactional behavior in an application during runtime, in order to provide a feedback loop. The application/transaction information in the feedback loop can be analyzed by a transaction manager to determine an indication as to whether a particular optimization, such as an isSameRM optimization, will provide a benefit or not. The optimization can then be applied accordingly. In accordance with various embodiments, such determination can be made transparently, so that its enablement is not detectable to, e.g., an end-application, or a system administrator, even though the distribution and type of XA calls may be detected through system monitoring. The feature can be used to improve the performance of transaction processing in a transaction-based system. | 01-24-2013 |
20130055268 | AUTOMATED WEB TASK PROCEDURES BASED ON AN ANALYSIS OF ACTIONS IN WEB BROWSING HISTORY LOGS - Embodiments of the invention relate to generating automated web task procedures from an analysis of web history logs. One aspect of the invention concerns a method that comprises identifying sequences of related web actions from a web log, grouping each set of similar web actions into an action class, and mapping the sequences of related web actions into sequences of action classes. The method further clusters each group of similar sequences of action classes into a cluster, wherein relationships among the action classes in the cluster are represented by a state machine, and generates automated web task procedures from the state machine. | 02-28-2013 |
20130055269 | TRANSACTION CONCURRENT EXECUTION CONTROL SYSTEM, TRANSACTION CONCURRENT EXECUTION CONTROL METHOD AND PROGRAM - A transaction concurrent execution control system controls the concurrent execution of transactions. The transaction concurrent execution control system includes a transaction execution unit for executing the transaction, a back-off time determination unit for determining a waiting time until the transaction is re-executed when a commitment of the transaction has failed, and a transaction pooling unit for causing the transaction to stand by for re-execution until the waiting time has elapsed when the commitment of the transaction has failed. | 02-28-2013 |
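The back-off-and-pool behavior above can be sketched with a retry loop. The exponential, capped back-off policy and all parameter values are illustrative assumptions; the abstract requires only that a waiting time be determined on commit failure and that the transaction wait until it elapses.

```python
import time

def run_with_backoff(try_commit, max_attempts=5, base=0.05, cap=2.0):
    """Re-execute a transaction whose commit failed, waiting an
    exponentially growing (capped) back-off between attempts.
    `try_commit` returns True when the commit succeeds."""
    for attempt in range(max_attempts):
        if try_commit():
            return True
        # Pool the transaction: stand by until the waiting time elapses.
        time.sleep(min(cap, base * (2 ** attempt)))
    return False
```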
20130067478 | RESOURCE MANAGEMENT SYSTEM - Provided are: an information acquisition unit that periodically acquires usage state information of a resource for each load; a user terminal that creates permitted usage period data; a period setting unit that sets each load's permitted usage period based on the permitted usage period data; a determination unit that determines whether each load's resource usage is within its permitted usage period; and a display unit that distinctively displays whether the resource usage period is within the permitted usage period, based on the determination result of the determination unit. The user terminal creates single batch permitted usage period data. The period setting unit includes a batch setting unit that performs batch setting, whereby the batch permitted usage period is set as the permitted usage period of all loads. | 03-14-2013 |
20130074079 | SYSTEM AND METHOD FOR FLEXIBLE DATA TRANSFER - A method and system for flexibly transferring data from one or more data sources to one or more data destinations within an information network where each of the one or more data sources have data in a particular source format and each of the one or more data destinations have data in the same or another particular destination format using a parameter database that includes parameters to control the transfer of data, a scheduler that initiates the transfer of data, and a data loader in communications with the parameter database and scheduler that, upon initiation by the scheduler, extracts data from the one or more data sources, manipulates the extracted source data into one or more destination formats associated with the one or more data destinations, and inserts the data into one or more data destinations according to the parameters within the parameter database. | 03-21-2013 |
20130081025 | Adaptively Determining Response Time Distribution of Transactional Workloads - An adaptive mechanism is provided that learns the response time characteristics of a workload by measuring the response times of end user transactions, classifies response times into buckets, and dynamically adjusts the response time distribution as response time characteristics of the workload change. The adaptive mechanism maintains the actual distribution across changes and, thus, helps the end user to understand changes of workload behavior that take place over a longer period of time. The mechanism is stable enough to suppress spikes and returns a constant view of workload behavior, which is required for long term, performance analysis and capacity planning. The mechanism distinguishes between an initial learning phase of establishing the distribution and one or multiple reaction periods. The reaction periods can be for example a fast reaction period for strong fluctuations of the workload behavior and a slow reaction period for small deviations. | 03-28-2013 |
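The bucket classification at the core of this mechanism can be sketched as follows. A fixed boundary list stands in for the patent's adaptive adjustment, which would move these bucket edges as the workload's response-time characteristics shift; names and units are illustrative.

```python
def bucket_distribution(response_times, boundaries):
    """Classify response times into buckets delimited by `boundaries`
    (ascending upper edges, in seconds) and return a count per bucket,
    with a final overflow bucket for times above the last edge."""
    counts = [0] * (len(boundaries) + 1)
    for t in response_times:
        for i, edge in enumerate(boundaries):
            if t <= edge:
                counts[i] += 1
                break
        else:
            counts[-1] += 1      # slower than every boundary
    return counts
```

An adaptive version would periodically recompute `boundaries` from recent measurements, smoothing with separate fast and slow reaction periods as the abstract describes.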
20130086588 | System and Method of Using Transaction IDs for Managing Reservations of Compute Resources Within a Compute Environment - A system and method for reserving resources within a compute environment such as a cluster or grid are disclosed. The method aspect of the disclosure includes receiving a request for resource availability in a compute environment from a requestor, associating a transaction identification with the request and resources within the compute environment that can meet the request and presenting the transaction identification to the requestor. The transaction ID can also be associated with a time frame in which resources are available and can also be associated with modifications to the resources and supersets of resources that could be drawn upon to meet the request. The transaction ID can also be associated with metrics that identify how well the resources fit the request and with modifications that can make the resources better match the workload which would be submitted under the request. | 04-04-2013 |
20130104131 | LOAD CONTROL DEVICE - A load control device | 04-25-2013 |
20130132960 | USB REDIRECTION FOR READ TRANSACTIONS - Methods and systems for conducting a transaction between a virtual USB device driver and a USB device are provided. A virtual USB manager of a hypervisor receives one or more data packets from a client. The virtual USB manager stores the one or more data packets in a buffer. The virtual USB manager dequeues a data packet from the buffer. The virtual USB manager transmits the data packet to the virtual USB device driver for processing. | 05-23-2013 |
20130145371 | BATCH PROCESSING OF BUSINESS OBJECTS - A service consumer may define batch jobs (batch containers) in which business object methods can be invoked on business object instances. The invocations may be recorded. The service consumer may trigger batch execution to cause the business object instances to be modified in accordance with the recorded invocations. The batch job can be executed as a single transaction in a single process. The batch job can be partitioned into multiple transactions and processed by respective multiple processes. | 06-06-2013 |
20130179888 | Application Load Balancing Utility - Methods, computer readable media, and apparatuses for balancing the number of transaction requests with the number of applications running and processing information for those transaction requests are presented. According to one or more aspects, a message queue receives one or more messages, each including a transaction request, from a computing device. The message queue sends a trigger message to a trigger queue. The load balancing utility monitors the number of messages in the message queue, determines the number of additional transaction requests to process, and starts additional applications to process those requests. The applications process the transaction requests and send a response for each of the transaction requests to the message queue. The message queue sends the response back to the computing device. | 07-11-2013 |
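The scaling decision in this utility can be sketched as a sizing function. The capacity model (each application instance handles `per_app_capacity` queued requests) and the instance cap are assumptions, since the abstract leaves the sizing policy unspecified.

```python
import math

def apps_to_start(queue_depth, running_apps, per_app_capacity, max_apps):
    """Number of additional application instances to start for the
    current message-queue depth, never exceeding max_apps in total."""
    needed = math.ceil(queue_depth / per_app_capacity) if queue_depth else 0
    return max(0, min(needed, max_apps) - running_apps)
```

With 95 queued messages, 2 running instances, a capacity of 10 per instance, and a cap of 8 instances, the utility would start 6 more.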
20130179889 | MANAGING JOB EXECUTION - A method for managing jobs scheduled for execution on a target system in which some jobs may spawn additional jobs scheduled for execution on the target system including intercepting jobs scheduled for execution in the target system, determining whether there is resource sufficiency in the target system for executing jobs, responsive to an affirmative determination of resource sufficiency, releasing previously intercepted jobs for execution in the target system, computing a limit of a number of jobs which can be concurrently scheduled by an external system to the target system, and transmitting the computed limit to the external system. | 07-11-2013 |
20130198749 | SPECULATIVE THREAD EXECUTION WITH HARDWARE TRANSACTIONAL MEMORY - In an embodiment, if a self thread has more than one conflict, a transaction of the self thread is aborted and restarted. If the self thread has only one conflict and an enemy thread of the self thread has more than one conflict, the transaction of the self thread is committed. If the self thread only conflicts with the enemy thread and the enemy thread only conflicts with the self thread and the self thread has a key that has a higher priority than a key of the enemy thread, the transaction of the self thread is committed. If the self thread only conflicts with the enemy thread, the enemy thread only conflicts with the self thread, and the self thread has a key that has a lower priority than the key of the enemy thread, the transaction of the self thread is aborted. | 08-01-2013 |
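The abstract above is an explicit decision table, which can be transcribed directly. The convention that a lower key value means higher priority is an assumption; conflict counts and key types are illustrative.

```python
def resolve(self_conflicts, enemy_conflicts, self_key, enemy_key):
    """Commit/abort decision for the self thread's transaction, per the
    rules in the abstract. Assumes a one-on-one conflict when neither
    thread has more than one conflict."""
    if self_conflicts > 1:
        return "abort-and-restart"       # more than one conflict: retry
    if enemy_conflicts > 1:
        return "commit"                  # enemy is the multi-conflict party
    # Mutual single conflict: the priority key breaks the tie
    # (lower key value = higher priority, by assumption).
    return "commit" if self_key < enemy_key else "abort"
```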
20130219395 | BATCH SCHEDULER MANAGEMENT OF TASKS - A request from a client to perform a task is received. The client has a predetermined limit of compute resources. The task is dispatched from a batch scheduler to a compute node as a non-speculative task if a quantity of compute resources is available at the compute node to process the task, and the quantity of compute resources in addition to a total quantity of compute resources being utilized by the client is less than or equal to the predetermined limit, such that the non-speculative task is processed without being preempted by an additional task requested by an additional client. The task is dispatched, from the batch scheduler to the compute node, as a speculative task if the quantity of compute resources is available to process the task, and the quantity of compute resources in addition to the total quantity of compute resources is greater than the predetermined limit. | 08-22-2013 |
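The dispatch rule above reduces to a three-way decision, sketched here with resources modeled as a single count (a simplification of the abstract's resource model; names are illustrative):

```python
def dispatch_mode(needed, available, client_usage, client_limit):
    """Batch-scheduler decision: non-speculative when the node has
    capacity and the client stays within its predetermined limit;
    speculative (preemptible) when capacity exists but the limit would
    be exceeded; otherwise keep the task queued."""
    if needed > available:
        return "queue"
    if client_usage + needed <= client_limit:
        return "non-speculative"
    return "speculative"
```

A speculative task still runs immediately, but unlike a non-speculative one it may be preempted by a later request from another client.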
20130219396 | TRANSACTION PROCESSING SYSTEM AND METHOD - According to one example of the present invention, there is provided a transaction processing system. The transaction processing system comprises a transaction analyzer for determining characteristics of a received transaction, a processing agent selector for selecting, based on the determined characteristics, a processing agent for processing the received transaction, and a dispatcher for dispatching the received transaction and the selected processing agent to a processing resource to cause the transaction to be processed in accordance with the selected processing agent on at least one of the computing devices. | 08-22-2013 |
20130247050 | BATCH PROCESSING SYSTEM - The second computer measures the performance of the processing that records the execution status of a batch job on a storage device, selects a recording method to be used from among a plurality of recording methods according to the measured performance, and notifies the first computer of the result. The first computer records the execution status of the batch job executed on itself on the storage device, using the recording method notified by the second computer. | 09-19-2013 |
20130254771 | SYSTEMS AND METHODS FOR CONTINUAL, SELF-ADJUSTING BATCH PROCESSING OF A DATA STREAM - Methods, systems and apparatus are described herein that include processing a data stream as a sequence of batch jobs during collection of data in the data stream. Processing of successive batch jobs in the sequence includes creating a particular batch job upon completion of processing of a preceding batch job in the sequence. The particular batch job has a batch size that depends upon an amount of data in the data stream that has been collected since creation of the preceding batch job in the sequence, such that the batch size of the particular batch job self-adjusts to data rate changes in the data stream. The particular batch job is then processed to produce resulting data, where processing efficiency and processing time for the particular batch increase with the batch size. | 09-26-2013 |
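The self-adjusting batching described above can be sketched as follows: each batch covers everything collected since the previous batch was created, so batch size automatically tracks the data rate. Representing the collector as a list of cumulative item counts is an illustrative abstraction.

```python
def run_stream_batches(collected_counts, process):
    """Process a stream as successive batch jobs. `collected_counts[i]`
    is the cumulative number of items collected when batch i is created;
    each batch's size is the amount collected since the previous batch."""
    sizes, last = [], 0
    for total in collected_counts:
        batch = total - last
        if batch:
            process(batch)       # run the batch job on the new data
            sizes.append(batch)
        last = total
    return sizes
```

If collection speeds up while a batch is being processed, the next batch is simply larger; if it slows, the next batch shrinks, with no tuning parameter.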
20130275984 | MULTIPROCESSING TRANSACTION RECOVERY MANAGER - A multiprocessing transaction recovery manager, operable with a transactional application manager and a resource manager, comprises a threadsafety indicator for receiving and storing positive and non-positive threadsafety data of at least one transactional component managed by one of the transactional application manager and the resource manager; a commit protocol component for performing commit processing for the at least one transactional component; and a thread selector responsive to positive threadsafety data for selecting a single thread for the commit processing to be performed by the commit protocol component. The thread selector is further operable to select plural threads for the commit processing to be performed by the commit protocol component responsive to non-positive threadsafety data. | 10-17-2013 |
20130275985 | METHOD, APPARATUS, AND SYSTEM TO HANDLE TRANSACTIONS RECEIVED AFTER A CONFIGURATION CHANGE REQUEST - Methods, apparatuses, and systems for handling transactions received after a configuration request, the method, for example, comprising: receiving a configuration change request by a transaction-handling logic block; performing a configuration change by the transaction-handling logic block in response to the configuration change request, wherein the logic block is to handle transactions received prior to receipt of the configuration change request differently than transactions received after receipt of the configuration change request; receiving, by the transaction-handling logic block, a first transaction before receiving the configuration change request; receiving, by the transaction-handling logic block, a second transaction after receiving the configuration change request and before the configuration change is complete; differentiating the first transaction from the second transaction based on the order in which the first and second transactions were received relative to receipt of the configuration change request; and handling the first and second transactions. | 10-17-2013 |
20130283276 | METHOD AND SYSTEM FOR MINIMAL SET LOCKING WHEN BATCHING RESOURCE REQUESTS IN A PORTABLE COMPUTING DEVICE - Requests of a portable computing device (PCD) are examined to determine whether they are part of a transaction involving a plurality of resources. Next, each resource that is part of a request involving multiple resources is identified. As each resource is identified, a framework manager determines whether the resource has completed processing the request directed at it. If the resource has returned a value indicating that it has completed the request, the framework manager allows the resource to return to an unlocked state while other requests in the transaction are being processed. If the resource has not completed processing and has deferred some of the processing to the end of the transaction, the resource is added to a deferred unlock list. It is also determined whether the resource is dependent on another resource in the current request path; if it is, the other resource is placed on the deferred unlock list as well. | 10-24-2013 |
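The deferred-unlock bookkeeping in 20130283276 can be modeled in a few lines. All names here are hypothetical stand-ins for the framework manager's data structures:

```python
def process_transaction(requests, completes_now, depends_on):
    """Minimal-set locking sketch: each resource in `requests` is locked,
    handled, then either unlocked immediately (if it reports completion)
    or placed on a deferred unlock list together with any resource it
    depends on in the current request path.
    completes_now: set of resources that finish their work inline.
    depends_on:    dict mapping a resource to the resource it depends on."""
    locked = set(requests)
    deferred = []
    for res in requests:
        if res in completes_now:
            locked.discard(res)        # unlock early, mid-transaction
        else:
            deferred.append(res)       # work deferred to end of transaction
            dep = depends_on.get(res)
            if dep is not None and dep not in deferred:
                deferred.append(dep)   # its dependency stays locked too
                locked.add(dep)
    # end of transaction: deferred work completes, then everything unlocks
    for res in deferred:
        locked.discard(res)
    return deferred, locked
```

A resource that completes inline is released while the transaction is still running; everything else, plus its dependencies, is held until the end.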
20130290965 | Accessing Time Stamps During Transactions in a Processor - The described embodiments include a processor that handles operations during transactions. In these embodiments, the processor comprises one or more cores. During operation, at least one core is configured to monitor the acquisition of time stamps during transactions. The at least one core is further configured to prevent the acquisition of time stamps that meet predetermined conditions. | 10-31-2013 |
20130326522 | METHOD FOR HANDLING ACCESS TRANSACTIONS AND RELATED SYSTEM - In an embodiment, access transactions of at least one module of a system such as a System-on-Chip (SoC) to one of a plurality of target modules, such as memories, are managed by assigning transaction identifiers subjected to a consistency check. If an input identifier to the check has already been issued for the same given target module, the same input identifier is assigned to the related identifier/target module pair as a consistent output identifier. If, on the contrary, the input identifier has not yet been issued, or has been issued for a target module different from the one considered, a new identifier, different from the input identifier, is assigned to the pair as a consistent output identifier. | 12-05-2013 |
20130339960 | TRANSACTION BEGIN/END INSTRUCTIONS - A TRANSACTION BEGIN instruction and a TRANSACTION END instruction are provided. The TRANSACTION BEGIN instruction causes either a constrained or nonconstrained transaction to be initiated, depending on a field of the instruction. The TRANSACTION END instruction ends the transaction started by the TRANSACTION BEGIN instruction. | 12-19-2013 |
20130339961 | TRANSACTIONAL PROCESSING - A transaction is initiated via a transaction begin instruction. During execution, the transaction may abort. If the transaction aborts, a determination is made as to the type of transaction: based on the transaction being a first type, execution resumes at the transaction begin instruction; based on it being a second type, execution resumes at an instruction following the transaction begin instruction. Regardless of transaction type, resuming execution includes restoring one or more registers specified in the transaction begin instruction and discarding transactional stores. For one type of transaction, the nonconstrained transaction, resuming also includes storing information in a transaction diagnostic block. | 12-19-2013 |
20130339962 | TRANSACTION ABORT PROCESSING - A transaction executing within a computing environment ends prior to completion; i.e., execution is aborted. Pursuant to aborting execution, a hardware transactional execution CPU mode is exited, and one or more of the following is performed: restoring selected registers; committing nontransactional stores on abort; branching to a transaction abort program status word specified location; setting a condition code and/or abort code; and/or preserving diagnostic information. | 12-19-2013 |
20140007110 | NORMALIZED INTERFACE FOR TRANSACTION PROCESSING SYSTEMS | 01-02-2014 |
20140019977 | SYSTEM AND METHOD FOR ECONOMICAL MIGRATION OF LEGACY APPLICATIONS FROM MAINFRAME AND DISTRIBUTED PLATFORMS - An economical system and method for migrating legacy applications running on proprietary mainframe computer systems and distributed networks to commodity hardware-based software frameworks, by offloading batch processing from the legacy systems and returning the resultant data to the original legacy system to be consumed by the unaltered applications. An open-source tool is used to transfer the software and rewrite it for a faster and more economical hardware system, while keeping the offloaded processing seamlessly integrated with the existing batch processing flow. | 01-16-2014 |
20140019978 | System and Method of Providing A Fixed Time Offset Based Dedicated Co-Allocation of a Common Resource Set - Disclosed are a system, method and computer-readable medium relating to managing resources within a compute environment having a group of nodes or computing devices. The method comprises, for each node in the compute environment: traversing a list of jobs having a fixed time relationship, wherein for each job in the list, the following steps occur: obtaining a range list of available timeframes for each job, converting each availability timeframe to a start range, shifting the resulting start range in time by a job offset, for a first job, copying the resulting start range into a node range, and for all subsequent jobs, logically AND'ing the start range with the node range. Next, the method comprises logically OR'ing the node range with a global range, generating a list of acceptable resources on which to start and the timeframe at which to start and creating reservations according to the list of acceptable resources for the resources in the group of computing devices and associated job offsets. | 01-16-2014 |
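The AND/OR range arithmetic described in 20140019978 can be sketched with start-time sets. The data model (free windows as `(start, end)` tuples, integer time steps) is an assumption for illustration:

```python
def co_allocation_starts(node_avail, job_offsets, duration):
    """Fixed-time-offset co-allocation sketch: node_avail maps a node to
    its list of (start, end) free windows; each job j must run for
    `duration` beginning at T + job_offsets[j].  Per node, each window is
    converted to a feasible start-time set, shifted by the job offset,
    and AND'ed across jobs; the per-node results are then OR'ed into a
    global set of acceptable start times T."""
    global_range = set()
    for node, windows in node_avail.items():
        node_range = None
        for off in job_offsets:
            # start range: times t with [t+off, t+off+duration] inside a window
            starts = {t for (s, e) in windows
                      for t in range(s - off, e - off - duration + 1)}
            node_range = starts if node_range is None else node_range & starts
        global_range |= node_range or set()
    return sorted(global_range)
```

The intersection enforces that every offset job fits on the node; the union collects all nodes that could host the co-allocation.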
20140019979 | AUTOMATED WEB TASK PROCEDURES BASED ON AN ANALYSIS OF ACTIONS IN WEB BROWSING HISTORY LOGS - Embodiments of the invention relate to generating automated web task procedures from an analysis of web history logs. One aspect of the invention concerns a method that comprises identifying sequences of related web actions from a web log, grouping each set of similar web actions into an action class, and mapping the sequences of related web actions into sequences of action classes. The method further clusters each group of similar sequences of action classes into a cluster, wherein relationships among the action classes in the cluster are represented by a state machine, and generates automated web task procedures from the state machine. | 01-16-2014 |
20140033209 | Handling of Barrier Commands for Computing Systems - A computing system for handling barrier commands includes a memory, an interface, and a processor. The memory is configured to store a pre-barrier spreading range that identifies a target computing system associated with a barrier command. The interface is coupled to the memory and is configured to send a pre-barrier computing probe to the target computing system identified in the pre-barrier spreading range and receive a barrier completion notification messages from the target computing system. The pre-barrier computing probe is configured to instruct the target computing system to monitor a status of a transaction that needs to be executed for the barrier command to be completed. The processor is coupled to the interface and is configured to determine a status of the barrier command based on the received barrier completion notification messages. | 01-30-2014 |
20140033210 | Techniques for Attesting Data Processing Systems - A technique for attesting a plurality of data processing systems includes generating a logical grouping for a data processing system. The logical grouping is associated with a rule that describes a condition that must be met in order for the data processing system to be considered trusted. A list of one or more children associated with the logical grouping is retrieved. The one or more children are attested to determine whether each of the one or more children is trusted. In response to the attesting, the rule is applied to determine whether the condition has been met in order for the data processing system to be considered trusted. A plurality of logical groupings is associated to determine whether an associated plurality of data processing systems can be considered trusted. | 01-30-2014 |
20140040898 | DISTRIBUTED TRANSACTION PROCESSING - A system includes an initiator and processing nodes. The initiator distributes portions of a transaction among the processing nodes. Each processing node has at least one downstream neighbor to which the processing node sends commit messages. The commit messages include a commit status of the processing node. The downstream neighbor is also a processing node. | 02-06-2014 |
20140053159 | FAULT TOLERANT BATCH PROCESSING - Among other aspects disclosed are a method and system for processing a batch of input data in a fault tolerant manner. The method includes reading a batch of input data including a plurality of records from one or more data sources and passing the batch through a dataflow graph. The dataflow graph includes two or more nodes representing components connected by links representing flows of data between the components. At least one but fewer than all of the components includes a checkpoint process for an action performed for each of multiple units of work associated with one or more of the records. The checkpoint process includes opening a checkpoint buffer stored in non-volatile memory at the start of processing for the batch. | 02-20-2014 |
20140053160 | METHODS AND SYSTEMS FOR BATCH PROCESSING IN AN ON-DEMAND SERVICE ENVIRONMENT - In accordance with embodiments disclosed herein, there are provided mechanisms and methods for batch processing in an on-demand service environment. For example, in one embodiment, mechanisms include receiving a processing request for a multi-tenant database, in which the processing request specifies processing logic and a processing target group within the multi-tenant database. Such an embodiment further includes dividing or chunking the processing target group into a plurality of processing target sub-groups, queuing the processing request with a batch processing queue for the multi-tenant database among a plurality of previously queued processing requests, and releasing each of the plurality of processing target sub-groups for processing in the multi-tenant database via the processing logic at one or more times specified by the batch processing queue. | 02-20-2014 |
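The chunk-and-queue flow in 20140053160 reduces to a simple pattern. The queue and record model below are illustrative stand-ins for the multi-tenant batch queue, not the actual implementation:

```python
from collections import deque

def enqueue_batch_request(queue, target_ids, logic, chunk_size):
    """Divide a processing target group into sub-groups of `chunk_size`
    and append each as a (logic, sub_group) work item to the batch queue."""
    for i in range(0, len(target_ids), chunk_size):
        queue.append((logic, target_ids[i:i + chunk_size]))

def drain(queue):
    """Release each queued sub-group for processing, in queue order."""
    results = []
    while queue:
        logic, group = queue.popleft()
        results.extend(logic(rec) for rec in group)
    return results
```

Sub-groups from one request interleave with previously queued requests in a real system; here they simply run in FIFO order.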
20140068617 | SYSTEM AND METHOD FOR RECEIVING ADVICE BEFORE SUBMITTING BATCH JOBS - Described herein are systems and methods for receiving a recommendation before submitting a work request. As described herein, an indication of a work request, a recommendation request and a set of application server properties are received at a recommendation engine. The recommendation engine processes the recommendation request, and based on the set of application server properties, determines a recommendation on whether to submit the work request and/or whether to schedule the work request for a later time. Thereafter, the recommendation engine generates a recommendation notification that indicates whether to submit/schedule the work request to provide for a proactive approach to submitting the work request. | 03-06-2014 |
20140068618 | AUTOMATIC BATCHING OF GUI-BASED TASKS - Described herein are techniques for automatically batching GUI-based (Graphical User Interface) tasks. The described techniques include automatically determining whether a user is performing batchable tasks in a GUI-based environment. Once detected, the described techniques include predicting the next tasks of a batch based upon those detected batchable tasks. With the described techniques, the user may be asked to verify and/or correct the predicted next tasks. Furthermore, the described techniques may include performing a batch and doing so without user interaction. | 03-06-2014 |
20140068619 | Scheduling in a multicore architecture - This invention relates to scheduling threads in a multicore processor. Executable transactions may be scheduled using at least one distribution queue, which lists executable transactions in order of eligibility for execution, and multilevel scheduler which comprises a plurality of linked individual executable transaction schedulers. Each of these includes a scheduling algorithm for determining the most eligible executable transaction for execution. The most eligible executable transaction is outputted from the multilevel scheduler to the at least one distribution queue. | 03-06-2014 |
20140075441 | METHOD AND APPARATUS FOR RECORDING AND PROFILING TRANSACTION FAILURE SOURCE ADDRESSES IN HARDWARE TRANSACTIONAL MEMORIES - A processor core includes a transactional memory, a transaction failure instruction address register (TFIAR), and a transaction failure data address register (TFDAR). The transactional memory stores information for a plurality of transactions executed by the processor core. The processor core retrieves the instruction and data addresses associated with an aborted transaction from the TFIAR and TFDAR, respectively, and stores them in a profiling table. The processor core then generates profiling information based on the instruction and data addresses associated with the aborted transaction. | 03-13-2014 |
20140075442 | BATCH SCHEDULING - There is provided a method to schedule execution of a plurality of batch jobs by a computer system. The method includes: reading one or more constraints that constrain the execution of the plurality of batch jobs by the computer system and a current load on the computer system; grouping the plurality of batch jobs into at least one run frequency that includes at least one batch job; setting the at least one run frequency to a first run frequency; computing a load generated by each batch job in the first run frequency on the computer system based on each batch job's start time; and determining an optimized start time for each batch job in the first run frequency that meets the one or more constraints and that distributes each batch job's load on the computer system using each batch job's computed load and the current load. | 03-13-2014 |
20140101660 | Recognition Techniques to Enhance Automation In a Computing Environment - Systems and methods for detecting end of a transaction in a computing environment are provided. The method comprises determining a target area in a graphical user environment displayed on a display screen, wherein a change is expected to occur when end of a transaction is reached; masking the target area at least partially to remove content included in the target area that is present before or after the transaction was initiated; monitoring the target area for change in content; and detecting the end of the transaction when the content of the target area has changed. | 04-10-2014 |
20140115589 | SYSTEM AND METHOD FOR BATCH EVALUATION PROGRAMS - A batching module inspects call stacks within a stack evaluator to identify current expressions that can be evaluated in batch with other expressions. If such expressions are identified, the corresponding stacks are blocked from further processing and a batch processing request for processing the expressions is transmitted to the application server. The application server processes the expressions in batch and generates a value for each of the expressions. The blocked stacks are then populated with the values for the expressions. | 04-24-2014 |
20140115590 | METHOD AND APPARATUS FOR CONDITIONAL TRANSACTION ABORT AND PRECISE ABORT HANDLING - A method for executing a transaction in a data processing system includes initiating the transaction by a transactional-memory system that is part of a memory component of the data processing system. The transaction includes instructions for comparing multiple parameters, and the transactional-memory system aborts the transaction based upon a comparison of the multiple parameters. | 04-24-2014 |
20140123144 | WORK-QUEUE-BASED GRAPHICS PROCESSING UNIT WORK CREATION - One embodiment of the present invention enables threads executing on a processor to locally generate and execute work within that processor by way of work queues and command blocks. A device driver, as an initialization procedure for establishing memory objects that enable the threads to locally generate and execute work, generates a work queue, and sets a GP_GET pointer of the work queue to the first entry in the work queue. The device driver also, during the initialization procedure, sets a GP_PUT pointer of the work queue to the last free entry included in the work queue, thereby establishing a range of entries in the work queue into which new work generated by the threads can be loaded and subsequently executed by the processor. The threads then populate command blocks with generated work and point entries in the work queue to the command blocks to effect processor execution of the work stored in the command blocks. | 05-01-2014 |
20140137120 | MANAGING TRANSACTIONS WITHIN AN APPLICATION SERVER - A system and method for managing transactions in an application server is described. In some example embodiments, the system registers to receive notifications from a timeout manager associated with a transaction (e.g., a database query). If the transaction becomes locked or runs longer than anticipated, the system receives a notification indicating a timeout event. The system, upon receiving the event notification, may then cancel the transaction or perform other actions to notify an application that initiated the transaction, such as via a newly created thread. | 05-15-2014 |
20140149987 | Batch Jobs Using Positional Scheduling Policies of Mobile Devices - Mechanisms are provided for executing a batch job associated with a mobile device. A batch job data structure is retrieved that defines a batch job having a plurality of operations to be executed and a scheduling rule having one or more criteria is retrieved. The one or more criteria comprises at least one of a geographical position criteria or a geographical movement criteria for defining a position or path of motion of the mobile device required for initiating execution of the batch job. A determination is made as to whether one of current or predicted future position or path of motion of the mobile device satisfies the criteria of the scheduling rule. In response to the current or predicted future position or path of motion of the mobile device satisfying the criteria of the scheduling rule, execution of the batch job is initiated. | 05-29-2014 |
20140157275 | DISTRIBUTED COMPUTING METHOD AND DISTRIBUTED COMPUTING SYSTEM - A distributed computing method and distributed computing system are provided. Said distributed computing method includes: distributedly computing an input task stream; reducing the computation results of said distributed computation; and storing the reduced computation results in reduction buffers. Said distributed computing system includes distributed computing devices used for the distributed computation, multiple reduction units used for reducing the computation results of said distributed computation, one or more reduction buffers used for storing reduced computation results, and a reduction control device used for controlling the reduction from said computation results to said reduction buffers and the access to the reduction buffers. | 06-05-2014 |
20140157276 | DISTRIBUTED TRANSACTION ROUTING - Embodiments relate to routing a distributed transaction. An aspect includes receiving and storing distributed transaction initiation information from a transaction manager. Another aspect includes sending an okay message to the transaction manager. Another aspect includes receiving a distributed transaction having at least one function type from the transaction manager. Another aspect includes determining a resource manager for the distributed transaction based on the function type of the distributed transaction. Another aspect includes sending, based on the resource manager being ready, the distributed transaction to the determined resource manager. | 06-05-2014 |
20140181821 | Methods and Systems for Enhancing Hardware Transactions Using Hardware Transactions in Software Slow-Path - Hybrid transaction memory systems and accompanying methods. A transaction to be executed is received, and an initial attempt is made to execute the transaction in a hardware path. Upon a failure to successfully execute the transaction in the hardware path, an attempt is made to execute the transaction in a hardware-software path. The hardware-software path includes a software path and at least one hardware transaction. | 06-26-2014 |
20140189693 | ADAPTIVE HANDLING OF PRIORITY INVERSIONS USING TRANSACTIONS - An operating system of a data processing system receives a request from a first process to acquire an exclusive lock for accessing a resource of the data processing system. A second priority of a second process is increased to reduce total execution time. The second process is currently in possession of the exclusive lock for performing a transactional operation with the resource. The second priority was lower than a first priority of the first process. The operating system notifies the second process to indicate that another process is waiting for the exclusive lock to allow the second process to complete or roll back the transactional operation and to release the exclusive lock thereafter. | 07-03-2014 |
20140201747 | CROSS PLATFORM WORKFLOW MANAGEMENT - A method and system for real-time monitoring of processes that uses a Java monitoring agent to obtain job data from jobs running on different, non-compatible platforms, then saves and reports the job data and makes it available at any time for viewing by a system administrator on a single display monitor. | 07-17-2014 |
20140208324 | RATE OF OPERATION PROGRESS REPORTING - According to one aspect of the present disclosure, a method and technique for rate of operation progress reporting is disclosed. The method includes: responsive to completion by an application of one or more batch operations, storing an operation count corresponding to each completed batch operation; and, responsive to being polled by a monitoring module: identifying a time reporting window for the batch operations; and reporting a rate of progress meter value for the batch operations to the monitoring module based on the operation counts and the time reporting window. | 07-24-2014 |
20140282556 | METHODS AND SYSTEMS FOR BATCH PROCESSING IN AN ON-DEMAND SERVICE ENVIRONMENT - In accordance with embodiments disclosed herein, there are provided mechanisms and methods for batch processing in an on-demand service environment. For example, in one embodiment, mechanisms include receiving a processing request for a multi-tenant database, in which the processing request specifies processing logic and a processing target group within the multi-tenant database. Such an embodiment further includes dividing or chunking the processing target group into a plurality of processing target sub-groups, queuing the processing request with a batch processing queue for the multi-tenant database among a plurality of previously queued processing requests, and releasing each of the plurality of processing target sub-groups for processing in the multi-tenant database via the processing logic at one or more times specified by the batch processing queue. | 09-18-2014 |
20140298342 | TRANSACTIONAL LOCK ELISION WITH DELAYED LOCK CHECKING - Avoiding data conflicts includes initiating a transactional lock elision transaction containing a critical section, executing the transactional lock elision transaction including the critical section, and checking a status of a lock prior to a commit point in the transactional lock elision transaction executing, wherein the checking the status occurs after processing the critical section. A determination of whether the status of the lock checked is free is made and, responsive to a determination the lock checked is free, a result of the transactional lock elision transaction is committed. | 10-02-2014 |
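The control flow of delayed lock checking in 20140298342 can be sketched without real hardware transactional memory. The three callables below are stand-ins for HTM primitives, which Python does not have; this only illustrates the ordering of the check relative to the critical section:

```python
class AbortTransaction(Exception):
    """Raised when the elided lock turns out to be held."""

def lock_elision_execute(critical_section, lock_is_free, commit):
    """Transactional lock elision with *delayed* lock checking: run the
    critical section speculatively first, and test the lock only just
    before the commit point rather than at transaction start."""
    result = critical_section()   # speculate without reading the lock
    if not lock_is_free():        # delayed check, after the critical section
        raise AbortTransaction("lock held: abort, fall back to locking path")
    commit(result)                # lock was free: commit speculative state
    return result
```

Checking the lock late keeps it out of the transaction's read set for most of the execution, which is the conflict-avoidance point of the claim.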
20140317626 | PROCESSOR FOR BATCH THREAD PROCESSING, BATCH THREAD PROCESSING METHOD USING THE SAME, AND CODE GENERATION APPARATUS FOR BATCH THREAD PROCESSING - A processor for batch thread processing includes a central register file, and one or more function unit batches each including two or more function units and one or more ports to access the central register file. The function units of the function unit batches execute an instruction batch including one or more instructions to sequentially execute the one or more instructions in the instruction batch. | 10-23-2014 |
20140344813 | SCHEDULING HOMOGENEOUS AND HETEROGENEOUS WORKLOADS WITH RUNTIME ELASTICITY IN A PARALLEL PROCESSING ENVIRONMENT - Systems and methods are provided for scheduling homogeneous workloads including batch jobs, and heterogeneous workloads including batch and dedicated jobs, with run-time elasticity wherein resource requirements for a given job can change during run-time execution of the job. | 11-20-2014 |
20140344814 | SCHEDULING HOMOGENEOUS AND HETEROGENEOUS WORKLOADS WITH RUNTIME ELASTICITY IN A PARALLEL PROCESSING ENVIRONMENT - Systems and methods are provided for scheduling homogeneous workloads including batch jobs, and heterogeneous workloads including batch and dedicated jobs, with run-time elasticity wherein resource requirements for a given job can change during run-time execution of the job. | 11-20-2014 |
20140344815 | CONTEXT SWITCHING MECHANISM FOR A PROCESSING CORE HAVING A GENERAL PURPOSE CPU CORE AND A TIGHTLY COUPLED ACCELERATOR - An apparatus is described having multiple cores, each core having: a) an accelerator; and, b) a general purpose CPU coupled to the accelerator. The general purpose CPU has functional unit logic circuitry to execute an instruction that returns an amount of storage space to store context information of the accelerator. | 11-20-2014 |
20140359626 | PARALLEL METHOD FOR AGGLOMERATIVE CLUSTERING OF NON-STATIONARY DATA - The disclosure is directed to clustering a stream of data points. An aspect receives the stream of data points, determines a plurality of cluster centroids, divides the plurality of cluster centroids among a plurality of threads and/or processors, assigns a portion of the stream of data points to each of the plurality of threads and/or processors, and combines a plurality of clusters generated by the plurality of threads and/or processors to generate a global universe of clusters. An aspect assigns a portion of the stream of data points to each of a plurality of threads and/or processors, wherein each of the plurality of threads and/or processors determines one or more cluster centroids and generates one or more clusters around the one or more cluster centroids, and combines the one or more clusters from each of the plurality of threads and/or processors to generate a global universe of clusters. | 12-04-2014 |
20140359627 | RECOVERING STEP AND BATCH-BASED PROCESSES - A method of recovering batch-based processes may include providing an interface for receiving process recoverability information. The recoverability information may include (i) information describing the mutual exclusivity of data affected by a process, (ii) information describing sub-processes associated with the process, and/or (iii) information describing scope cleanup procedures associated with the process. The method may also include receiving the recoverability information through the interface, and receiving an indication that the process experienced an error while being executed on a client system. The method may additionally include providing the process recoverability information to make a recoverability determination for the process. | 12-04-2014 |
20140373016 | SYSTEM FOR PARTITIONING BATCH PROCESSES - A system for processing a batch job comprises a processor and a memory. The processor is configured to receive a job name for a job submitted to execute, to receive one or more job parameters, and to determine one or more nodes to run the job. The processor is configured to determine one or more steps, where for each step: the step is executed on a node using a state of data associated with the start state of the step; and upon completion of executing the step, a result is stored to durable storage. The durable storage stores the states of data associated with the start state and the completion state of the step, and these are accessible by other execution processes as associated with either the start state or the completion state of the step. The memory of the system is coupled to the processor and configured to provide the processor with instructions. | 12-18-2014 |
20140373017 | SOFTWARE BUS - The present invention relates to the field of methods of communication between software modules, and more particularly to software buses. There is described a software bus which allows communication between software modules. This communication occurs within a machine and between machines, and operates interchangeably for a software module whether it is a process, a thread or a simple task. The communication relies on mechanisms adapted to the multitask level at which the sender and receiver software modules operate. It is based on a hierarchical architecture and on phases of discovery and registration of the various software modules that communicate via the bus. | 12-18-2014 |
20140373018 | Dynamically Adjusting a Log Level of a Transaction - A method dynamically adjusts a log level of a transaction. The method includes: buffering the most detailed logs of a transaction having highest log level into a memory; checking if all dependency-defined transactions within a dependency list/tree for the transaction are completed; and, in response to the completion of all dependency-defined transactions within the dependency list/tree for the transaction, obtaining a log filter level for the transaction in association with the transaction results (success/failure) of dependency-defined transactions, wherein the log filter level is a new log level for the transaction. | 12-18-2014 |
20150020075 | ENTITLEMENT VECTOR FOR MANAGING RESOURCE ALLOCATION - An embodiment or embodiments of an information handling apparatus can use an entitlement vector to simultaneously manage and activate entitlement of objects and processes to various resources independently from one another. An information handling apparatus can comprise an entitlement vector operable to specify resources used by at least one object of a plurality of objects. The information handling apparatus can further comprise a scheduler operable to schedule a plurality of threads based at least partly on entitlement as specified by the entitlement vector. | 01-15-2015 |
20150033232 | AUTOMATIC PARALLELISM TUNING FOR APPLY PROCESSES - Techniques are provided for automatic parallelism tuning. At least one batch of change records is assigned to one or more apply processes in a set of active apply processes. A first throughput value is periodically determined based on a number of processed change records in a first time interval. An increment adjustment is periodically performed, including adding an additional apply process, determining a second throughput value, and removing the additional apply process from the set of active apply processes if the second throughput value is not greater than a previous first throughput value by at least an increment threshold. A decrement adjustment is periodically performed, including removing an apply process, determining a third throughput value, and replacing the removed apply process in the set of active apply processes if the third throughput value is not greater than the previous first throughput value by at least a decrement threshold. | 01-29-2015 |
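The increment/decrement tuning loop in the 20150033232 entry can be sketched as one tuning round. This is a hedged sketch under assumptions: `measure()` is a hypothetical callback returning throughput for a given number of apply processes, and the threshold semantics follow the abstract's wording.

```python
def tune(n_procs, measure, inc_threshold, dec_threshold):
    """One tuning round over the set of active apply processes."""
    baseline = measure(n_procs)  # first throughput value

    # Increment adjustment: add an apply process; keep it only if
    # throughput improves on the baseline by at least inc_threshold.
    if measure(n_procs + 1) >= baseline + inc_threshold:
        n_procs += 1
        baseline = measure(n_procs)

    # Decrement adjustment: remove an apply process; restore it unless
    # throughput still beats the baseline by at least dec_threshold
    # (i.e., removal actually reduced contention).
    if n_procs > 1 and measure(n_procs - 1) >= baseline + dec_threshold:
        n_procs -= 1
    return n_procs

saturating = lambda n: min(n, 4) * 10  # throughput flattens past 4 processes
tuned = tune(3, saturating, inc_threshold=5, dec_threshold=5)
```

With the saturating model, tuning from 3 processes settles at 4: the fourth process clears the increment threshold, while dropping back to 3 does not clear the decrement threshold.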
20150082312 | Methods And Systems For Queuing Events - This disclosure relates to methods and systems for queuing events. In one aspect, a method is disclosed that receives or creates an event and inserts the event into a queue. The method determines at least one property of the event and associates a priority with the event based on the property. The method then processes the event in accordance with its priority. | 03-19-2015 |
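The property-to-priority flow in the 20150082312 entry can be illustrated with a heap-backed queue. This is a minimal sketch under assumptions: events are dicts, the examined property is an event `"type"`, and the `PRIORITY` mapping and function names are invented for illustration.

```python
import heapq
import itertools

PRIORITY = {"critical": 0, "normal": 1, "bulk": 2}  # illustrative mapping
_tie = itertools.count()  # tie-breaker keeps insertion order among equals

def enqueue(queue, event):
    prio = PRIORITY.get(event.get("type"), 1)  # priority from the property
    heapq.heappush(queue, (prio, next(_tie), event))

def process_next(queue):
    _, _, event = heapq.heappop(queue)         # lowest number pops first
    return event

q = []
enqueue(q, {"type": "bulk", "id": 1})
enqueue(q, {"type": "critical", "id": 2})
enqueue(q, {"type": "normal", "id": 3})
```

The counter ensures events of equal priority are processed in arrival order, which a bare tuple comparison on dicts would not guarantee.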
20150089505 | SYSTEMS AND METHODS FOR FAULT TOLERANT BATCH PROCESSING IN A VIRTUAL ENVIRONMENT - A system for fault tolerant batch processing in a virtual environment is configured to perform batch job execution, the system includes computing devices configured as a virtualized grid cluster by means of a virtualization platform, the cluster includes a centralized storage repository, a grid manager deployed on an instantiated virtual machine and a message bus whereby data and messages are exchanged between the grid manager and one or more grid nodes. The grid manager is configured to manage one or more incoming job requests, queue one or more of the received job requests in a job execution queue and monitor one or more virtual grid nodes. | 03-26-2015 |
20150113535 | PARALLEL DATA PROCESSING SYSTEM, COMPUTER, AND PARALLEL DATA PROCESSING METHOD - A parallel data processing system includes a parallel data processing execution unit for reading a data from a data set including a first data set that includes a plurality of first data and a second data set that includes a plurality of second data and executing processing. The parallel data processing execution unit (A) reads the first data from the first data set, and acquires a first value from the first data based on first format information acquired from an application, (B) generates one or more threads for respectively reading one or more second data corresponding to the first value from the second data set based on first reference information acquired from the application, (C) executes (A) and (B) on one or more first data in the first data set, and (D) executes a plurality of the threads in parallel. | 04-23-2015 |
20150143375 | TRANSACTION EXECUTION IN SYSTEMS WITHOUT TRANSACTION SUPPORT - Interaction between isolated partitioned execution environments may be permitted through transmission of messages. A method for interaction between partitions may include receiving, by a processor, a request message comprising a request to execute a transaction application code; creating, by the processor, an isolated execution environment; starting, by the processor, an operating system in the isolated execution environment; and executing, by the processor, the transaction application code in the operating system. | 05-21-2015 |
20150143376 | SYSTEM FOR ERROR CHECKING OF PROCESS DEFINITIONS FOR BATCH PROCESSES - A system for processing a batch job comprises a processor and a memory. The processor is configured to receive a batch job comprising a sequential or parallel flow of operations, wherein each operation has a defined input type and a defined output type. The processor is further configured to verify that the batch job can run successfully, wherein verifying includes checking that a first operation output defined type is compatible with a second operation input defined type when a first operation output is connected to a second operation input, and wherein verifying includes checking that a parameter used by a calculation in an operation is input to the operation. The memory is coupled to the processor and configured to provide the processor with instructions. | 05-21-2015 |
20150150009 | MULTIPLE DATASTREAMS PROCESSING BY FRAGMENT-BASED TIMESLICING - Systems and methods for multi-channel signal processing by virtue of packet-based time-slicing with single processing core logic. The processing core logic is configured to receive data streams from the multiple communication channels at a data processing unit, and process data fragments of the data streams in a time-sliced manner. The processing core logic can switch from processing a first data fragment of a first data stream to processing a first data fragment of a second data stream at an end of a time slice, wherein the time slice is determined by a fragment boundary associated with the data fragment of the first data stream. | 05-28-2015 |
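The fragment-boundary switching in the 20150150009 entry amounts to round-robin interleaving where each time slice ends at a fragment boundary. A hedged sketch, assuming streams are pre-split into fragments and `handle` is a hypothetical per-fragment processor:

```python
from collections import deque

def process_streams(streams, handle):
    """Interleave streams one fragment per time slice on a single core;
    each slice ends at a fragment boundary, then the core switches."""
    ready = deque(deque(frags) for frags in streams if frags)
    order = []
    while ready:
        frags = ready.popleft()
        fragment = frags.popleft()
        handle(fragment)            # one fragment == one time slice
        order.append(fragment)
        if frags:
            ready.append(frags)     # the stream rejoins the rotation
    return order

order = process_streams([["a1", "a2"], ["b1"]], handle=lambda f: None)
```

Tying the slice to a fragment boundary, rather than a fixed timer, avoids switching mid-fragment and keeps per-fragment state out of the scheduler.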
20150150010 | METHOD OF EXECUTING ORDERED TRANSACTIONS IN MULTIPLE THREADS, COMPUTER FOR EXECUTING THE TRANSACTIONS, AND COMPUTER PROGRAM THEREFOR - Techniques to prevent a chain of or frequent occurrence of aborts when ordered transactions are executed in multiple threads. Executing ordered transactions in multiple threads with detection of occurrence of an abort in at least one of the transactions in the multiple threads and the barrier synchronization of at least two threads including a thread in which the abort is detected. | 05-28-2015 |
20150309834 | SYSTEM AND METHOD FOR SUPPORTING COMMON TRANSACTION IDENTIFIER (XID) OPTIMIZATION BASED ON RESOURCE MANAGER (RM) INSTANCE AWARENESS IN A TRANSACTIONAL ENVIRONMENT - A system and method can support transaction processing in a transactional environment. A coordinator for a global transaction operates to propagate a common transaction identifier and information for a resource manager instance to one or more participants of the global transaction in the transactional environment. The coordinator allows said one or more participants, which share resource manager instance with the coordinator, to use the common transaction identifier, and can process the global transaction for said one or more participants that share the resource manager instance using one transaction branch. | 10-29-2015 |
20150309835 | SYSTEM AND METHOD FOR SUPPORTING TRANSACTION AFFINITY BASED ON RESOURCE MANAGER (RM) INSTANCE AWARENESS IN A TRANSACTIONAL ENVIRONMENT - A system and method can support transaction processing in a transactional environment. A transactional system operates to route a request to a transactional server, wherein the transactional server is connected to a resource manager (RM) instance. Furthermore, the transactional system can assign an affinity context to the transactional server, wherein the affinity context indicates the RM instance that the transactional server is associated with, and the transactional system can route one or more subsequent requests that are related to the request to the transactional server based on the affinity context. | 10-29-2015 |
20150309837 | SYSTEM AND METHOD FOR SUPPORTING RESOURCE MANAGER (RM) INSTANCE AWARENESS IN A TRANSACTIONAL ENVIRONMENT - A system and method can support transaction processing in a transactional environment. A transactional server operates to receive resource manager (RM) instance information from a data source that is associated with one or more RM instances, wherein the received instance information allows the transactional server to be aware of which RM instance that the transactional server is currently connected to. Furthermore, the transactional server operates to save the received instance information into one or more tables that are associated with the transactional server. Then, the transactional server can process a global transaction based on the instance information saved in the one or more tables. | 10-29-2015 |
20150324222 | SYSTEM AND METHOD FOR ADAPTIVELY INTEGRATING A DATABASE STATE NOTIFICATION SERVICE WITH A DISTRIBUTED TRANSACTIONAL MIDDLEWARE MACHINE - A system and method can handle various database state notifications in a transactional middleware machine environment. The system can connect one or more transaction servers to a database service, wherein the database service is associated with a notification service. Furthermore, a notification service client that is associated with said one or more transaction servers can receive one or more events from the notification service, wherein said one or more events indicates one or more state changes in the database service. Then, one or more transaction servers operate to adaptively respond to the one or more state changes in the database service. | 11-12-2015 |
20150324223 | SYSTEM AND METHOD FOR PROVIDING SINGLE GROUP MULTIPLE BRANCHES BASED ON INSTANCE AWARENESS - A system and method can provide high throughput transactions in a transactional system. A system and method can, via a transaction manager, obtain information on a plurality of resource managers. The transaction manager can further manage a plurality of transaction branches, where each of the plurality of transaction branches can be associated with a different one of the plurality of resource managers. The methods and systems can associate a transaction identifier with each of the plurality of transaction branches, which can result in a plurality of transaction identifiers, where each of the plurality of transaction identifiers can include a branch identifier for each of the plurality of transaction branches. The methods and systems can perform one or more transactional operations on the plurality of transaction branches based on the different transaction identifiers. | 11-12-2015 |
20150347176 | TRANSACTION DIGEST GENERATION DURING NESTED TRANSACTIONAL EXECUTION - Generating a digest in a transactional memory environment for performing transactional executions, the transactional memory environment supporting transaction nesting is provided. Included is generating for a transaction, by a computer system, a computed digest based on the execution of at least one of a plurality of instructions of the transaction; based on beginning a nested transaction, executed within the transactional region of the transaction, saving a snapshot of the computed digest as a nesting level snapshot; beginning execution of the nested transaction: updating, by the computer system, the computed digest based on the execution of at least one of a plurality of instructions of the nested transaction; and based on an abort of the nested transaction, restoring the computed digest from the nesting level snapshot and restarting the nested transaction. | 12-03-2015 |
20150363231 | DATA VISUALIZATION AND ACCUMULATION DEVICE FOR CONTROLLING STEPS IN CONTINUOUS PROCESSING SYSTEM - A visualization device capable of visualizing the process flow of an entire multiple-item continuous processing system on a time-series basis and investigating the cause of a system loss. The device manages the operating state of a first processing step to which a step loss traces back in a multiple-item continuous process in which batch processing is performed item by item. Icons indicating the various kinds of items show the progress of the operation in the processing step sequentially, item by item, on a time-series basis in a matrix of cells, wherein the vertical length of the matrix is divided into cells of operating hours and the rows of the respective operating hours are partitioned by each single batch processing time of the first processing step. The data is accumulated in the matrix and utilized while being visualized. | 12-17-2015 |
20150378776 | SCHEDULING IN A MULTICORE ARCHITECTURE - This invention relates to scheduling threads in a multicore processor. Executable transactions may be scheduled using at least one distribution queue, which lists executable transactions in order of eligibility for execution, and multilevel scheduler which comprises a plurality of linked individual executable transaction schedulers. Each of these includes a scheduling algorithm for determining the most eligible executable transaction for execution. The most eligible executable transaction is outputted from the multilevel scheduler to the at least one distribution queue. | 12-31-2015 |
20160004555 | DATA PROCESSING APPARATUS AND DATA PROCESSING METHOD - A data processing apparatus generates by a stream processing control program, for a time-series first stream data group of stream data out of a time-series stream data sequence, first vector data including elements acquired by collecting respective pieces of stream data of the time-series first stream data group; generates, by the stream processing control program, for a time-series second stream data group including, as a head, a piece of intermediate stream data of the time-series first stream data group and having the same number of pieces of data as the time-series first stream data group, second vector data including elements acquired by collecting respective pieces of stream data of the time-series second stream data group; and inputs, by the stream processing control program, the first and second vector data generated respectively to a batch program to control the batch program to carry out a batch processing. | 01-07-2016 |
20160004556 | DYNAMIC PREDICTION OF HARDWARE TRANSACTION RESOURCE REQUIREMENTS - A transactional memory system dynamically predicts the resource requirements of hardware transactions. A processor of the transactional memory system predicts resource requirements of a first hardware transaction to be executed based on any one of a resource hint and a previous execution of a prior hardware transaction. The processor allocates resources for the first hardware transaction based on the predicted resource requirements. The processor executes the first hardware transaction. The processor saves resource usage information of the first hardware transaction for future prediction. | 01-07-2016 |
20160004557 | ABORT REDUCING METHOD, ABORT REDUCING APPARATUS, AND ABORT REDUCING PROGRAM - A system and method for reducing the number of aborts caused by a runtime helper being called during the execution of a transaction block. When a runtime helper is called during the execution of a transaction block while a program using hardware transactional memory is running, the runtime helper passes ID information indicating the type of runtime helper to an abort handler. When there is an abort caused by a call to a runtime helper, the abort handler responds by acquiring the ID information of the runtime helper that caused the abort, disables the transaction block with respect to a specific type of runtime helper, executes the non-transactional path corresponding to the transaction block, and re-enables the transaction block when predetermined conditions are satisfied. | 01-07-2016 |
20160011901 | Dynamic Shard Allocation Adjustment | 01-14-2016 |
20160026495 | EVENT PROCESSING SYSTEMS AND METHODS - An event processing system includes a multi-agent based system, which includes a core engine configured to define and deploy a plurality of agents configured to perform a first set of programmable tasks defined by one or more users. The first set of tasks operates with real time data. The multi-agent based system also includes a monitoring engine configured to monitor a lifecycle of the agents, communication amongst the agents and processing time of the tasks. The multi-agent based system further includes a computing engine coupled to the core engine and configured to execute the first set of tasks. The event processing system includes a batch processing system configured to enable deployment of a second set of programmable tasks that operates with non-real time data and a studio coupled to the multi-agent based system and configured to enable users to manage the multi-agent based system and the batch processing system. | 01-28-2016 |
20160034301 | Identifying Performance Bottleneck of Transaction in Transaction Processing System - A mechanism is provided for identifying a performance bottleneck of a transaction in a transaction processing system. At a predefined time point, status information of an interaction between the transaction and a processing component among one or more processing components in the transaction processing system is collected. A duration of the interaction on the basis of the status information is determined. In response to the duration exceeding a predefined threshold, the interaction is identified as the performance bottleneck of the transaction in order to make changes to the transaction processing system thereby improving performance. | 02-04-2016 |
20160062790 | DESIGN ANALYSIS OF DATA INTEGRATION JOB - A request for analysis of a data integration job is received that includes one or more features and criteria for the analysis. Each feature is extracted from a job model representing the job by invoking a corresponding analytical rule for each feature. The analytical rule includes one or more operations and invoking the analytical rule performs the operations to analyze one or more job components associated with the corresponding feature as represented in the job model and to extract information pertaining to that feature. | 03-03-2016 |
20160070591 | DISTRIBUTED PROCESSING SYSTEM, DISTRIBUTED PROCESSING DEVICE, DISTRIBUTED PROCESSING METHOD, AND DISTRIBUTED PROCESSING PROGRAM - A distributed processing system in which a plurality of computers are interconnected, wherein each of the computers is provided with a module loader which loads each module and performs initialization processing, a metadata management unit which acquires metadata including a command for the initialization processing from a previously provided storage means or another computer, a file management unit which reads and writes a file within the storage means or the other computer, and an execution container which executes a distributed batch application. The file management unit examines whether or not an execution region including an execution code of a corresponding module is present in the storage means after the initialization processing, and when the execution region is not present, loads the execution code from the other computer and writes the loaded execution code as the execution region. | 03-10-2016 |
20160077867 | SYSTEM AND METHOD FOR SUPPORTING A SCALABLE CONCURRENT QUEUE IN A DISTRIBUTED DATA GRID - A scalable concurrent queue includes a central queue associated with multiple temporary queues for holding batches of nodes from multiple producers. When a producer thread or service performs an insertion operation on the scalable concurrent queue, the producer inserts one or more nodes into a batch in one of the multiple temporary queues associated with the central queue. Subsequently, the producer (or another producer) inserts the batch held in the temporary queue into the central queue. Contention between the multiple producers is reduced by providing multiple temporary queues into which the producers may insert nodes, and also by inserting nodes in the central queue in batches rather than one node at a time. The scalable concurrent queue scales to serve a large number of producers with reduced contention, thereby improving performance in a distributed data grid. | 03-17-2016 |
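The batching idea in the 20160077867 entry can be sketched with per-producer temporary buffers that flush whole batches into the central queue, so the central lock is taken once per batch rather than once per node. A hedged sketch; the class and its sizing are illustrative, not the data grid's actual structures.

```python
import threading
from collections import deque

class BatchedQueue:
    def __init__(self, batch_size=8):
        self.central = deque()
        self.lock = threading.Lock()
        self.batch_size = batch_size
        self.local = threading.local()  # one temporary queue per producer

    def put(self, node):
        batch = getattr(self.local, "batch", None)
        if batch is None:
            batch = self.local.batch = []
        batch.append(node)              # no central contention here
        if len(batch) >= self.batch_size:
            self.flush()

    def flush(self):
        batch = getattr(self.local, "batch", [])
        if batch:
            with self.lock:             # single locked splice per batch
                self.central.extend(batch)
            self.local.batch = []
```

Producers only contend on the central lock once per `batch_size` insertions, which is the contention-reduction effect the abstract describes.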
20160077868 | TRANSACTIONAL UPDATING IN DYNAMIC DISTRIBUTED WORKLOADS - A workload manager is operable with a distributed transaction processor having a plurality of processing regions and comprises: a transaction initiator region for initiating a transaction; a transaction router component for routing an initiated transaction to one of the plurality of processing regions; and an affinity controller component for restricting transaction routing operations to maintain affinities; the affinity controller component being characterised by comprising a unit of work affinity component operable with a resource manager at the one of the plurality of processing regions to activate an affinity responsive to completion of a recoverable data operation at the one of the plurality of processing regions. | 03-17-2016 |
20160092273 | SYSTEM AND METHOD FOR MANAGING THE ALLOCATING AND FREEING OF OBJECTS IN A MULTI-THREADED SYSTEM - A memory management system for managing objects which represent memory in a multi-threaded operating system extracts the ID of the home free-list from the object header to determine whether the object is remote and adds the object to a remote object list if the object is determined to be remote. The memory management system determines whether the number of objects on the remote object list exceeds a threshold. If the threshold is exceeded, the system batch-removes the objects on the remote object list and then adds those objects to the appropriate one or more remote home free-lists. | 03-31-2016 |
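The remote-object batching in the 20160092273 entry can be sketched directly. This is a minimal illustration under assumptions: each freed object's "header" is represented by an explicit home free-list ID argument, and the class and threshold are invented here.

```python
from collections import defaultdict

class Allocator:
    def __init__(self, home_id, threshold=4):
        self.home_id = home_id
        self.threshold = threshold
        self.remote = []                       # remote object list
        self.free_lists = defaultdict(list)    # home_id -> free-list

    def free(self, obj_home_id, obj):
        if obj_home_id == self.home_id:
            self.free_lists[self.home_id].append(obj)  # local fast path
            return
        self.remote.append((obj_home_id, obj))
        if len(self.remote) > self.threshold:
            # Batch-remove: return each object to its home free-list.
            for home, o in self.remote:
                self.free_lists[home].append(o)
            self.remote.clear()
```

Deferring remote frees until the threshold trips amortizes the cost of touching other threads' free-lists over a whole batch.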
20160098293 | SYSTEM, METHOD, AND SOFTWARE FOR CONTROLLED INTERRUPTION OF BATCH JOB PROCESSING - This disclosure provides various embodiments of software, systems, and techniques for controlled interruption of batch job processing. In one instance, a tangible computer readable medium stores instructions for managing batch jobs, where the instructions are operable when executed by a processor to identify an interruption event associated with a batch job queue. The instructions trigger an interruption of an executing batch job within the job queue such that the executed portion of the job is marked by a restart point embedded within the executable code. The instructions then restart the interrupted batch job at the restart point. | 04-07-2016 |
20160103702 | LOW LATENCY ARCHITECTURE WITH DIRECTORY SERVICE FOR INTEGRATION OF TRANSACTIONAL DATA SYSTEM WITH ANALYTICAL DATA STRUCTURES - Low latency communication between a transactional system and analytic data store resources can be accomplished through a low latency key-value store with purpose-designed queues and status reporting channels. Posting by the transactional system to input queues and complementary posting by analytic system workers to output queues is described. On-demand production and splitting of analytic data stores requires significant elapsed processing time, so a separate process status reporting channel is described to which workers can periodically post their progress, thereby avoiding progress inquiries and interruptions of processing to generate report status. This arrangement produces low latency and reduced overhead for interactions between the transactional system and the analytic data store system. | 04-14-2016 |
20160103708 | SYSTEM AND METHOD FOR TASK EXECUTION IN DATA PROCESSING - A system and method for executing one or more tasks in data processing are disclosed. Data is received from at least one channel among multiple channels in order to generate a corresponding result. A set of tasks is generated to process the received data; the tasks receive the data as an input argument for generating the corresponding result. An idle worker node from a plurality of worker nodes is selected for executing the set of tasks in a pipeline. The set of tasks is executed by the selected worker node in order to generate the corresponding result. The results are stored in the system for a predefined time. | 04-14-2016 |
20160110216 | SYSTEM AND METHOD FOR SUPPORTING TRANSACTION AFFINITY BASED REQUEST HANDLING IN A MIDDLEWARE ENVIRONMENT - A system and method can support transaction processing in a middleware environment. A processor, such as a remote method invocation stub in the middleware environment, can be associated with a transaction, wherein the transaction is from a first cluster. Then, the processor can handle a transactional request that is associated with the transaction, wherein the transactional request is to be sent to the first cluster. Furthermore, the processor can route the transactional request to a said cluster member in the first cluster, which is an existing participant of the transaction. | 04-21-2016 |
20160110218 | EFFICIENCY FOR COORDINATED START INTERPRETIVE EXECUTION EXIT FOR A MULTITHREADED PROCESSOR - A system and method of executing a plurality of threads, including a first thread and a set of remaining threads, on a computer processor core. The system and method include determining that a start interpretive execution exit condition exists; determining that the computer processor core is within a grace period; and entering by the first thread a start interpretive execution exit sync loop without signaling to any of the set of remaining threads. In turn, the first thread remains in the start interpretive execution exit sync loop until the grace period expires or each of the remaining threads enters a corresponding start interpretive execution exit sync loop. | 04-21-2016 |
20160117188 | INCREMENTAL PARALLEL PROCESSING OF DATA - One example method includes identifying synchronous code including instructions specifying a computing operation to be performed on a set of data; transforming the synchronous code into a pipeline application including one or more pipeline objects; identifying a first input data set on which to execute the pipeline application; executing the pipeline application on a first input data set to produce a first output data set; after executing the pipeline application on the first input data set, identifying a second input data set on which to execute the pipeline application; determining a set of differences between the first input data set and second input data set; and executing the pipeline application on the set of differences to produce a second output data set. | 04-28-2016 |
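The diff-based re-execution in the 20160117188 entry can be sketched for the simple case. This is a hedged illustration assuming the pipeline operation is per-record (so processing only the added records, and dropping results for removed ones, is valid); all names here are invented.

```python
def run_pipeline(op, records):
    """Full run: apply the per-record operation to every record."""
    return {r: op(r) for r in records}

def run_incremental(op, prev_in, prev_out, new_in):
    """Re-run on the set of differences between the two input sets."""
    added = new_in - prev_in
    removed = prev_in - new_in
    out = {k: v for k, v in prev_out.items() if k not in removed}
    out.update(run_pipeline(op, added))  # execute only on the diff
    return out

square = lambda x: x * x
first_out = run_pipeline(square, {1, 2})
second_out = run_incremental(square, {1, 2}, first_out, {2, 3})
```

When the diff is small relative to the data set, the incremental run does a fraction of the full run's work while producing the same output.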
20160132357 | DATA STAGING MANAGEMENT SYSTEM - Batch job data staging combining synchronous and asynchronous staging. In pre-processing, a stage-in source file and a stage-out target file, both in permanent storage, are identified using a batch script. From the data amounts, the times for stage-in to and stage-out from temporary storage are estimated. Stage-in is based on the estimated time, stage-out is asynchronous, and each asynchronous staging is classified as short or long term depending on the time, with each staging recorded in a table. If a source file is modified, incremental staging is added to the table. Stage-in is performed using a staging list that schedules the batch jobs, progress is monitored in the table, and resources may be allocated for the jobs' nodes without waiting for stage-in to complete. The job generates results in the temporary storage, and in post-processing, stage-out transfers the results to the target file in permanent storage. | 05-12-2016 |
20160147560 | Light-Weight Lifecycle Management of Enqueue Locks - In an example embodiment, a request for an enqueue lock for a first piece of data is received from a client application. At an enqueue server separate from an application server instance, a light-weight enqueue session is then created, including generating a light-weight enqueue session identification for the light-weight enqueue session. An enqueue lock for the first piece of data is stored in the light-weight enqueue session. The light-weight enqueue session identification is then sent to the client application. In response to a detection that a session between the client application and the application server instance has been terminated, all enqueue locks in the light-weight enqueue session are deleted and the light-weight enqueue session is deleted. | 05-26-2016 |
20160147561 | INFORMATION PROCESSING METHOD AND INFORMATION PROCESSING SYSTEM - A job management unit manages a registered job by associating the job with an identifier specific to the job, and reports a request to execute the job, together with the associated identifier, to a job execution unit on the job's execution date and time. The job execution unit manages a generation file of the next generation, which is created by executing the job, by associating the generation file with the identifier reported together with the execution request. If a generation file associated with the reported identifier already exists when executing the job based on the execution request, the job execution unit executes a designated operation. | 05-26-2016 |
20160147562 | BATCH SCHEDULING - There is provided a method to schedule execution of a plurality of batch jobs by a computer system. The method includes: reading one or more constraints that constrain the execution of the plurality of batch jobs by the computer system and a current load on the computer system; grouping the plurality of batch jobs into at least one run frequency that includes at least one batch job; setting the at least one run frequency to a first run frequency; computing a load generated by each batch job in the first run frequency on the computer system based on each batch job's start time; and determining an optimized start time for each batch job in the first run frequency that meets the one or more constraints and that distributes each batch job's load on the computer system using each batch job's computed load and the current load. | 05-26-2016 |
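The start-time optimization in the 20160147562 entry can be sketched as a greedy pass over one run frequency: each job's load is spread over its duration, and the start time chosen is the one that keeps the system's peak load lowest. A hedged sketch; the load model (one value per time slot, `len(current_load)` slots in the horizon) is an assumption for illustration.

```python
def schedule(jobs, horizon, current_load):
    """jobs: list of (name, duration, load). Returns {name: start_time}."""
    profile = list(current_load)               # load per time slot
    starts = {}
    for name, duration, load in jobs:
        best_start, best_peak = 0, float("inf")
        for s in range(horizon - duration + 1):
            peak = max(profile[s:s + duration]) + load
            if peak < best_peak:               # flattest placement so far
                best_start, best_peak = s, peak
        starts[name] = best_start
        for t in range(best_start, best_start + duration):
            profile[t] += load                 # commit the job's load
    return starts

plan = schedule([("a", 2, 5), ("b", 2, 5)], horizon=4, current_load=[0, 0, 0, 0])
```

The greedy pass naturally spreads equal jobs apart: once "a" occupies slots 0-1, the lowest-peak slot for "b" is 2.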
20160162328 | SYNCHRONOUS BUSINESS PROCESS EXECUTION ENGINE FOR ACTION ORCHESTRATION IN A SINGLE EXECUTION TRANSACTION CONTEXT - An asynchronous business process specification declared in a procedural markup language comprising an activity flow model and a plurality of activities is received. An indication is received that a subset of the plurality of activities is to be synchronously executed with reduced latency. All process execution related objects are fetched once into memory. The synchronous subset is executed in a single execution transaction context. | 06-09-2016 |
20160162329 | SOFTWARE ENABLED AND DISABLED COALESCING OF MEMORY TRANSACTIONS - A program controls coalescing of outermost memory transactions, the coalescing causing committing of memory store data to memory for a first transaction to be done at transaction execution (TX) end of a second transaction. Optimized machine instructions are generated based on an intermediate representation of a program, wherein either two atomic tasks are merged into a single coalesced transaction or are executed as separate transactions. | 06-09-2016 |
20160170812 | TECHNOLOGIES FOR EFFICIENT SYNCHRONIZATION BARRIERS WITH WORK STEALING SUPPORT | 06-16-2016 |
20160179573 | METHOD FOR PROVIDING MAINFRAME STYLE BATCH JOB PROCESSING ON A MODERN COMPUTER SYSTEM | 06-23-2016 |
20160188362 | LIBRARY APPARATUS FOR REAL-TIME PROCESS, AND TRANSMITTING AND RECEIVING METHOD THEREOF - A library transmission method for a real-time process in a client, which includes extracting a next target address (NextTargetAddress) from workflow data, and transmitting data to an agent of a library apparatus that corresponds to a corresponding target address. | 06-30-2016 |
20160253218 | METHOD AND APPARATUS FOR CONTROLLING POWER OUTPUT FROM ELECTRONIC DEVICE TO EXTERNAL ELECTRONIC DEVICE | 09-01-2016 |
20160378540 | MULTITHREADED TRANSACTIONS - Embodiments relate to multithreaded transactions. An aspect includes assigning a same transaction identifier (ID) corresponding to the multithreaded transaction to a plurality of threads of the multithreaded transaction, wherein the plurality of threads execute the multithreaded transaction in parallel. Another aspect includes determining one or more memory areas that are owned by the multithreaded transaction. Another aspect includes receiving a memory access request from a requester that is directed to a memory area that is owned by the transaction. Yet another aspect includes based on determining that the requester has a transaction ID that matches the transaction ID of the multithreaded transaction, performing the memory access request without aborting the multithreaded transaction. | 12-29-2016 |
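The ownership check in the 20160378540 entry can be sketched as a simple ID comparison: an access from a thread carrying the same transaction ID proceeds without an abort, while a conflicting ID aborts the transaction. Purely illustrative data structures, invented here.

```python
class MultithreadedTx:
    def __init__(self, tx_id, owned_areas):
        self.tx_id = tx_id
        self.owned = set(owned_areas)  # memory areas owned by the transaction
        self.aborted = False

    def access(self, requester_tx_id, area):
        if area not in self.owned:
            return "not-owned"          # outside the transaction's footprint
        if requester_tx_id == self.tx_id:
            return "granted"            # same transaction ID: no abort
        self.aborted = True             # conflicting access aborts the tx
        return "abort"
```

Sharing one transaction ID across the parallel threads is what lets them touch each other's owned memory without triggering the conflict path.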
20160378572 | OPTIMIZING THE INITIALIZATION OF A QUEUE VIA A BATCH OPERATION - A method, a computer program product, and a system for performing batch processing are provided. The batch processing includes initializing a set of elements corresponding to a set of resources to produce an initialized group and chaining the initialized group to previously initialized elements to produce an element batch, when the previously initialized elements are available. The batch processing further includes setting a system lock on the set of resources after the element batch is produced; executing a service routine to move the element batch to a queue by referencing first and last elements of the element batch; and releasing the system lock on the set of resources once the service routine is complete. | 12-29-2016 |
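The chain-then-splice idea in the 20160378572 entry can be sketched with a linked list: elements are initialized and chained off-queue, then the whole batch is moved into the queue under one lock by referencing only its first and last elements. A hedged sketch; class and function names are invented, and a `threading.Lock` stands in for the system lock.

```python
import threading

class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def build_batch(values):
    """Initialize elements and chain them off-queue; return (head, tail)."""
    head = tail = None
    for v in values:
        node = Node(v)
        if head is None:
            head = tail = node
        else:
            tail.next = node
            tail = node
    return head, tail

class Queue:
    def __init__(self):
        self.head = self.tail = None
        self.lock = threading.Lock()   # stands in for the system lock

    def enqueue_batch(self, head, tail):
        with self.lock:                # one locked splice per batch
            if self.tail is None:
                self.head = head
            else:
                self.tail.next = head
            self.tail = tail

q = Queue()
q.enqueue_batch(*build_batch([1, 2, 3]))
```

The splice touches only two pointers regardless of batch size, which is why holding the lock only for the move, not the initialization, shortens the critical section.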
20190146830 | TEMPLATE-DRIVEN MULTI-TENANT WORKFLOW PROCESSING | 05-16-2019 |
20190146831 | THREAD SWITCH FOR ACCESSES TO SLOW MEMORY | 05-16-2019 |