Document | Title | Date |
20080209439 | Method for Carrying Out the Data Transfer Between Program Elements of a Process, Buffer Object for Carrying Out the Data Transfer, and Printing System - In a method or system for implementation of a transfer of data between two program elements of a process, a buffer object is provided between and linking two program elements. The buffer object comprises a buffer and control methods. A control method of a buffer object informs one of the linked program elements when the buffer is full or empty, the one linked program element beginning with a reading of the data from the buffer or with a writing of the data into the buffer. The two linked program elements comprise one of the program elements writing the data into the buffer and the other program element reading data from the buffer, the program element writing the data driving the data transfer such that via the writing of the data into the buffer it causes the buffer object, via an informing of the program element reading the data, to call this program element and thereby trigger the reading of the data from the buffer. Alternatively, the two linked program elements comprise one of the program elements writing data into the buffer and the other program element reading data from the buffer, the program element reading the data driving the data transfer such that via the reading of the data from the buffer it causes the buffer object, via an informing of the program element writing the data, to call this program element and thereby trigger the writing of further data into the buffer. | 08-28-2008 |
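The writer-driven variant of this handshake can be sketched in a few lines of Python. All names here (`BufferObject`, `reader`, `write`) are illustrative, not taken from the patent: the buffer object links a writing element to a reading callback and invokes the reader when the buffer fills.

```python
class BufferObject:
    """Links a writer and a reader; notifies the reader when the buffer fills."""

    def __init__(self, capacity, reader):
        self.capacity = capacity
        self.reader = reader          # callable invoked when the buffer is full
        self.items = []

    def write(self, item):
        self.items.append(item)
        if len(self.items) >= self.capacity:
            # Buffer full: inform the reading program element, which
            # drains the buffer before the writer continues.
            self.reader(self.items[:])
            self.items.clear()

received = []
buf = BufferObject(capacity=3, reader=received.extend)
for value in range(6):
    buf.write(value)
# After six writes with capacity 3, the reader was invoked twice.
```

The reader-driven alternative would mirror this: the buffer object invokes a writer callback whenever a read empties the buffer.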
20080222650 | Method And System For Recovering Stranded Outbound Messages - A method for recovering and requeueing lost messages is disclosed. The lost messages are intended for delivery from a first computer program to a second computer program but are instead stranded in locations internal to the first program. The method extracts one or more of these stranded messages from the location internal to the first program, determines the original destination of each stranded message and delivers that message to the second program. Delivery of each message to the second program is facilitated by using message queues provided by middleware type software programs. The desired middleware program can be selected by the user of the method, and the method provides for the necessary formatting of each recovered message according to the selected middleware. Absent use of the present method, these stranded messages would not be routed to their original destinations. | 09-11-2008 |
20080229329 | METHOD, APPARATUS AND COMPUTER PROGRAM FOR ADMINISTERING MESSAGES WHICH A CONSUMING APPLICATION FAILS TO PROCESS - Disclosed is a method for administering messages. In response to a determination that one or more consuming applications have failed to process the same message on a queue a predetermined number of times, the message is made unavailable to consuming applications. Responsive to determining that a predetermined number of messages have been made unavailable to consuming applications, one or more consuming applications are prevented from consuming messages from the queue. | 09-18-2008 |
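The two thresholds described in this abstract — sidelining a message after repeated failures, then blocking consumers once enough messages have been sidelined — can be sketched as follows. The class and attribute names are hypothetical, chosen only to illustrate the mechanism:

```python
class AdministeredQueue:
    def __init__(self, fail_limit=3, unavailable_limit=2):
        self.fail_limit = fail_limit                 # failures before a message is hidden
        self.unavailable_limit = unavailable_limit   # hidden messages before the queue stops
        self.failures = {}                           # message id -> failed-processing count
        self.unavailable = set()
        self.consumers_blocked = False

    def report_failure(self, msg_id):
        self.failures[msg_id] = self.failures.get(msg_id, 0) + 1
        if self.failures[msg_id] >= self.fail_limit:
            self.unavailable.add(msg_id)             # make the message unavailable
        if len(self.unavailable) >= self.unavailable_limit:
            self.consumers_blocked = True            # prevent further consumption

q = AdministeredQueue()
for _ in range(3):
    q.report_failure("m1")   # third failure sidelines m1
for _ in range(3):
    q.report_failure("m2")   # second sidelined message blocks the queue
```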
20080263564 | SYSTEM AND METHOD FOR MESSAGE SERVICE WITH UNIT-OF-ORDER - The present invention enables “unit-of-order”, which allows a message producer to group messages into a single unit. It guarantees that messages are not only delivered to consumers in order, they are also processed in order. The unit-of-order will be delivered to consumers as one unit and only one consumer will process messages from the unit at a time. The processing of a single message is complete when it is acknowledged, committed, recovered, or rolled back. Until message processing for a message is complete, the remaining unprocessed messages for that unit-of-order are blocked. | 10-23-2008 |
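The unit-of-order constraint — in-order processing with at most one consumer active per unit — can be sketched with a per-unit lock. This is a minimal illustration under assumed names, not the patented implementation:

```python
from collections import defaultdict, deque

class UnitOfOrderQueue:
    """Groups messages by unit; at most one consumer processes a unit at a time."""

    def __init__(self):
        self.units = defaultdict(deque)   # unit name -> pending messages, in order
        self.locked = set()               # units with a message currently in process

    def send(self, unit, message):
        self.units[unit].append(message)

    def receive(self, unit):
        # While a unit is locked, its remaining messages stay blocked.
        if unit in self.locked or not self.units[unit]:
            return None
        self.locked.add(unit)
        return self.units[unit].popleft()

    def complete(self, unit):
        # Acknowledge/commit/roll back: processing done, unblock the unit.
        self.locked.discard(unit)

q = UnitOfOrderQueue()
q.send("orders", "create")
q.send("orders", "update")
first = q.receive("orders")      # first message of the unit
blocked = q.receive("orders")    # None: unit is locked until completion
q.complete("orders")
second = q.receive("orders")     # next message, still in order
```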
20080271050 | Alternately Processing Messages - Among other things, processing an incoming message stream includes storing context data of an application in a global database. Various messages from the incoming message stream are placed in an in-memory message queue. One of at least a first and a second phases at a first process is executed, and another of the at least first and second phases at a second process is also executed, so as to alternately execute a first phase and a second phase by a first process and a second process. The first phase includes processing at least one message from the various messages and storing at least one corresponding result in a local memory area. The first phase also includes storing at least one modification to the context data in the local memory area. The second phase includes performing a transaction of the at least one result and the at least one modification of the context data to the global database and committing the transaction. | 10-30-2008 |
20080282259 | DECLARATIVE CONCURRENCY AND COORDINATION SCHEME FOR SOFTWARE SERVICES - A method and system are provided for declaring concurrency of the execution of one or more processes. The processes may include messages and/or methods associated with services. Messages may post to a queue in a concurrency control. Instructions are executed responsive to the posting of the messages. Any number of instructions may be executed concurrently with an underlying state. The underlying state may further be declared by an attribute. The attribute may include a service handler and may further indicate a type of concurrency control, for example, concurrent, exclusive or teardown. | 11-13-2008 |
20080288960 | Shortcut in reliable communication - Methods and apparatus, including computer program products, are provided for messaging. In one aspect, there is provided a computer-implemented method. The method may include initiating a call from a first application to a second application. The method may determine whether the first application is local to the second application. A call may be made as a local call from the first application to the second application, when it is determined that the first and second applications are on the same computer. A call may be made as a remote call from the first application to the second application, when it is determined that the first and second applications are on separate computers. Related apparatus, systems, methods, and articles are also described. | 11-20-2008 |
20080301708 | SHARED STORAGE FOR MULTI-THREADED ORDERED QUEUES IN AN INTERCONNECT - In one embodiment, payloads of multiple threads between intellectual property (IP) cores of an integrated circuit are transferred by buffering the payloads using a number of order queues. Each of the queues is guaranteed access to a minimum number of buffer entries that make up the queue. Each queue is assigned to a respective thread. A number of buffer entries that make up any queue is increased, above the minimum, by borrowing from a shared pool of unused buffer entries on a first-come, first-served basis. In another embodiment, an interconnect implements a content addressable memory (CAM) structure that is shared storage for a number of logical, multi-thread ordered queues that buffer requests and/or responses that are being routed between data processing elements coupled to the interconnect. Other embodiments are also described and claimed. | 12-04-2008 |
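The allocation policy in the first embodiment — a guaranteed per-thread minimum plus first-come, first-served borrowing from a shared pool — can be sketched as below. Class and method names are assumptions for illustration:

```python
class SharedQueuePool:
    """Per-thread queues with a guaranteed minimum of entries plus a shared pool."""

    def __init__(self, threads, min_entries, shared_entries):
        self.free = {t: min_entries for t in threads}  # reserved entries per thread
        self.shared = shared_entries                   # borrowable, first-come first-served
        self.queues = {t: [] for t in threads}

    def push(self, thread, payload):
        if self.free[thread] > 0:
            self.free[thread] -= 1        # use the thread's guaranteed entry
        elif self.shared > 0:
            self.shared -= 1              # borrow an entry from the shared pool
        else:
            return False                  # no buffer entry available for this thread
        self.queues[thread].append(payload)
        return True

pool = SharedQueuePool(threads=["t0", "t1"], min_entries=1, shared_entries=1)
ok1 = pool.push("t0", "a")   # uses t0's reserved entry
ok2 = pool.push("t0", "b")   # borrows the single shared entry
ok3 = pool.push("t0", "c")   # fails: t1's reserved entry cannot be taken
ok4 = pool.push("t1", "d")   # t1's guaranteed minimum is still intact
```

The key property the sketch preserves is that borrowing never starves another thread's guaranteed minimum.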
20080301709 | Queuing for thread pools using number of bytes - A method and apparatus for processing messages is described. In one embodiment, an application programming interface is configured for receiving and sending messages. A building block layer is coupled to the application programming interface. A channel layer is coupled to the building block layer. A transport protocol stack is coupled to the channel layer for implementing properties specified by the channel layer. The transport protocol stack has a concurrent stack consisting of an out of band thread pool and a regular thread pool. The transport protocol layer is to process messages from each sender in parallel with the corresponding channel for each sender. | 12-04-2008 |
20080307431 | AUTOMATIC ADJUSTMENT OF TIME A CONSUMER WAITS TO ACCESS DATA FROM QUEUE DURING A WAITING PHASE AND TRANSMISSION PHASE AT THE QUEUE - Provided are a system and article of manufacture for automatic adjustment of time a consumer waits to access data from a queue during a waiting phase and transmission phase at the queue. A determination is made as to whether a queue is in a waiting phase or a transmission phase for data in response to waiting for a waiting phase wait time when initiating an operation to access data from the queue. During the waiting phase data is not available in the queue for reading. An incremental wait time is waited in response to determining that the queue is in the waiting phase. A determination is made as to whether the queue is in the waiting phase or the transmission phase in response to waiting the incremental wait time. A total wait time is recorded in response to determining that the queue is in the transmission phase. The at least one recorded total wait time is used to determine the waiting phase wait time to use when initiating the operation to access data from the queue. | 12-11-2008 |
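The feedback loop this abstract describes — an initial waiting-phase wait, incremental waits while the queue is still waiting, and recorded total wait times feeding back into the next initial wait — can be sketched as below. Function and parameter names are illustrative assumptions; here the next initial wait is simply the average of recorded totals:

```python
def access_queue(is_transmitting, initial_wait, increment, history):
    """Wait for the queue to enter its transmission phase, adaptively.

    is_transmitting(elapsed) reports whether data is available after
    `elapsed` time units; a real system would sleep instead of counting.
    """
    elapsed = initial_wait                 # waiting-phase wait time
    while not is_transmitting(elapsed):
        elapsed += increment               # incremental wait while still waiting
    history.append(elapsed)                # record the total wait time
    # Use the recorded totals to choose the next initial wait.
    return sum(history) / len(history)

history = []
# Simulated queue: data becomes available 10 time units after the request.
next_wait = access_queue(lambda t: t >= 10, initial_wait=2,
                         increment=1, history=history)
```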
20080320492 | APPARATUS AND METHODS USING INTELLIGENT WAKE MECHANISMS - An embodiment of the present invention provides a network interface card (NIC), comprising an intelligent wake mechanism and a device driver associated with the intelligent wake mechanism and configured to agree with embedded software on a set of wake codes and wake behaviors associated with the wake codes such that when the NIC encounters a wake event, the NIC first adds the wake code to a command queue, then it drives a PME pin to high to wake a device connected to the NIC. | 12-25-2008 |
20090025011 | INTER-PROCESS COMMUNICATION AT A MOBILE DEVICE - Communication between interfaces to remotely executed applications, i.e., Inter-Process Communication, may be enabled at a wireless device through the association of a message stored in an outbound message queue with an indication of a local interface to a remote application, to which interface the message is to be passed. A manager of the outbound message queue may determine whether a given outbound message is associated with a local interface to a remote application and, if so, may pass the given outbound message to an inbound processing module such that the message is received by the specified local interface to a remote application. | 01-22-2009 |
20090037931 | Method and Apparatus for a Dynamic and Real-Time Configurable Software Architecture for Manufacturing Personalization - A process receives a personalization request to personalize a communication device. Further, the process provides the personalization request to a message controller that composes a message having personalization information with a message composer engine according to a set of rules and configures one or more communication parameters for the message with a message flow control engine according to the set of rules. The set of rules indicates a distributed environment set of files that the message composer engine and the message flow control engine utilize in a distributed environment, and a centralized environment set of files that the message composer engine and the message flow control engine utilize in a centralized environment. | 02-05-2009 |
20090044200 | METHOD AND SYSTEM FOR ASYNCHRONOUS THREAD RESPONSE DETECTION IN AN ELECTRONIC COMMUNICATION SYSTEM - A system for asynchronous thread response detection in an electronic communication system. Alert messages are provided to a user who is in the process of responding to a message in a message thread. The alert messages are provided when there are new messages in the thread, including messages in the thread that were received before the user started responding, and/or messages that were received while the user was composing the response. The disclosed system is advantageous in that the responding user does not necessarily have to manually check for new thread messages prior to or while they are composing a response to a message within the thread. | 02-12-2009 |
20090049454 | Securing inter-process communication - A request to post a message to a destination is intercepted in an operating environment in which processes communicate via message queues. Message content and requester information associated with the request is evaluated to determine whether the message is to be posted. The message is posted to a message queue of the destination if the message is to be posted. | 02-19-2009 |
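The interception step — evaluate message content and requester before allowing the post — can be sketched as a wrapper around queue posting. The policy function and names are hypothetical, illustrating only the evaluate-then-post flow:

```python
def make_secure_post(evaluate, queues):
    """Wrap message posting with an interception check.

    evaluate(message, requester) returns True if the post is allowed.
    """
    def post(destination, message, requester):
        if not evaluate(message, requester):
            return False                     # request denied: message is not posted
        queues.setdefault(destination, []).append(message)
        return True
    return post

queues = {}
# Illustrative policy: only "trusted" requesters may post.
post = make_secure_post(lambda msg, who: who == "trusted", queues)
allowed = post("svc", "hello", "trusted")
denied = post("svc", "payload", "untrusted")
```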
20090064182 | Systems and/or methods for providing feature-rich proprietary and standards-based triggers via a trigger subsystem - The example embodiments disclosed herein relate to application integration techniques and, more particularly, to application integration techniques built around the publish-and-subscribe model (or one of its variants). In certain example embodiments, triggers are provided for establishing subscriptions to publishable document types and for specifying the services that will process documents received by the subscription. A standards-based messaging protocol (e.g., JMS messaging) may be fully embedded as a peer to a proprietary messaging protocol provided to an integration server's trigger subsystem so that all or substantially all of the feature-rich capabilities available via the proprietary protocol may also become available via the standards-based messaging protocol. The triggers may be JMS triggers in certain example embodiments. | 03-05-2009 |
20090064183 | Secure Inter-Module Communication Mechanism - Methods, apparatuses, and systems directed to facilitating secure, structured interactions between code modules executing within the context of a document processed by a user agent, such as a browser client, that implements a domain security model. In a particular implementation, a module connector script or object loaded into a base document discovers listener modules and sender modules corresponding to different origins or domains, and passes information between them. In this manner, a listener module may consume and use information from a sender module located on the same page simply by having an end-user add both modules to a web page without having to explicitly define any form of interconnection. For example, a photo module may access a user account at a remote photo sharing site, and provide one or more photos to a module that renders the photographs in a slide show. | 03-05-2009 |
20090070779 | MINIMIZING MESSAGE FLOW WAIT TIME FOR MANAGEMENT USER EXITS IN A MESSAGE BROKER APPLICATION - A method for minimizing the message flow wait time for management user exits in a message broker application. A message broker application processes a request in a request and a response message flow. The request message flow generates a request identifier, collects information about the request message flow, and stores the request identifier and information in a global data map. The response message flow uses the request identifier to access the map and read the collected information without having to acquire a lock on the map. The response message flow also collects information about the response message flow, and generates management information about the request based on the information about the request message flow and the information about the response message flow. A dedicated clean up thread in the message broker application is used to remove used items from the global data map. | 03-12-2009 |
20090070780 | ENHANCED BROWSING OF MESSAGES IN A MESSAGE QUEUE - Arrangements for enhancing browsing of messages in a message queue are disclosed. Embodiments include hardware and/or software for tracking records browsed by one or more agents. The agents can collect, process, and/or re-format data for an upperware application, a data warehouse, and/or similar systems. When the agent sets up communications with a queue, the agent may generate an attribute setting that instructs the middleware to track the last record browsed and/or the next record to browse. In response to setting the attribute, an agent identification (AID) can be utilized to record the current record number, row number, queue identifier, and/or the like in a database. When the agent re-establishes communication with the middleware queue, the middleware can retrieve the current record number utilizing the AID. | 03-12-2009 |
20090077567 | Adaptive Low Latency Receive Queues - A receive queue provided in a computer system holds work completion information and message data together. An InfiniBand hardware adapter sends a single CQE+ message data to the computer system that includes the completion information and data. This information is sufficient for the computer system to receive and process the data message, thereby providing a highly scalable low latency receiving mechanism. | 03-19-2009 |
20090077568 | System and method for adjusting message hold time - A system and a method for adjusting message hold time are provided, to solve the problem that message hold time is invariable and cannot be changed. The system adjusts the message hold time according to the difference in browsing speeds of a user in consecutive operations, so that the user can operate more comfortably after the message hold time is adjusted. | 03-19-2009 |
20090083761 | MULTIPLE AND MULTI-PART MESSAGE METHODS AND SYSTEMS FOR HANDLING ELECTRONIC MESSAGE CONTENT FOR ELECTRONIC COMMUNICATIONS DEVICES - Multiple and multi-part message methods and systems for handling electronic message content for electronic communications devices are presented. An exemplary method for handling electronic message content for an electronic communications device includes: receiving a first electronic message that includes default message content at the communications device; receiving a second electronic message that includes alternate message content at the communications device; determining at the communications device whether the first received message indicates availability of the alternate message content; and if the first received message indicates availability of the alternate message content, automatically providing the alternate message content of the second received message instead of the default message content of the first received message, in response to a user using the communications device to open the first received message indicating availability of the alternate message content or in response to the user using the communications device to open the second received message. | 03-26-2009 |
20090083762 | Dynamically mapping an action of a message - A system and method for dynamically mapping an action of a message is disclosed. The technology initially receives a first message generated by a first Service Oriented Architecture (SOA). The first message comprises an operation which is described within the message context of the first message. It is then determined that the operation corresponds to an action of a second SOA. A second message is then generated which is compatible with the second SOA. The second message comprises metadata which describes the action of the second SOA. | 03-26-2009 |
20090089797 | SYSTEM AND METHOD FOR AUTOMATICALLY GENERATING COMPUTER CODE FOR MESSAGE FLOWS - Computer-executable code is automatically generated for a message flow in a message queuing infrastructure by determining a type of the message flow, inputting message flow parameters, and generating the computer-executable code based on the type of the message flow and the message flow parameters. The generation of code can also implement a design pattern, which is input based on the determined type of message flow. The computer-executable code can be, for example, Extended Structured Query Language (ESQL) code. The type of the message flow can identify, for example, a transformation requirement of the message flow. The transformation requirement can be, for example, one of (i) transformation from a first Extensible Markup Language (XML) message to a second XML message, (ii) transformation from an XML message to a Message Repository Manager (MRM) message, and (iii) transformation from a first MRM message to a second MRM message. | 04-02-2009 |
20090089798 | ELECTRONIC MAIL INBOX WITH FOCUSED E-MAILS ACCORDING TO CATEGORIES - Focusing electronic mail messages in a list of messages. Category information is received for classifying particular e-mail messages or senders of the messages in the list of e-mail messages according to a category. The method also includes setting a status data associated with each of the particular messages. The status data indicates the category classified by the user. A first instruction is received from the user for focusing the particular messages according to the category. The particular messages having the status data therewith in the list are focused collectively without altering a preexisting order of the messages in the list. | 04-02-2009 |
20090089799 | PROGRAMMABLE LOGIC CONTROLLER WITH QUEUE FUNCTION AND METHOD FOR THE SAME - A programmable logic controller (PLC) with queue function and method for the same receives a first input command from one of a plurality of operation ends by a command receiving/sending unit. The command receiving/sending unit judges whether the PLC is processing a command at that moment. The processor processes the first input command and sends back a reply when there is no command under processing. When the processor is processing the first input command, the command receiving/sending unit places a second input command into a command queue and gives a priority setting to the second input command. When the PLC finishes its current task, the PLC further processes a command in the queue with highest processing priority. Therefore, the PLC processor can process every command from the operation ends in a sequential manner. | 04-02-2009 |
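The queue-while-busy behavior can be sketched with a priority heap: commands arriving while a command is in progress are queued with their priority setting, and on completion the highest-priority queued command runs next. The `PLC` class below is an illustrative sketch, not the patented controller (lower numbers mean higher priority, with arrival order breaking ties):

```python
import heapq

class PLC:
    def __init__(self):
        self.busy = False
        self.queue = []            # (priority, arrival order, command) min-heap
        self.processed = []
        self._order = 0

    def receive(self, command, priority=10):
        if self.busy:
            # A command is in progress: queue the new one with its priority.
            heapq.heappush(self.queue, (priority, self._order, command))
            self._order += 1
        else:
            self.busy = True
            self.processed.append(command)   # process immediately and reply

    def finish(self):
        # Current task done: process the queued command with the
        # highest priority (lowest number) next.
        if self.queue:
            _, _, command = heapq.heappop(self.queue)
            self.processed.append(command)
        else:
            self.busy = False

plc = PLC()
plc.receive("A")                 # processed immediately
plc.receive("B", priority=5)     # queued while busy
plc.receive("C", priority=1)     # queued, higher priority than B
plc.finish()                     # processes "C" first
plc.finish()                     # then "B"
```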
20090113446 | METHOD FOR CREATING ADAPTIVE DISTRIBUTIONS - A method to dynamically create an adaptive distribution list through an application of a combination of mathematical, logical and/or programmable operations to existing static distribution lists or user directories. This list is created as part of the information message sent to the entries on the distribution list. In this invention, the user or sender does not need to interface with the Group creation modification tool. Another feature of the invention is that the newly created distribution lists can be temporarily or permanently saved as designed by the sender. This invention eliminates the need to separately create distribution lists and then send messages to the entries on the distribution list. | 04-30-2009 |
20090113447 | CLIENT-SIDE SELECTION OF A SERVER - A method, system, and computer program product for performing network device management and client load distribution to a number of Common Information Model Object Manager (CIMOM) servers via a network path. A client-side server selection (CSS) utility allows a client to choose the ideal server to fulfill a CIM request message. The client transmits the CIM request message to the CIMOM server based on service response time information utilized by the CSS utility. The CIM request message is forwarded to a CIM provider for processing. The provider returns a CIM response message to the CIMOM and a service response time is generated. Thereafter, the CIMOM returns the CIM response message to the client. At a preset time period, a Service Location Protocol (SLP) advertise generation facility initiates a multicast of the service response time information (from all network CIMOM servers) to the CSS utility. | 04-30-2009 |
20090113448 | SATISFYING A REQUEST FOR AN ACTION IN A VIRTUAL WORLD - A method for satisfying a request for an action in a virtual world includes permitting a user to request a first action for an avatar in the virtual world, wherein the avatar corresponds to the user. The method may also include determining if the first action is unavailable for the user's avatar at the time of the request. The method may additionally include permitting a user's avatar to perform another action while the first action is unavailable for the user's avatar. The method may yet additionally include determining if the first action becomes available for the user's avatar. The method may further include notifying the user that the first action is available for the user's avatar in response to the first action being determined to be available. The method may yet further include allowing the user to accept the first action. And the method may include allowing the user's avatar to perform the first action in response to the user accepting the first action. | 04-30-2009 |
20090113449 | METHOD AND SYSTEM FOR CREATING AND PROCESSING DYNAMIC PROXY ACTIONS FOR ACTIONS THAT ARE NOT YET REGISTERED WITH A CLIENT SIDE BROKER - A system using proxy actions to handle requests for actions that are not yet registered with a broker. When an action request is received and the action is not registered in the broker, a proxy action object is created and stored on a proxy action queue. Proxy action objects stored on the queue are read periodically and a check is made as to whether the actions they refer to have been registered yet. If an action for a queued proxy action object has been registered, the action request represented by the proxy action object is delivered to the appropriate service provider component. If a proxy action object remains on the proxy action queue without the corresponding action being registered before a corresponding proxy action queue element lifetime timer expires, the proxy action object is removed from the proxy action queue without the action being performed. | 04-30-2009 |
20090119680 | SYSTEM AND ARTICLE OF MANUFACTURE FOR DUPLICATE MESSAGE ELIMINATION DURING RECOVERY WHEN MULTIPLE THREADS ARE DELIVERING MESSAGES FROM A MESSAGE STORE TO A DESTINATION QUEUE - Provided are a system and article of manufacture for duplicate message elimination during recovery when multiple threads are delivering messages from a message store to a destination queue. A plurality of message threads process operations to deliver messages from a message store to a destination queue, wherein one message thread processes one message. An in-doubt list is generated identifying messages that are in-progress of being delivered from the message store to the destination queue by the message threads. One message thread processing one message adds an entry including the message identifier and the thread identifier to a monitor queue. The message thread further adds the message to the destination queue. A recovery thread is generated in response to detecting a failure in the processing by the threads to deliver the messages from the message store to the destination queue. The recovery thread processes the messages indicated in the in-doubt list and compares with message identifiers in the monitor queue to prevent duplicate delivery of messages to the destination queue. | 05-07-2009 |
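The recovery step — redeliver an in-doubt message only if its identifier is absent from the monitor queue — can be sketched in a few lines. Names and data shapes are illustrative assumptions:

```python
def recover(in_doubt, monitor_queue, destination):
    """Redeliver only the in-doubt messages not already on the destination queue.

    monitor_queue holds (message_id, thread_id) entries that delivery
    threads add just before appending a message to the destination queue.
    """
    delivered_ids = {msg_id for msg_id, _thread in monitor_queue}
    for msg_id, payload in in_doubt:
        if msg_id not in delivered_ids:       # not yet delivered: safe to deliver
            destination.append((msg_id, payload))

destination = [("m1", "hello")]
monitor_queue = [("m1", "t1")]                 # thread t1 got as far as delivering m1
in_doubt = [("m1", "hello"), ("m2", "world")]  # both were in progress at the failure
recover(in_doubt, monitor_queue, destination)
# m1 is skipped (no duplicate); only m2 is delivered during recovery.
```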
20090125915 | System and method for displaying movable message block - A system and a method for displaying movable message blocks are provided, which address the lack of a user-friendly message block display interface when a user logs in to a message website. By means of moving message blocks to be displayed in a first display layer and a second display layer and determining whether received operation identification information and a received trigger event are consistent with one of the parameters of the message blocks, the message blocks consistent with the operation identification information and the trigger event are moved to the first display layer, and the other message blocks are moved to the second display layer, thereby making browsing and operating the message blocks on the message website more convenient and visual. | 05-14-2009 |
20090133036 | COORDINATING RESOURCES USING A VOLATILE NETWORK INTERMEDIARY - The present invention extends to methods, systems, and computer program products for coordinating resources using a volatile network intermediary. Embodiments provide a mechanism for a network intermediary to facilitate a state coordination pattern between an application and a communication medium when the communication medium does not support the state coordination pattern. In some embodiments, receiving applications can make use of this network intermediary by changing the receive location. However, the receiving application may not be able to distinguish the network intermediary from a native implementation of the state coordination pattern. Further, the network intermediary does not require deployment of a persistent or durable store to coordinate state between receiving applications and the original communication medium. | 05-21-2009 |
20090133037 | COORDINATING APPLICATION STATE AND COMMUNICATION MEDIUM STATE - The present invention extends to methods, systems, and computer program products for coordinating application state and communication medium state. Embodiments of the present invention provide a mechanism for a communication medium to provide a view of message content for a message (a peek) to an application along with the communication medium preventing further access to the message (a lock) until the application signals back how to handle the message. Thus, the communication medium indicates that the message is locked for the duration of processing at the application. Indicating that the message is locked significantly reduces the chance of the message being provided to another application (or another consumer of the same application) during the time the application is processing the view of message content. | 05-21-2009 |
20090133038 | DISTRIBUTED MESSAGING SYSTEM WITH CONFIGURABLE ASSURANCES - The present invention extends to methods, systems, and computer program products for configuring assurances within distributed messaging systems. A defined set of message log and cursor components are configurably activatable and deactivatable to compose a variety of different capture assurances, transfer assurances, and delivery assurances within a distributed messaging system. A composition of a capture assurance, a transfer assurance, and a delivery assurance can provide an end-to-end assurance for a messaging system. End-to-end assurances can include one of best effort, at-most-once, at-least-once, and exactly once and can include one of: durable or non-durable. Using a defined set of activatable and deactivatable message log and cursor components facilitates more efficient transitions between desired assurances. In some embodiments, a composition of a capture assurance, a transfer assurance, and a delivery assurance provides durable exactly-once message delivery. | 05-21-2009 |
20090133039 | DURABLE EXACTLY ONCE MESSAGE DELIVERY AT SCALE - The present invention extends to methods, systems, and computer program products for durable exactly once message delivery at scale. A message capture system uses a synchronous capture channel and transactions to provide durable exactly once message capture. Messages are sent from the message capture system to a message delivery system over a network using an at least once transfer protocol. The message delivery system implements a durable at most once messaging behavior, the combination of which results in durable exactly once transfer of messages from the message capture system to the message delivery system. The message delivery system uses a synchronous delivery channel and transactions to provide durable exactly once message delivery. Cursors maintaining message consumer state are collocated with message consumers, freeing up message log resources to process increased volumes of messages, such as, for example, in a queued or pub/sub environment. | 05-21-2009 |
20090165020 | COMMAND QUEUING FOR NEXT OPERATIONS OF MEMORY DEVICES - Systems and/or methods that facilitate transferring data between a processor component and memory components are presented. A transfer controller component facilitates controlling data transfers in part by receiving respective subsets of data from respective memory components and arranging the respective subsets of data based in part on a desired predefined data order. The processor component generates a transfer map that includes information to facilitate arranging data in a predefined order. The processor component generates respective subsets of commands that are provided to queue components in respective memory components to retrieve desired data from the respective memory components. Each memory component services the commands in its queue component in an independent and parallel manner, and transfers the data retrieved from memory to the transfer controller component, which can arrange the received data in a predefined order for transfer to the processor component. | 06-25-2009 |
20090165021 | Model-Based Composite Application Platform - Embodiments provide an architecture to enable autonomous composite applications and services to be built and deployed. In addition, an infrastructure is provided to enable communication between and amongst distributed applications and services. In one or more embodiments, an example architecture includes or otherwise leverages five logical modules including connectivity services, process services, identity services, lifecycle services and tools. | 06-25-2009 |
20090172695 | Service Bus Architecture - In embodiments, an implementation of a service oriented architecture is provided including an application service bus capable of approximating point-to-point performance by reducing the format transformation of application messages by way of relaying them in a native format when the message format of a consumer application and/or service provider application is supported by the service bus. Preferably, the service bus is capable of supporting multiple message formats and transport protocols and comprises a plurality of components including a Service Initiator module, a Service Terminus module, a Service Locator module, and a Transport module. The service bus provides logical isolation between a consumer application and a provider application by exposing a set of interfaces for relaying service request and service response messages between the applications. | 07-02-2009 |
20090193431 | PROCESSING OF MTOM MESSAGES - A method and system for processing MTOM messages comprising a root document and one or more binary attachments referenced by the root document, in a Web service requester or provider. When an inbound MTOM message is received, a pipeline comprising a plurality of message handlers is selected to process the received message. The message is unpackaged by separating the binary attachments from the root document, and the pipeline properties are checked to determine if conversion of the message is required by at least one message handler. Responsive to the result of the determination, either conversion of the message is carried out, by encoding the binary data in each of the attachments and replacing each reference in the root document to a binary attachment with the encoded data for that attachment, and processing the converted message by the pipeline, or the root document and binary attachments are processed by the pipeline. | 07-30-2009 |
20090199207 | PRIORITY MESSAGING AND PRIORITY SCHEDULING - Systems and methods that set priority levels for messaging sessions initiated between end points (e.g., two SQL point services) through service brokers. A priority component can apply priority at a session level to add priority capabilities on top of service brokers, and enable setting priority for all the messages in a session or conversation. Such priority can further affect the order in which messages from different conversations are sent and the order in which they are received. | 08-06-2009 |
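The session-level priority scheme described above can be sketched in a few lines of Python. This is a minimal illustration, not the patented implementation: all class and method names (`PriorityBroker`, `begin_conversation`, and the default priority of 5) are hypothetical, and a heap stands in for the broker's scheduling machinery. Every message sent on a conversation inherits the priority assigned to that conversation, which determines cross-conversation delivery order.

```python
import heapq
import itertools

class PriorityBroker:
    """Sketch of session-level priority dispatch: each message inherits
    the priority assigned to its conversation (lower value = delivered first)."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker preserves FIFO within a priority
        self._priorities = {}          # conversation id -> priority

    def begin_conversation(self, conv_id, priority):
        self._priorities[conv_id] = priority

    def send(self, conv_id, payload):
        prio = self._priorities.get(conv_id, 5)  # assumed default priority
        heapq.heappush(self._heap, (prio, next(self._seq), conv_id, payload))

    def receive(self):
        if not self._heap:
            return None
        _, _, conv_id, payload = heapq.heappop(self._heap)
        return conv_id, payload
```

A message sent later on a higher-priority conversation is received before earlier messages from lower-priority conversations, matching the abstract's claim that priority affects both send and receive order across conversations.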
20090199208 | QUEUED MESSAGE DISPATCH - Embodiments described herein allow a service component author to write service components without having to handle incoming messages being received at any time. This may be facilitated by a message dispatch engine that dispatches messages from the incoming message queue only when the destination service component has indicated that it is ready to receive the message having that context. If the service component is not yet ready for the message, the message dispatch component may lock the message at least until the destination service component indicates that it is now ready to receive the message. Until that time, the message dispatch engine may ignore the locked message when finding messages to dispatch. | 08-06-2009 |
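The dispatch behavior described in this entry — holding back (locking) messages until the destination component signals readiness for their context — can be sketched as follows. This is an illustrative model under assumed names (`DispatchEngine`, `mark_ready`), not the patented engine; real systems would do this with durable queues and visibility timeouts rather than in-memory lists.

```python
from collections import deque

class DispatchEngine:
    """Messages are dispatched only when the destination service component
    has declared readiness for their context; otherwise they are locked
    and skipped on subsequent dispatch scans."""

    def __init__(self):
        self._queue = deque()
        self._ready = set()   # contexts the service component is ready for
        self._locked = []     # messages held back until readiness is signaled

    def enqueue(self, context, payload):
        self._queue.append((context, payload))

    def mark_ready(self, context):
        self._ready.add(context)
        # previously locked messages for this context become eligible again
        self._queue.extend(m for m in self._locked if m[0] == context)
        self._locked = [m for m in self._locked if m[0] != context]

    def dispatch_next(self):
        while self._queue:
            context, payload = self._queue.popleft()
            if context in self._ready:
                return context, payload
            self._locked.append((context, payload))  # lock and ignore for now
        return None
```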
20090199209 | Mechanism for Guaranteeing Delivery of Multi-Packet GSM Message - A target task ensures complete delivery of a global shared memory (GSM) message from an originating task to the target task. The target task's HFI receives a first of multiple GSM packets generated from a single GSM message sent from the originating task. The HFI logic assigns a sequence number and corresponding tuple to track receipt of the complete GSM message. The sequence number is unique relative to other sequence numbers assigned to GSM messages that have not been completely received from the initiating task. The HFI updates a count value within the tuple, which comprises the sequence number and the count value for the first GSM packet and for each subsequent GSM packet received for the GSM message. The HFI determines when receipt of the GSM message is complete by comparing the count value with a count total retrieved from the packet header. | 08-06-2009 |
20090204974 | METHOD AND SYSTEM OF PREVENTING SILENT DATA CORRUPTION - A method and system of avoiding silent data corruption in a request-response messaging system where a requester relies on tags to match request messages with response messages. The silent data corruption occurs if the requester processes a response message after a tag used with the response message was reused with another request message. | 08-13-2009 |
20090204975 | PERFORMANCE INDICATOR FOR MEASURING RESPONSIVENESS OF USER INTERFACE APPLICATIONS TO USER INPUT - A method for measuring application responsiveness measures the time elapsed between receiving and processing a trailing tag message inserted into the application's message queue. The method receives a message, generates a trailing tag message associated with the message, and inserts the trailing tag message into the application's message queue. The trailing tag message includes a timestamp indicating when the trailing tag message was queued. A default message handler calculates the time elapsed between when the trailing tag message was queued and when the trailing tag message was processed. The elapsed time may then be used to calculate system responsiveness. | 08-13-2009 |
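The trailing-tag technique above can be sketched in Python: after each real message a timestamped tag is appended to the queue, and when the pump reaches the tag, the elapsed time since the tag was queued approximates how long the preceding work took to drain. The class and method names (`ResponsivenessMonitor`, `post`, `pump`) are illustrative, not from the patent.

```python
import time
from collections import deque

class ResponsivenessMonitor:
    """After each message, enqueue a timestamped trailing tag; the time
    between queuing and processing the tag measures pump responsiveness."""

    def __init__(self):
        self.queue = deque()
        self.samples = []  # elapsed-time samples, one per trailing tag

    def post(self, message, handler):
        self.queue.append(("msg", message, handler))
        self.queue.append(("tag", time.monotonic(), None))  # trailing tag

    def pump(self):
        while self.queue:
            kind, payload, handler = self.queue.popleft()
            if kind == "msg":
                handler(payload)
            else:
                # default handler: elapsed time since the tag was queued
                self.samples.append(time.monotonic() - payload)
```

Because the tag sits behind the message in the queue, each sample bounds the processing time of the message it trails, without instrumenting the handler itself.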
20090217294 | SINGLE PROGRAM CALL MESSAGE RETRIEVAL - Embodiments of the present invention provide a method, system and computer program product for single program code message retrieval for message queues. In an embodiment of the invention, a message queue data processing system can be configured for single program code message retrieval for message queues. The system can include a message queue executing in a host server and providing an API to applications communicatively coupled to the message queue over a computer communications network. The API exposed by the message queue can include a single program call including program code enabled to open a queuing resource in the message queue, to retrieve all messages in a message buffer from the queuing resource and to close the queuing resource. | 08-27-2009 |
20090222838 | TECHNIQUES FOR DYNAMIC CONTACT INFORMATION - Techniques involving contact information are disclosed. For example, an apparatus may include a contact entry generation module and an entry updating module. The contact entry generation module creates a contact entry having a location-specific information field. The entry updating module obtains an update for the location-specific information field from a remote device. This update corresponds to a current location. | 09-03-2009 |
20090235278 | Method for tracking and/or verifying message passing in a simulation environment - A message tracking and verifying system for verifying the correctness of messages being passed may comprise a tracking module for tracking a request message and a verifying module for verifying a response message. The tracking module may be configured to store a calculated source address and a calculated response address range. The verifying module may be configured to obtain an actual source address from the response message and an actual response address range for the response message. The correctness of the response message is determined based on the comparison of the calculated source address with the actual source address and the comparison of the calculated response address range with the actual response address range. | 09-17-2009 |
20090249356 | LOCK-FREE CIRCULAR QUEUE IN A MULTIPROCESSING SYSTEM - Lock-free circular queues relying only on atomic aligned read/write accesses in multiprocessing systems are disclosed. In one embodiment, when comparison between a queue tail index and each queue head index indicates that there is sufficient room available in a circular queue for at least one more queue entry, a single producer thread is permitted to perform an atomic aligned write operation to the circular queue and then to update the queue tail index. Otherwise an enqueue access for the single producer thread would be denied. When a comparison between the queue tail index and a particular queue head index indicates that the circular queue contains at least one valid queue entry, a corresponding consumer thread may be permitted to perform an atomic aligned read operation from the circular queue and then to update that particular queue head index. Otherwise a dequeue access for the corresponding consumer thread would be denied. | 10-01-2009 |
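The index-comparison logic of the circular queue above can be sketched for the single-producer/single-consumer case. This is a simplified model: Python assignments stand in for the atomic aligned reads/writes the patent relies on, and the multi-head (multi-consumer) generalization is omitted. The class name `SPSCRing` is illustrative.

```python
class SPSCRing:
    """Single-producer / single-consumer circular queue sketch.
    One slot is sacrificed so that head == tail unambiguously means empty."""

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.cap = capacity
        self.head = 0  # next slot to read (updated only by the consumer)
        self.tail = 0  # next slot to write (updated only by the producer)

    def enqueue(self, item):
        # full when advancing the tail would collide with the head
        if (self.tail + 1) % self.cap == self.head:
            return False  # enqueue access denied: no room
        self.buf[self.tail] = item
        self.tail = (self.tail + 1) % self.cap  # publish only after the write
        return True

    def dequeue(self):
        if self.head == self.tail:
            return None  # dequeue access denied: no valid entry
        item = self.buf[self.head]
        self.head = (self.head + 1) % self.cap
        return item
```

The key design point mirrors the abstract: the producer only writes the tail index and the consumer only writes the head index, so with atomic aligned accesses neither side needs a lock.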
20090249357 | SYSTEMS AND METHODS FOR INTER PROCESS COMMUNICATION BASED ON QUEUES - A method of data communication between a first virtual machine and a second virtual machine is disclosed. The second virtual machine is executing in a record/replay mode. The method includes copying data from the first virtual machine to a first queue. The first queue is configured to receive the data from the first virtual machine. The first queue has a first queue header section and a first queue data section. The first queue header section is write protected and configured to store a tail pointer of the data in the first queue. The tail pointer is updated in the first queue header section. This update of the tail pointer causes a page fault. The method further includes handling the page fault through a page fault handler. The handling includes copying the data from the first queue to a second queue. The second queue is configured to receive a copy of the data and to allow the second virtual machine to access the copy of the data. | 10-01-2009 |
20090254920 | Extended dynamic optimization of connection establishment and message progress processing in a multi-fabric message passing interface implementation - In one embodiment, the present invention includes a system that can optimize message passing by, at least in part, automatically determining a minimum number of fabrics and virtual channels to be activated to handle pending connection requests and data transfer requests, and preventing processing of new connection requests and data transfer requests outside of a predetermined communication pattern. Other embodiments are described and claimed. | 10-08-2009 |
20090282419 | Ordered And Unordered Network-Addressed Message Control With Embedded DMA Commands For A Network On Chip - Data processing on a network on chip (‘NOC’) that includes integrated processor (‘IP’) blocks, routers, memory communications controllers, network interface controllers, and network-addressed message controllers, with each IP block adapted to a router through a memory communications controller, a network-addressed message controller, and a network interface controller, where each memory communications controller controls communications between an IP block and memory and each network interface controller controls inter-IP block communications through routers, with each IP block also adapted to the network by a low latency, high bandwidth application messaging interconnect comprising an inbox and an outbox. | 11-12-2009 |
20090282420 | APPLICATION LINKAGE CONTROL APPARATUS AND APPLICATION LINKAGE CONTROL METHOD - When a message is transmitted from a storing application of a process requesting server, a message queuing server stores the message in a queue. When storing the message in the queue, the message queuing server transmits information regarding this message to an extracting application of a process performing server, thereby controlling a linkage operation between the storing application of the process requesting server and the extracting application of the process performing server. | 11-12-2009 |
20090288101 | SERVICE EXCEPTION RESOLUTION FRAMEWORK - A service exception resolution framework provides a centralized exception handling console (EHC) used to reprocess unfulfilled service requests that have resulted in service request exceptions. The EHC allows an operator to analyze multiple service request exceptions simultaneously from disparate applications and domains. The framework greatly reduces the time, cost, and resource expenditures needed to analyze and resolve service request exceptions and reprocess service requests regardless of the applications and domains from which the service request exceptions result. | 11-19-2009 |
20090300652 | QUEUE DISPATCH USING DEFERRED ACKNOWLEDGEMENT - Dispatching an incoming message from a queue into message transfer session(s) from which message consumers may draw messages. The message is reversibly received from the queue, whereupon a context of a message is identified. If the context correlates to an existing message transfer session, the message is ultimately assigned to a message transfer session. If the context does not correlate to an existing message transfer session, a new message transfer session is created, and the message is assigned to that new message transfer session. Upon receiving an acknowledgement of successful processing of the message, the removal of the message from the queue-like communication medium is assured. Upon receiving an acknowledgement of unsuccessful processing of the message, the message is restored to the queue-like communication medium. | 12-03-2009 |
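The deferred-acknowledgement flow in this entry — reversibly receiving a message into a context-keyed transfer session, then either finalizing removal on success or restoring the message on failure — can be modeled as below. The names (`SessionDispatcher`, `pump`, `acknowledge`) are hypothetical, and plain lists stand in for the queue-like communication medium.

```python
class SessionDispatcher:
    """Reversibly receive messages from a queue into per-context transfer
    sessions; a success ack finalizes removal, a failure ack restores the
    message to the communication medium."""

    def __init__(self, queue):
        self.queue = queue    # list of (context, payload) pairs
        self.sessions = {}    # context -> in-flight messages for that session

    def pump(self):
        while self.queue:
            context, payload = self.queue.pop(0)  # reversible receive
            # correlate to an existing session, or create a new one
            self.sessions.setdefault(context, []).append(payload)

    def acknowledge(self, context, payload, success):
        self.sessions[context].remove(payload)
        if not success:
            self.queue.append((context, payload))  # restore to the medium
```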
20090307714 | NETWORK ON CHIP WITH AN I/O ACCELERATOR - Data processing on a network on chip (‘NOC’) that includes IP blocks, routers, memory communications controllers, and network interface controllers; each IP block adapted to a router through a memory communications controller and a network interface controller; each memory communications controller controlling communication between an IP block and memory; each network interface controller controlling inter-IP block communications through routers; each IP block adapted to the network by a low latency, high bandwidth application messaging interconnect comprising an inbox and an outbox; a computer software application segmented into stages, each stage comprising a flexibly configurable module of computer program instructions identified by a stage ID with each stage executing on a thread of execution on an IP block; and at least one of the IP blocks comprising an input/output (‘I/O’) accelerator that administers at least some data communications traffic to and from the at least one IP block. | 12-10-2009 |
20090313637 | METHOD AND SYSTEM FOR PREFERENTIAL REPLY ROUTING - A method for preferential reply routing, the method includes: receiving a request-reply message from a requesting application; detecting there is a preferred partition of a reply queue managed locally to an application server to which the requesting application is connected; qualifying a name of a reply queue stored in the request-reply message so that the name refers to the local partition that is managed locally to the application server; determining whether the local partition is available; wherein in the event the local partition is available: storing a reply message in the local partition; and retrieving the reply message from the local partition in response to the requesting application. | 12-17-2009 |
20090313638 | Correlated message identifiers for events - A message identifier of a first event is provided to a correlation engine. The correlation engine is to correlate the first event to one or more second events according to a predetermined correlation technique. The message identifiers of the second events are received from the correlation engine. A correlated message identifier for the first event is generated based on the message identifier of the first event and on the message identifiers of the second events. The correlated message identifier for the first event is output. | 12-17-2009 |
20090320044 | Peek and Lock Using Queue Partitioning - A queue management system may store a queue of messages in a main queue. When a message is processed by an application, the message may be moved to a subqueue. In the subqueue, the message may be locked from other applications. After processing the message, the application may delete the message from the subqueue and complete the action required. If the application fails to respond in a timely manner, the message may be moved from the subqueue to the main queue and released for another application to service the message. If the application responds after the time out period, a fault may occur when the application attempts to delete the message from the subqueue. Such an arrangement allows a “peek and lock” functionality to be implemented using a subqueue. | 12-24-2009 |
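The "peek and lock" arrangement above can be sketched directly: processing moves a message from the main queue into a locked subqueue, a timed-out lock returns the message to the main queue for another consumer, and a late delete faults because the message is no longer in the subqueue. This is an illustrative in-memory model; the names (`PeekLockQueue`, `peek_lock`) and the use of `KeyError` as the fault are assumptions, not the patent's API.

```python
import time

class PeekLockQueue:
    """Peek-and-lock via a subqueue: locked messages live in the subqueue
    until deleted or until their lock times out and they return to main."""

    def __init__(self, timeout):
        self.main = []
        self.sub = {}          # message -> lock expiry time
        self.timeout = timeout

    def peek_lock(self):
        self._expire()
        if not self.main:
            return None
        msg = self.main.pop(0)
        self.sub[msg] = time.monotonic() + self.timeout
        return msg

    def delete(self, msg):
        self._expire()
        if msg not in self.sub:
            # late delete: lock expired and the message was released
            raise KeyError("lock lost: message returned to main queue")
        del self.sub[msg]

    def _expire(self):
        now = time.monotonic()
        for msg, expiry in list(self.sub.items()):
            if expiry <= now:
                del self.sub[msg]
                self.main.append(msg)  # release for another application
```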
20100031272 | SYSTEM AND METHOD FOR LOOSE ORDERING WRITE COMPLETION FOR PCI EXPRESS - A method for managing the protocol of read/write messages in a PCI Express communication link is disclosed. The method comprises maintaining queues of write requests and read requests associated with each of a plurality of request identifications that are contained in a message header, wherein the read requests associated with a request identification are held in abeyance until such time that write requests associated with the request identification are completed. | 02-04-2010 |
20100058357 | SCOPING AN ALIAS TO A SUBSET OF QUEUE PARTITIONS - A method of performing message operations includes receiving a message operation request identifying a queue, retrieving a list of the subset of partitions associated with the alias received in the request, and selecting at least one of the partitions within the retrieved subset. According to the method, the queue includes a plurality of partitions, the request identifies the queue with an alias, and the alias has a subset of the plurality of partitions associated therewith. | 03-04-2010 |
20100077406 | SYSTEM AND METHOD FOR PARALLELIZED REPLAY OF AN NVRAM LOG IN A STORAGE APPLIANCE - A system and method for operating a storage system is provided. A plurality of operating system transaction entries are stored in a log, and a swarm of messages with respect to the plurality of operating system transaction entries is established. The swarm of messages is delivered to an operating system of the storage system. A processor performs a parallel retrieval process for a plurality of messages in the swarm of messages by processing the plurality of messages in an arbitrary order without regard to an underlying order of the messages. | 03-25-2010 |
20100083278 | METHOD AND SYSTEM FOR AUTOMATICALLY GENERATING MESSAGE QUEUE SCRIPTS - The present invention provides a method, system and computer program product for automatically generating message queue scripts for defining one or more Websphere® Message Queue™ (WMQ) objects on one or more queue managers. A user provides parameters corresponding to the WMQ objects as input in an input parameter file. The parameters include the name of the WMQ objects and the queue managers. Further, a message queue environment consistency check is performed on the input parameter file for validating the parameters provided. The validation is performed by using a database that stores information about the message queue environment. After successful validation of the input parameter file, one or more message queue scripts are generated for defining the WMQ objects on the queue managers. Fallback scripts may also be generated for rolling back the modifications performed on the queue managers, if required at a later stage. | 04-01-2010 |
20100095307 | SELF-SYNCHRONIZING HARDWARE/SOFTWARE INTERFACE FOR MULTIMEDIA SOC DESIGN - A forced lock-step operation between a CPU (software) and the hardware is eliminated by unburdening the CPU from monitoring the hardware until it is finished with its task. This is done by providing a data/control message queue into which the CPU writes combined data/control messages and places an End tag into the queue when finished. The hardware checks the content of the message queue and starts decoding the incoming data. The hardware processes the data read from the message queue and the processed data is then written back into the message queue for use by the software. The hardware raises an interrupt signal to the CPU when reaching the End tag. Speed differences between hardware and software can be compensated for by changing the depth of the queue. | 04-15-2010 |
20100095308 | MANAGING QUEUES IN AN ASYNCHRONOUS MESSAGING SYSTEM - Managing an asynchronous messaging queue with a client computer in an asynchronous messaging system, where the client computer is programmed to store and manage the asynchronous messaging queue, includes receiving a reactive message in the asynchronous messaging queue, the reactive message including an identification of a referenced message and an action to be performed on the referenced message; and performing the action on the referenced message with the client computer if the referenced message is present in the asynchronous messaging queue. | 04-15-2010 |
20100100890 | PROVIDING SUPPLEMENTAL SEMANTICS TO A TRANSACTIONAL QUEUE MANAGER - In one embodiment, a computer system instantiates a queue manager configured to process a plurality of existing queue manager commands on messages in a message queue. The computer system instantiates a virtualized instance of the queue manager in a virtual layer associated with the queue manager in the computing system. The virtualized queue manager instance provides supplemental queue manager commands usable in addition to existing queue manager commands, such that the queue manager can be used to implement the supplemental commands without substantial modification. The computer system receives an indication that a message in a message queue is to be accessed according to a specified command provided by the instantiated virtualized queue manager instance that is not natively supported by the queue manager, and the virtualized queue manager performs the specified supplemental command as indicated by the received indication by performing one or more existing queue manager commands. | 04-22-2010 |
20100107176 | MAINTENANCE OF MESSAGE SERIALIZATION IN MULTI-QUEUE MESSAGING ENVIRONMENTS - Messages may be provided to a source queue in serialized order, each message associated with a serialization context. The messages may be buffered in the source queue until a transmission time is reached, in turn, for each buffered message. Transmission-ready messages may be sent from the source queue according to the serialized order, using the serialization context, while continuing to store existing messages that are not yet transmission-ready. A queue assignment of the serialization context may be changed to a target queue. Subsequent messages may be provided with the serialization context to the target queue for buffering therein, while remaining transmission-ready messages may be continued to be sent from the source queue. All of the existing messages from the source queue associated with the serialization context may be determined to have been sent, and the subsequent messages may begin to be sent from the target queue in serialized order, using the serialization context. | 04-29-2010 |
20100107177 | DISPATCH MECHANISM FOR COORDINATING APPLICATION AND COMMUNICATION MEDIUM STATE - The present invention extends to methods, systems, and computer program products for coordinating application state and communication medium state. Embodiments of the invention provide mechanisms by which a dispatcher can enable application code to coordinate changes in application state with the consumption of messages from a communication medium. The coordination can be automatic where the dispatcher performs the coordination, or manual, where the coordination is performed more expressly by application code. Embodiments also include mechanisms by which applications targeting an execution (e.g., continuation based) runtime may compose alternative state transitions in the application with a peek lock protocol. | 04-29-2010 |
20100122268 | COMMUNICATOR-BASED TOKEN/BUFFER MANAGEMENT FOR EAGER PROTOCOL SUPPORT IN COLLECTIVE COMMUNICATION OPERATIONS - A method, system, and computer program product for facilitating collective communication in parallel computing. A system for parallel computing includes one or more communicators. Each of the one or more communicators comprises a plurality of processes. A memory pool including one or more early arrival buffers is provided. One or more tokens are assigned to a specified communicator included in the communicators. Each of the processes comprised by the specified communicator may consume any token assigned to the specified communicator. Requesting an early arrival buffer included in the memory pool requires consuming at least one token. A collective communication operation is performed using the specified communicator. The collective communication operation is performed eagerly using early arrival buffers obtained by consuming the tokens assigned to the communicator. | 05-13-2010 |
20100162265 | System-On-A-Chip Employing A Network Of Nodes That Utilize Logical Channels And Logical Mux Channels For Communicating Messages Therebetween - An integrated circuit with an array of nodes linked by an on-chip communication network. Messages are communicated between nodes utilizing logical channels representing hardware resources at the associated nodes. A given logical channel is associated with a receiver node and a transmitter node. A set of logical channels are associated with a logical mux channel. The nodes are adapted to carry out operations utilizing a given logical mux channel associated therewith in order to identify a logical channel that is associated with the given logical mux channel and that has a predetermined ready state. In the preferred embodiment, the operations are invoked by a calling thread that is blocked in the event that no logical channel associated with the given logical mux channel has a predetermined ready state. The calling thread is then reactivated in the event that at least one logical channel associated with the given logical mux channel transitions to the predetermined ready state. Preferably, the nodes include a recirculation queue and logic that stores event messages in the recirculation queue. Each given event message provides an indication that an identified logical channel associated with an identified logical mux channel has transitioned to the predetermined ready state. The logic processes the recirculation queue to reactivate calling threads in accordance with the event messages stored therein. The operations temporarily remove the identified logical channel from the given logical mux channel such that the identified logical channel behaves like an independent logical channel. The operations that identify the logical channel associated with the given logical mux channel are fair between all logical channels that are associated with the given logical mux channel. | 06-24-2010 |
20100192161 | Lock Free Queue - A first in, first out queue uses a sequence of arrays to store elements in the queue. The arrays are constructed using a lock free queue, and within each array, a lock free mechanism may be used to enqueue and dequeue elements. Many embodiments may use atomic operations to ensure successful placement of elements in the queue, as well as remove elements from the queue. The atomic operations may be used within a loop until successful. | 07-29-2010 |
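The "sequence of arrays" structure in this entry can be sketched as a segmented FIFO: the queue grows by appending fixed-size array segments, and global head/tail indices map onto (segment, slot) pairs. This is an illustrative single-threaded model; in the patented design the slot writes and index advances would use atomic compare-and-swap operations, which plain Python assignment stands in for here. The name `SegmentedQueue` and the segment size of 4 are assumptions.

```python
class SegmentedQueue:
    """FIFO built from a linked sequence of fixed-size arrays; global
    indices are mapped to (segment, slot) pairs with divmod."""

    SEG = 4  # slots per array segment (illustrative size)

    def __init__(self):
        self._segments = [[None] * self.SEG]
        self._tail = 0  # global index of the next free slot
        self._head = 0  # global index of the next item to dequeue

    def enqueue(self, item):
        seg, slot = divmod(self._tail, self.SEG)
        if seg == len(self._segments):
            self._segments.append([None] * self.SEG)  # grow by one array
        self._segments[seg][slot] = item
        self._tail += 1

    def dequeue(self):
        if self._head == self._tail:
            return None  # queue empty
        seg, slot = divmod(self._head, self.SEG)
        item = self._segments[seg][slot]
        self._head += 1
        return item
```

Compared with a ring buffer, the segmented layout never needs a full-array copy to grow, which is why the abstract pairs it with lock-free enqueue/dequeue within each array.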
20100205612 | METHOD AND APPARATUS FOR PROCESSING PROTOCOL MESSAGES FOR MULTIPLE PROTOCOL INSTANCES - The invention includes a method and apparatus for processing protocol messages for multiple protocol instances. In one embodiment, a method for processing protocol messages includes receiving a plurality of messages for a plurality of processors where each received message is associated with one of the protocol instances, generating a processing request for each message, queuing the processing requests, and servicing the queues in a manner for arbitrating access by the queues to the processors for processing the messages. A processing request generated for a received message identifies one of the protocol instances with which the message is associated. The processing requests are queued using a plurality of queues associated with the respective plurality of protocol instances, where each processing request is queued in one of the queues associated with the one of the protocol instances with which the processing request is associated. The servicing of each queue includes reading a processing request from the queue, if the queue has at least one processing request queued therein, and causing the one of the processors with which the processing request is associated to process one of the messages associated with the protocol instance identified by the processing request. The queues may be serviced in a round-robin manner for arbitrating access by the queues to the processors, thereby enabling atomic processing of the messages. | 08-12-2010 |
20100229182 | LOG INFORMATION ISSUING DEVICE, LOG INFORMATION ISSUING METHOD, AND PROGRAM - A log information issuing device includes a priority information manager in which priority information is stored, a priority of a log message being defined in the priority information, a message queue that has a plurality of queues for storing the log message according to the priority, a message sorting processor that refers to the priority information to store the log message in the message queue, an internal resource information collector that determines a load state of an internal resource from operating information on the internal resource, a log message queue processor that takes out the log message from the message queue according to a log obtaining condition, the log obtaining condition defining, according to the load state, which log messages are taken out from the message queue, and a log processor that supplies the log message taken out by the log message queue processor as the log information. | 09-09-2010 |
20100251262 | Systems and/or methods for standards-based messaging - The example embodiments disclosed herein relate to application integration techniques built around the publish-and-subscribe model (or one of its variants). In certain example embodiments, a first standards-based messaging protocol (e.g., the JMS messaging protocol) may be used to create a trigger so that a message envelope according to a second standards-based messaging protocol (e.g., SOAP) may be communicated over the first standards-based messaging transport layer. In other words, in certain example embodiments, a trigger according to a first protocol (e.g., JMS) may have a message according to a second protocol (e.g., SOAP) associated therewith so as to enable the message to be communicated over the first protocol's transport layer. The trigger may be configured to receive a message from a web service consumer via the JMS messaging protocol and pass it to the web service stack for dispatch to the web service provider. Similarly, for a request-reply web service exchange pattern, the trigger may be configured to send the reply message from the web service provider, as returned by the web service layer, to the web service consumer via the JMS messaging protocol. | 09-30-2010 |
20100251263 | MONITORING OF DISTRIBUTED APPLICATIONS - Methods, systems, and computer-readable media are disclosed for monitoring a distributed application. A particular method identifies a plurality of components of a distributed application. The method also identifies a specific technology associated with a particular component and attaches a technology specific interceptor to the particular component based on the identified specific technology. The method includes intercepting messages that are sent by or received by the particular component using the technology specific interceptor. At least one potential work item is generated based on the intercepted messages. The method includes determining whether to schedule the at least one potential work item for execution based on a predicted impact of the at least one work potential item on performance of the distributed application. | 09-30-2010 |
20100281491 | PUBLISHER FLOW CONTROL AND BOUNDED GUARANTEED DELIVERY FOR MESSAGE QUEUES - Techniques for managing messages in computer systems are provided. In one embodiment, in response to a publisher attempting to enqueue a message in a queue, a determination is made whether a condition is satisfied. The condition is based on the current usage of the queue by the publisher. Based on whether the condition is satisfied, a decision is made whether to enqueue the message in the queue. The decision whether to enqueue the message may comprise restricting the publisher from enqueueing any more messages in the queue until the same or a different condition is satisfied. | 11-04-2010 |
20100287564 | Providing Access Control For a Destination in a Messaging System - Providing controlled access for a destination in a messaging system includes: selecting a destination for storing messages in a messaging system, one or more of the messages comprising one or more message properties; associating each of a set of message requestors with a set of message selectors; and in response to an access request for the destination from a message requestor, determining the set of said message selectors associated with the message requestor and using the identified set of message selectors to check against messages on the destination comprising a corresponding set of message properties for providing a response to the access request. | 11-11-2010 |
20100287565 | METHOD FOR MANAGING REQUESTS ASSOCIATED WITH A MESSAGE DESTINATION - A method, apparatus and/or computer program product manage a request for a message destination. A request to create a new temporary destination at a receiving computer is intercepted, and generation of the new temporary destination is suppressed. A pre-defined destination that is operable to store the message instead of the new temporary destination is selected. An identifier, which is assigned to the new temporary destination, is associated with the pre-defined destination. | 11-11-2010 |
20100306786 | SYSTEMS AND METHODS FOR NOTIFYING LISTENERS OF EVENTS - In one embodiment, systems and methods are provided for tracking events wherein an event system monitors certain areas of a system. When an event occurs in one area of the system, the event system notifies the processes listening to that area of the system of the event. | 12-02-2010 |
20100325640 | QUEUEING MESSAGES RELATED BY AFFINITY SET - In a messaging and queuing system that supports a cluster of logically associated messaging servers for controlling queues of messages, messages are processed. In response to an application program command to a first messaging server, a queue is opened, the queue having multiple instances on further messaging servers of the cluster. In response to the first messaging server putting messages on the queue, messages are distributed among the multiple instances of the queue on their respective messaging servers so as to balance load according to predetermined rules. For the first message of an affinity set, access information for the particular queue instance to which it is put is obtained and stored. The access information may be used in order to send a further message of the affinity set to the particular queue instance; if a further message is not part of the affinity set, it is put to an instance of the queue as determined by said predetermined rules. | 12-23-2010 |
20110029988 | METHODS AND APPARATUS FOR FACILITATING APPLICATION INTER-COMMUNICATIONS - A method and apparatus for facilitating communication amongst a plurality of applications associated with at least one device is provided. The method may comprise receiving, by an extension module, a request from a first application to communicate with one or more applications, establishing a communication link between the first and at least one of the one or more applications, wherein the communication link allows the first and the at least one of the one or more applications to communicate at least one of data or control information, and storing, by the extension module, at least a portion of data communicated between the communicating applications. | 02-03-2011 |
20110035757 | SYSTEM AND METHOD FOR MANAGEMENT OF JOBS IN A CLUSTER ENVIRONMENT - A system and method for management of jobs in a clustered environment is provided. Each node in the cluster executes a job manager that interfaces with a replicated database to enable cluster-wide management of jobs within the cluster. Jobs are queued in the replicated database and retrieved by a job manager for execution. Each job manager ensures that jobs are processed through completion or, failing that, are requeued on another storage system for execution. | 02-10-2011 |
20110041138 | SYSTEM AND METHOD OF PRESENTING ENTITIES OF STANDARD APPLICATIONS IN WIRELESS DEVICES - A method of presenting data entities of standard device applications in wireless devices is provided. Component-based applications are hosted on a wireless device providing an application runtime environment for hosting at least one component-based application. Component definitions are hosted for developing the component-based application. A standard data component implements a standard data component definition; the standard data component definition is embedded into the component-based application definition during development. The standard data component provides access to a standard device data entity by invoking device-dependent APIs, presenting the standard device data entity as a user-defined data component. The application runtime environment automatically makes the functionality of the user-defined data components available to the standard data component. | 02-17-2011 |
20110061062 | COMMUNICATION AMONG EXECUTION THREADS OF AT LEAST ONE ELECTRONIC DEVICE - A method of communication in at least one electronic device is presented. In the method, a first execution thread and a second execution thread are created in the at least one electronic device. Also created is a message service for receiving messages for the first thread. A message to be transferred from the second thread to the message service of the first thread is generated. Outside of control by either the first or second threads, one of multiple data transfer mechanisms is selected for transferring the message from the second thread to the message service of the first thread based on a relationship between the first and second threads. This relationship may be one in which the first and second threads are executing within a single process, within different processes of the same device, or within different devices. The message is transferred to the message service of the first thread using the selected data transfer mechanism and processed in the first thread. | 03-10-2011 |
20110067036 | Method for Determining Relationship Data Associated with Application Programs - A method for determining relationship data associated with application programs in a messaging system, comprising the steps of: responsive to at least one first message event sending a message from a first application to a first destination and at least one second message event retrieving, by a second application, the message from a second destination, intercepting message data associated with the message; analysing the intercepted message data in accordance with one or more rules in order to find one or more message parameters; and in response to finding the one or more message parameters, identifying the first message event and identifying the second message event, determining a relationship associated with the first application and the second application. An apparatus and computer program element for determining such relationship data are also provided. | 03-17-2011 |
20110067037 | System And Method For Processing Message Threads - A system and method for processing message threads is provided. A plurality of messages, each comprising a message body, is grouped by conversation thread. The message bodies of the messages are compared. Each message recursively contained in at least one other message is identified as a near duplicate message. An attachment sequence is generated for at least part of each attachment associated with one or more of the messages. The attachment sequences associated with the near duplicate messages are compared. Each near duplicate message having an attachment sequence not matching the attachment sequence of any other near duplicate message is identified as a unique message. | 03-17-2011 |
20110099557 | DISTRIBUTED CONTROL OF DEVICES USING DISCRETE DEVICE INTERFACES OVER SINGLE SHARED INPUT/OUTPUT - Systems and methods are provided for controlling a device. In one aspect, a method for controlling a device includes exposing a plurality of virtual device interfaces ( | 04-28-2011 |
20110107350 | MESSAGE ORDERING USING DYNAMICALLY UPDATED SELECTORS - A method of queuing messages for communications between a first computer program and a second computer program, comprises: placing a plurality of messages in a queue, wherein each message has a message body; placing selector information on each message, wherein the selector information contains information as to which message is to be processed next; and using the selector information on a message to identify a next message for processing. | 05-05-2011 |
20110138400 | AUTOMATED MERGER OF LOGICALLY ASSOCIATED MESSAGES IN A MESSAGE QUEUE - Embodiments of the invention provide a method, system and computer program product for message merging in a messaging queue. In an embodiment of the invention, a method for message merging in a messaging queue can be provided. The method can include receiving a request to add a new message to a message queue in a message queue manager executing in memory by a processor of a host computing platform. The method can also include receiving a merge indicator stipulating whether or not a merge should take place. The method also can include identifying an association key associating the new message with an existing message in the message queue and locating an associated message in the message queue corresponding to the identified association key. Finally, the method can include merging the new message with the located associated message in the message queue. | 06-09-2011 |
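The merge-by-association-key mechanism described in the abstract above can be sketched in a few lines of Python. The class and key names here are illustrative assumptions, not the patent's implementation: a new message carrying a merge indicator is folded into an existing queued message that shares its association key.

```python
class MergingQueue:
    """Illustrative sketch: merge new messages into queued ones sharing an association key."""

    def __init__(self):
        self.messages = []  # each entry: [association_key, [merged payloads]]

    def put(self, key, payload, merge=True):
        # merge indicator: only attempt a merge when the caller stipulates one
        if merge:
            for msg in self.messages:
                if msg[0] == key:          # locate the associated message by key
                    msg[1].append(payload)  # merge into the existing message
                    return
        self.messages.append([key, [payload]])  # otherwise enqueue as a new message
```

A broker might instead combine payloads by concatenation or field-wise overwrite; appending to a list is just the simplest way to show the queue shrinking from two logical messages to one.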
20110145836 | Cloud Computing Monitoring and Management System - A cloud computing monitoring system has an alert capturing system and a message transfer system that provides performance tracking and alert management to a local monitoring system. The alert capturing system may operate as part of a managed code framework and may capture and route alerts that may be transmitted to an operating system, as well as application exceptions and debugging information. A message queuing system may transmit the alerts to a local monitoring system, which may have a connector that subscribes to the cloud system's message queuing system. | 06-16-2011 |
20110173637 | MANAGING PRIVATE USE OF PROGRAM EXECUTION CAPACITY - Techniques are described for managing execution of programs, including using excess program execution capacity of one or more computing systems. For example, a private pool of excess computing capacity may be maintained for a user based on unused dedicated program execution capacity allocated for that user, with the private pool of excess capacity being available for priority use by that user. Such private excess capacity pools may further in some embodiments be provided in addition to a general, non-private excess computing capacity pool that is available for use by multiple users, optionally including users who are associated with the private excess capacity pools. In some such situations, excess computing capacity may be made available to execute programs on a temporary basis, such that the programs executing using the excess capacity may be terminated at any time if other preferred use for the excess capacity arises. | 07-14-2011 |
20110258637 | SYSTEMS AND METHODS FOR CONDUCTING COMMUNICATIONS AMONG COMPONENTS OF MULTIDOMAIN INDUSTRIAL AUTOMATION SYSTEM - An improved industrial automation system and communication system for implementation therein, and related methods of operation, are described herein. In at least some embodiments, the improved communication system allows communication in the form of messages between modules in different control or enterprise domains. Further, in at least some embodiments, such communications are achieved by providing a communication system including a manufacturing service bus having two internal service busses with a bridge between the internal busses. Also, in at least some embodiments, a methodology of synchronous messaging is employed. | 10-20-2011 |
20110258638 | DISTRIBUTED PROCESSING OF BINARY OBJECTS VIA MESSAGE QUEUES INCLUDING A FAILOVER SAFEGUARD - A system and method for distributing processing utilizing message queues as a method of distributing binary objects as “messages” and invoking the embedded logic of the received message to perform a portion of a distributed application is disclosed. More particularly, but not by way of limitation, a system and method for the integrated distribution and execution of objects as they are retrieved or extracted from a message queue on a remote system to provide executable functionality portions of a distributed application. In one embodiment a failed processing step results in the message being retained in the message queue to allow for subsequent retry processing. | 10-20-2011 |
20110265098 | Message Passing with Queues and Channels - In an embodiment, a reception thread receives a source node identifier, a type, and a data pointer from an application and, in response, creates a receive request. If the source node identifier specifies a source node, the reception thread adds the receive request to a fast-post queue. If a message received from a network does not match a receive request on a posted queue, a polling thread adds a receive request that represents the message to an unexpected queue. If the fast-post queue contains the receive request, the polling thread removes the receive request from the fast-post queue. If the receive request that was removed from the fast-post queue does not match the receive request on the unexpected queue, the polling thread adds the receive request that was removed from the fast-post queue to the posted queue. The reception thread and the polling thread execute asynchronously from each other. | 10-27-2011 |
20110265099 | EVENT QUEUE MANAGING DEVICE AND EVENT QUEUE MANAGING METHOD - An event queue managing module that prevents unnecessary events from continuously executing applications when an application execution environment resumes from a suspended state, and including: a queue managing unit for storing event objects reported from an event detector of a basic software unit into an event queue in order of occurrence of events and managing their queue; an event classification detection unit for detecting the event classification and parameter of the event objects whose queue is managed by the queue managing unit; a stop state detection unit for detecting a stop state of an application execution environment; and an event deletion unit for deleting an unnecessary event from the event objects stored in the event queue when the application execution environment is in the stop state. | 10-27-2011 |
20110276983 | AUTOMATIC RETURN TO SYNCHRONIZATION CONTEXT FOR ASYNCHRONOUS COMPUTATIONS - Architecture that includes an asynchronous library which remembers the synchronization context that initiated an asynchronous method call and when the request is completed, the library restores the synchronization context of the calling thread before executing a callback. This ensures that the callback executes on the same thread as the original asynchronous request. The callback to the asynchronous operation that asynchronous library provides automatically “jumps threads” to maintain thread affinity. | 11-10-2011 |
20110296437 | METHOD AND APPARATUS FOR LOCKLESS COMMUNICATION BETWEEN CORES IN A MULTI-CORE PROCESSOR - A lockless processor core communication capability is provided herein. The lockless communication capability enables lockless communication between cores of a multi-core processor. Lockless communication between a first core and a second core of a multi-core processor is provided using a message queuing mechanism. The message queuing mechanism includes a message queue, a first bitmap, and a second bitmap. The message queue includes a plurality of messages configured for storing data queued by the first core for processing by the second core. The first bitmap includes a plurality of bit positions associated with respective messages of the message queue, and is configured for use by the first core to indicate availability of respective queued message data. The second bitmap includes a plurality of bit positions associated with the respective messages of the message queue, and is configured for use by the second core to acknowledge availability of the respective queued message data and to indicate reception of the respective queued message data. | 12-01-2011 |
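The two-bitmap handshake described in the abstract above can be illustrated with a small Python sketch. This is not the patent's implementation, and Python cannot demonstrate true lock-free core-to-core communication (a C version would need atomic stores and memory barriers); the sketch only shows how the availability bit and acknowledgement bit let a producer and consumer coordinate per slot without a shared lock. All names are invented:

```python
class BitmapChannel:
    """Single-producer/single-consumer sketch of a two-bitmap message queue."""

    def __init__(self, size):
        self.slots = [None] * size
        self.avail = [False] * size  # first bitmap, set by producer: data available
        self.ack = [False] * size    # second bitmap, set by consumer: data received

    def send(self, index, data):
        if self.avail[index] and not self.ack[index]:
            return False             # slot still holds an unconsumed message
        self.slots[index] = data
        self.ack[index] = False
        self.avail[index] = True     # publish availability (a real core would fence here)
        return True

    def recv(self, index):
        if not self.avail[index] or self.ack[index]:
            return None              # nothing new in this slot
        data = self.slots[index]
        self.ack[index] = True       # acknowledge reception; slot is reusable
        return data
```

Because each bit is written by exactly one side, neither core ever needs to take a lock to know whether a slot is free or full.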
20120005688 | ALLOCATING SPACE IN MESSAGE QUEUE FOR HETEROGENEOUS MESSAGES - Allocating space for storing heterogeneous messages in a message queue according to message classification. The classification may comprise message type, application type, network type, and so forth. Messages of multiple classification values may be queued in a single queue, referred to as a primary queue. When the allocated portion of the primary queue is reached for a particular message classification, then subsequent messages having that classification are sent to a secondary queue for queuing. The secondary queue also allocates space according to message classification. When space for a particular message classification becomes available in the primary queue, one or more messages having that classification may be moved from the secondary queue to the primary queue. | 01-05-2012 |
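The primary/secondary overflow scheme in the abstract above is easy to picture in code. The following Python sketch (names and the per-classification slot counts are illustrative assumptions) caps each classification's share of the primary queue, overflows to a secondary queue, and promotes a waiting message when primary space frees up:

```python
from collections import deque

class HeterogeneousQueue:
    """Illustrative sketch: per-classification allocation with secondary overflow."""

    def __init__(self, allocation):
        self.allocation = allocation  # classification -> max slots in the primary queue
        self.primary = {c: deque() for c in allocation}
        self.secondary = {c: deque() for c in allocation}

    def put(self, cls, msg):
        if len(self.primary[cls]) < self.allocation[cls]:
            self.primary[cls].append(msg)
        else:
            self.secondary[cls].append(msg)  # allocated portion full: overflow

    def get(self, cls):
        msg = self.primary[cls].popleft() if self.primary[cls] else None
        # space became available: move one waiting message from secondary to primary
        if msg is not None and self.secondary[cls]:
            self.primary[cls].append(self.secondary[cls].popleft())
        return msg
```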
20120047518 | SYSTEM FOR PRESERVING MESSAGE ORDER - The order of messages in an asynchronous message system is preserved, by generating a message and tagging the generated message with a sequence identifier and a sequence number. The order of messages is further preserved by processing the tagged message by checking a log to determine whether the sequence identifier is in the log, sending the tagged message to a selected consumer if the sequence identifier is not in the log and sending the tagged message to a particular consumer if the sequence identifier is in the log. Still further, the order of messages is preserved by writing an entry to the log having the sequence identifier and the sequence number of the tagged message and a consumer identifier of the selected consumer if the sequence identifier of the tagged message is not in the log. | 02-23-2012 |
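The log-based sticky routing in the abstract above can be sketched briefly. In this Python illustration (class names and the round-robin selection rule are assumptions made for the example, not taken from the patent), the first message of a sequence picks a consumer and logs the choice; later messages tagged with the same sequence identifier follow the log entry, preserving per-sequence order:

```python
class OrderPreservingDispatcher:
    """Illustrative sketch: route same-sequence messages to one logged consumer."""

    def __init__(self, consumers):
        self.consumers = consumers  # consumer id -> list of delivered messages
        self.log = {}               # sequence id -> consumer id
        self._rr = 0                # round-robin counter for unlogged sequences

    def dispatch(self, seq_id, seq_no, body):
        if seq_id in self.log:
            consumer = self.log[seq_id]  # sequence already logged: same consumer
        else:
            ids = sorted(self.consumers)
            consumer = ids[self._rr % len(ids)]  # select a consumer (round robin here)
            self._rr += 1
            self.log[seq_id] = consumer          # write the log entry
        self.consumers[consumer].append((seq_id, seq_no, body))
        return consumer
```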
20120079505 | PERFORMING COMPUTATIONS IN A DISTRIBUTED INFRASTRUCTURE - The present invention extends to methods, systems, and computer program products for performing computations in a distributed infrastructure. Embodiments of the invention include a general purpose distributed computation infrastructure that can be used to perform efficient (in-memory), scalable, failure-resilient, atomic, flow-controlled, long-running state-less and state-full distributed computations. Guarantees provided by a distributed computation infrastructure can build upon existent guarantees of an underlying distributed fabric in order to hide the complexities of fault-tolerance, enable large scale highly available processing, allow for efficient resource utilization, and facilitate generic development of stateful and stateless computations. A distributed computation infrastructure can also provide a substrate on which existent distributed computation models can be enhanced to become failure-resilient. | 03-29-2012 |
20120096475 | ORDERED PROCESSING OF GROUPS OF MESSAGES - A highly parallel, asynchronous data flow processing system in which processing is represented by a directed graph model, can include processing nodes that generate, and process, groups of dependent messages and that process messages within such groups in order. Other messages can be processed in whatever order they are received by a processing node. To identify a group of dependent messages, message identifiers are applied to a message. Processing of a message may generate child messages. A child message is assigned a message identifier that incorporates the associated message identifier of the parent message. The message identifier of the parent message is annotated to indicate the number of related child messages. When a group of messages is to be processed by a processing node in order, the processing node maintains a buffer in which messages in the group are stored. When a message is received, its message identifier indicates whether it is in a group, its parent node, if any, and the number of child nodes it has if it is a parent node. From this information, it can be determined whether all messages within the group have been received. When all of the messages within the group have been received, the processing node can process the messages in order. | 04-19-2012 |
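The completeness check described in the abstract above, where a parent's identifier is annotated with its child count and each child's identifier incorporates the parent's, can be sketched in Python. Representing identifiers as tuples (so `(0, 1)` is the second child of message `(0,)`) is an assumption made for the example, not the patent's encoding:

```python
def group_complete(received):
    """Illustrative sketch: have all messages of a dependent group arrived?

    received maps a message identifier (a tuple; a child's id extends its
    parent's id) to the number of child messages that message declares.
    """
    for msg_id, n_children in received.items():
        for i in range(n_children):
            if msg_id + (i,) not in received:
                return False  # a declared child has not arrived yet
    return True
```

A processing node would run a check like this on its buffer each time a group member arrives, and process the buffered messages in order once it returns true.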
20120137305 | DEVICES AS SERVICES IN A DECENTRALIZED OPERATING SYSTEM - Various embodiments of the present invention transform devices into Web services or special-purpose servers that are capable of communicating with personal computers. Various embodiments of the present invention allow various low-level aspects of device drivers to reside in the devices, eliminating the need for the devices to be compatible with legacy specification. Various embodiments of the present invention allow various devices to be shipped from the factory with low-level software already built in so that users are liberated from having to deal with the experience of installing and upgrading device drivers. In various embodiments of the present invention, each device is preferably a network node identifiable by a Uniform Resource Identifier (URI). | 05-31-2012 |
20120144404 | PROVIDING INVOCATION CONTEXT TO IMS SERVICE PROVIDER APPLICATIONS - A computer implemented method invokes a business application in response to receipt of a request Simple Object Access Protocol (SOAP) message. The request SOAP message requests an operation that is defined in a Web Services Description Language (WSDL) service. To implement the operations defined in the WSDL service, the WSDL service is provided as input to a tool that generates a business application which corresponds to the supplied WSDL service. The SOAP BODY from the request SOAP message is converted into an unformatted data structure for inputting to the business application, while information from the SOAP HEADER is retained in order to generate a reply SOAP message that contains execution results. | 06-07-2012 |
20120151498 | PROGRAMMATIC MODIFICATION OF A MESSAGE FLOW DURING RUNTIME - A message flow within a message broker can be identified. The message flow can include nodes and connections. The nodes can include a reflective node, a pre-defined node and a user-defined node. The message broker can be an intermediary computer program code able to translate a message from a first formal messaging protocol to a second formal messaging protocol. The code can be stored within a computer readable medium. The reflective node within the message flow can be selected. The reflective node can be associated with an external resource which can be an executable code. The external resource can be executed which can result in the modifying of the structure of the message flow. The modification can occur during runtime. The modification can include node and/or connection adding, altering, and deleting. | 06-14-2012 |
20120159513 | MESSAGE PASSING IN A CLUSTER-ON-CHIP COMPUTING ENVIRONMENT - Technologies pertaining to cluster-on-chip computing environments are described herein. More particularly, mechanisms for supporting message passing in such environments are described herein, where cluster-on-chip computing environments do not support hardware cache coherency. | 06-21-2012 |
20120159514 | CONDITIONAL DEFERRED QUEUING - Conditional deferred queuing may be provided. Upon receiving a message, one or more throttle conditions associated with the message may be identified. A lock associated with the throttle condition may be created on the message until the throttle condition is satisfied. Then, the lock on the message may be removed and the message may be delivered. | 06-21-2012 |
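The lock-until-satisfied flow in the abstract above maps naturally to a small sketch. In this Python illustration (all names invented; a real system would evaluate throttle conditions against server load rather than callables), a message whose condition is unmet stays locked, and a later retry pass delivers it once the condition holds:

```python
class DeferredQueue:
    """Illustrative sketch: messages locked on throttle conditions until satisfied."""

    def __init__(self):
        self.locked = []    # (message, condition) pairs awaiting their condition
        self.delivered = []

    def receive(self, message, condition):
        if condition():
            self.delivered.append(message)
        else:
            self.locked.append((message, condition))  # create a lock on the message

    def retry(self):
        still_locked = []
        for message, condition in self.locked:
            if condition():
                self.delivered.append(message)  # condition satisfied: remove lock, deliver
            else:
                still_locked.append((message, condition))
        self.locked = still_locked
```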
20120167116 | AUTOMATED MERGER OF LOGICALLY ASSOCIATED MESSAGES IN A MESSAGE QUEUE - Embodiments of the invention provide a method, system and computer program product for message merging in a messaging queue. In an embodiment of the invention, a method for message merging in a messaging queue can be provided. The method can include receiving a request to add a new message to a message queue in a message queue manager executing in memory by a processor of a host computing platform. The method can also include receiving a merge indicator stipulating whether or not a merge should take place. The method also can include identifying an association key associating the new message with an existing message in the message queue and locating an associated message in the message queue corresponding to the identified association key. Finally, the method can include merging the new message with the located associated message in the message queue. | 06-28-2012 |
20120192205 | APPLICATION OF SYSTEM LEVEL POLICY IN MESSAGE ORIENTED MIDDLEWARE - One or more policies to be applied to a set of one or more messages in a message oriented middleware are defined. Metrics of the message oriented middleware are monitored. Application of a policy in response to a trigger condition being satisfied is initiated. Application of the policy applies actions across the set of one or more messages. | 07-26-2012 |
20120204190 | Merging Result from a Parser in a Network Processor with Result from an External Coprocessor - A mechanism is provided for merging in a network processor results from a parser and results from an external coprocessor providing processing support requested by said parser. The mechanism enqueues in a result queue both parser results needing to be merged with a coprocessor result and parser results which have no need to be merged with a coprocessor result. An additional queue is used to enqueue the addresses of the result queue where the parser results are stored. The result from the coprocessor is received in a simple response register. The coprocessor result is read by the result queue management logic from the response register and merged to the corresponding incomplete parser result read in the result queue at the address enqueued in the additional queue. | 08-09-2012 |
20120210334 | COMMUNICATION DEVICE AND METHOD FOR COHERENT UPDATING OF COLLATED MESSAGE LISTINGS - A device, system and method are provided for presenting message threads in a device display where messages may have a persistent or intermediate status. A list of message threads, collated according to a given message thread attribute, is displayed. When a new message is detected belonging to one of the message threads, if the message has a persistent status it is added to the message thread and the collating message thread attribute for that thread is updated. If the message has an intermediate status, it may be added to the message thread, but the update to the collating message thread attribute for that thread is deferred until the intermediate status is changed to a persistent status. The collated list of message threads is then updated. By deferring updates to the collating message thread attribute when a message has an intermediate status, disruption to the order of the collated list is mitigated. | 08-16-2012 |
20120216216 | METHOD AND MIDDLEWARE FOR EFFICIENT MESSAGING ON CLUSTERS OF MULTI-CORE PROCESSORS - Disclosed embodiments include a Java messaging method for efficient inter-node and intra-node communications on computer systems with multi-core processors interconnected via high-speed network interconnections. According to one embodiment, the Java messaging method accesses the high-speed networks and memory more directly and reduces message buffering. Additionally, intra-node communications utilize shared memory transfers within the same Java Virtual Machine. The described Java messaging method does not compromise Java portability and is both user and application transparent. | 08-23-2012 |
20120272248 | MANAGING QUEUES IN AN ASYNCHRONOUS MESSAGING SYSTEM - A method of managing an asynchronous messaging queue with a client computer in an asynchronous messaging system, where the client computer is programmed to store and manage the asynchronous messaging queue, includes receiving a reactive message in the asynchronous messaging queue, the reactive message including an identification of a previously initiated message and an action to be performed on the previously initiated message; and upon determining that the previously initiated message has already been received in the asynchronous messaging queue, performing the action on the previously initiated message with the client computer. | 10-25-2012 |
20120291046 | Method and System for Recovering Stranded Outbound Messages - A method for recovering and requeueing lost messages is disclosed. The lost messages are intended for delivery from a first computer program to a second computer program but are instead stranded in locations internal to the first program. The method extracts one or more of these stranded messages from the location internal to the first program, determines the original destination of each stranded message and delivers that message to the second program. Delivery of each message to the second program is facilitated by using message queues provided by middleware type software programs. The desired middleware program can be selected by the user of the method, and the method provides for the necessary formatting of each recovered message according to the selected middleware. Absent use of the present method, these stranded messages would not be routed to their original destinations. | 11-15-2012 |
20120317587 | Pattern Matching Process Scheduler in Message Passing Environment - Processes in a message passing system may be unblocked when messages having data patterns match data patterns of a function on a receiving process. When the match occurs, the process may be added to a runnable queue, and in some embodiments, may be raised to the top of a runnable queue. When a match does not occur, the process may remain in a blocked or non-executing state. In some embodiments, a blocked process may be placed in an idle queue and may not be executed until a process scheduler determines that a message has been received that fulfills a function waiting for input. When the message fulfills the function, the process may be moved to a runnable queue. | 12-13-2012 |
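The unblock-on-match scheduling in the abstract above can be sketched compactly. In this Python illustration (names and the predicate representation are assumptions for the example), blocked processes sit in an idle queue with a pattern predicate; a delivered message that matches moves the process to the top of the runnable queue:

```python
class Scheduler:
    """Illustrative sketch: unblock a process when a message matches its pattern."""

    def __init__(self):
        self.idle = []      # (process, predicate) pairs in the idle queue
        self.runnable = []  # processes ready to run

    def block_on(self, process, predicate):
        self.idle.append((process, predicate))  # process waits, not executed

    def deliver(self, message):
        still_idle = []
        for process, predicate in self.idle:
            if predicate(message):
                self.runnable.insert(0, process)  # raise to top of runnable queue
            else:
                still_idle.append((process, predicate))
        self.idle = still_idle
```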
20120317588 | METHOD AND MESSAGE HANDLING HARDWARE STRUCTURE FOR VIRTUALIZATION AND ISOLATION OF PARTITIONS - A computer-based method configures a hardware circuit to transfer a message to a message queue in an operating system. The hardware circuit is used to transfer a message to the message queue in the operating system without requiring use of either the operating system or a hypervisor associated with the operating system. The using the hardware circuit uses a logical identifier associated with the message to select an entry in a mapping table of the hardware circuit. A value in the entry in the mapping table is used to select an entry in an action table. The entry in the action table is used to determine a tail pointer for the message queue. The hardware circuit appends the message to a location indicated by the tail pointer without requiring cycles of a hypervisor associated with the strand. | 12-13-2012 |
20120331482 | Apparatus and Systems For Measuring, Monitoring, Tracking and Simulating Enterprise Communications and Processes - The present invention comprises apparatus and systems for measuring, monitoring, tracking and simulating enterprise communications and processes. A central message repository or database is constructed, comprising monitoring messages sent from process messaging systems. The database may then be accessed or queried as desired. A simulation tool assists in reviewing present and proposed processes and sub-processes before modifying existent systems or creating new systems. | 12-27-2012 |
20130007767 | AUTOMATED GENERATION OF SERVICE DEFINITIONS FOR MESSAGE QUEUE APPLICATION CLIENTS - A method, system, and computer program product for automatically generating service definitions for application clients of a message broker is provided. The method includes retrieving a trace of interactions between different application instances and corresponding message queues in a message brokering system. Thereafter, messages in the trace can be analyzed to identify the application instances and related message exchange data. Finally, a service definition document can be generated for each identified application instance using the related message exchange data to describe computational services provided by the identified application instance. | 01-03-2013 |
20130014127 | SYSTEM AND METHOD FOR AUTOMATICALLY GENERATING COMPUTER CODE FOR MESSAGE FLOWS - Computer-executable code is automatically generated for a message flow in a message queuing infrastructure by determining a type of the message flow, inputting message flow parameters, and generating the computer-executable code based on the type of the message flow and the message flow parameters. The generation of code can also implement a design pattern, which is input based on the determined type of message flow. The computer-executable code can be, for example, Extended Structured Query Language (ESQL) code. The type of the message flow can identify, for example, a transformation requirement of the message flow. The transformation requirement can be, for example, one of (i) transformation from a first Extensible Markup Language (XML) message to a second XML message, (ii) transformation from an XML message to a Message Repository Manager (MRM) message, and (iii) transformation from a first MRM message to a second MRM message. | 01-10-2013 |
20130055287 | MODIFYING APPLICATION BEHAVIOUR - A data processing system comprising: an operating system providing an application programming interface; an application supported by the operating system and operable to make calls to the application programming interface; an intercept library configured to intercept calls of a predetermined set of call types made by the application to the application programming interface; and a configuration data structure defining at least one action to be performed for each of a plurality of sequences of one or more calls having predefined characteristics, the one or more calls being of the predetermined set of call types; wherein the intercept library is configured to, on intercepting a sequence of one or more calls defined in the configuration data structure, perform the corresponding action(s) defined by the configuration data structure. | 02-28-2013 |
20130061247 | PROCESSOR TO MESSAGE-BASED NETWORK INTERFACE USING SPECULATIVE TECHNIQUES - Methods and systems are provided for a message network interface unit (a message interface unit), coupled to a processor, that is used for allowing the processor to send messages to a hardware unit. Methods and systems are also provided for a message interface unit, coupled to a processor, that is used for allowing a processor to receive messages from a hardware unit. The message network interface unit described herein may allow for the implementation of data-intensive, real-time applications, which require a substantially low message response latency and a substantially high message throughput. | 03-07-2013 |
20130067489 | Power Efficient Callback Patterns - In one or more embodiments, an application program interface (API) is provided and enables an entity, such as an application, script, or other computing object, to register to receive callbacks immediately, without specifying a time constraint. In this approach, the API does not rely on a timer, such as a system timer. Rather, a non-timer based queue, such as a message queue-type approach is utilized. Specifically, callbacks that are registered through this API can be placed on the message queue and work associated with the registered callback can be performed through the normal course of processing messages and events in the message queue. Over time, this results in a callback pattern that allows an associated web browser and applications such as web applications to remain responsive, while increasing performance and power efficiencies. | 03-14-2013 |
20130067490 | MANAGING PROCESSES WITHIN SUSPEND STATES AND EXECUTION STATES - One or more techniques and/or systems are provided for suspending logically related processes associated with an application, determining whether to resume a suspended process based upon one or more wake policies, and/or managing an application state of an application, such as timer and/or system message data. That is, logically related processes associated with an application, such as child processes, may be identified and suspended based upon logical relationships between the processes (e.g., a logical container hierarchy may be traversed to identify logically related processes). A suspended process may be resumed based upon a set of wake policies. For example, a suspended process may be resumed based upon an inter-process communication call policy that may be triggered by an application attempting to communicate with the suspended process. Application data may be managed while an application is suspended so that the application may be resumed in a current and/or relevant state. | 03-14-2013 |
20130081060 | System and Method for Efficient Concurrent Queue Implementation - A method, system, and medium are disclosed for facilitating communication between multiple concurrent threads of execution using an efficient concurrent queue. The efficient concurrent queue provides an insert function usable by producer threads to insert messages concurrently. The queue also includes a consume function usable by consumer threads to read the messages from the queue concurrently. The consume function is configured to guarantee a per-producer ordering, such that, for any producer, messages inserted by the producer are read only once and in the order in which the producer inserted those messages. | 03-28-2013 |
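The per-producer ordering guarantee described in the abstract above can be illustrated with a minimal sketch. The names (`ConcurrentQueue`, `insert`, `consume`) are illustrative, not taken from the patent, and a single lock stands in for whatever more efficient synchronization the patented implementation uses:

```python
import threading
from collections import deque

class ConcurrentQueue:
    """Illustrative concurrent queue: producers insert under a lock and
    consumers read under the same lock, so messages from any single
    producer are read only once and in the order that producer
    inserted them."""

    def __init__(self):
        self._lock = threading.Lock()
        self._messages = deque()

    def insert(self, producer_id, message):
        # Usable by many producer threads concurrently.
        with self._lock:
            self._messages.append((producer_id, message))

    def consume(self):
        # Returns (producer_id, message), or None if the queue is empty.
        with self._lock:
            return self._messages.popleft() if self._messages else None
```

Because every insert is serialized through the queue, per-producer FIFO order falls out directly; the patent's contribution is preserving that guarantee while allowing genuine concurrency.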
20130081061 | Multi-Lane Concurrent Bag for Facilitating Inter-Thread Communication - A method, system, and medium are disclosed for facilitating communication between multiple concurrent threads of execution using a multi-lane concurrent bag. The bag comprises a plurality of independently-accessible concurrent intermediaries (lanes) that are each configured to store data elements. The bag provides an insert function executable to insert a given data element into the bag by selecting one of the intermediaries and inserting the data element into the selected intermediary. The bag also provides a consume function executable to consume a data element from the bag by choosing one of the intermediaries and consuming (removing and returning) a data element stored in the chosen intermediary. The bag guarantees that execution of the consume function consumes a data element if the bag is non-empty and permits multiple threads to execute the insert or consume functions concurrently. | 03-28-2013 |
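A rough sketch of the multi-lane bag idea above, assuming a fixed lane count and random lane selection (both are assumptions; the patent does not specify a selection policy here). The consume path scans every lane so that a non-empty bag always yields an element:

```python
import random
import threading
from collections import deque

class MultiLaneBag:
    """Illustrative multi-lane bag: several independently locked lanes
    (intermediaries) hold data elements, reducing contention between
    concurrent inserters and consumers."""

    def __init__(self, lanes=4):
        self._lanes = [deque() for _ in range(lanes)]
        self._locks = [threading.Lock() for _ in range(lanes)]

    def insert(self, element):
        # Select one intermediary and insert the element into it.
        i = random.randrange(len(self._lanes))
        with self._locks[i]:
            self._lanes[i].append(element)

    def consume(self):
        # Choose a starting lane, then scan all lanes so consume
        # returns an element whenever the bag is non-empty.
        start = random.randrange(len(self._lanes))
        for offset in range(len(self._lanes)):
            i = (start + offset) % len(self._lanes)
            with self._locks[i]:
                if self._lanes[i]:
                    return self._lanes[i].popleft()
        return None
```

Note the trade-off a bag makes relative to a queue: no global ordering is promised across lanes, which is exactly what lets inserts and consumes proceed concurrently on different lanes.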
20130081062 | Scalable, Parallel Processing of Messages While Enforcing Custom Sequencing Criteria - Scalable, parallel (i.e., concurrent) processing of messages is provided from a message queue, while at the same time enforcing sequencing within a stream. Dependencies among messages can therefore be respected. The criteria for determining which messages form a stream are not required to be known to the message dispatcher, which receives a stream name and determines whether another message in that named stream is already being processed. If so, the dispatcher determines whether the invoker should wait temporarily, or should be given a different message that was previously blocked and has now become available for processing, or should be instructed to retrieve a different message from the message queue. | 03-28-2013 |
20130081063 | Scalable, Parallel Processing of Messages While Enforcing Custom Sequencing Criteria - Scalable, parallel (i.e., concurrent) processing of messages is provided from a message queue, while at the same time enforcing sequencing within a stream. Dependencies among messages can therefore be respected. The criteria for determining which messages form a stream are not required to be known to the message dispatcher, which receives a stream name and determines whether another message in that named stream is already being processed. If so, the dispatcher determines whether the invoker should wait temporarily, or should be given a different message that was previously blocked and has now become available for processing, or should be instructed to retrieve a different message from the message queue. | 03-28-2013 |
20130111499 | APPLICATION ACCESS TO LDAP SERVICES THROUGH A GENERIC LDAP INTERFACE INTEGRATING A MESSAGE QUEUE | 05-02-2013 |
20130111500 | MESSAGE QUEUING APPLICATION ACCESS TO SPECIFIC API SERVICES THROUGH A GENERIC API INTERFACE INTEGRATING A MESSAGE QUEUE | 05-02-2013 |

20130117764 | Internode Data Communications In A Parallel Computer - Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory. | 05-09-2013 |
20130125139 | Logging In A Computer System - Logging in a computer system that includes high speed, low latency computer memory and non-volatile computer memory, including: for each transaction of a plurality of transactions in a transaction-based application: beginning execution of the transaction; storing one or more log messages in a message bundle in the high speed, low latency computer memory during execution of the transaction; and upon completion of the transaction, storing the message bundle in a messaging queue; asynchronously with regard to transaction execution: processing, by a logging module, the messaging queue, including identifying one or more log messages stored in message bundles in the messaging queue; and for each identified log message, writing, by the logging module, the log message to the non-volatile computer memory. | 05-16-2013 |
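The transaction-logging flow in the abstract above can be sketched briefly. All names here are illustrative; a Python list stands in for the high speed, low latency memory and another list for the non-volatile store:

```python
import queue

class BundledLogger:
    """Sketch of bundled transaction logging: log calls during a
    transaction accumulate in an in-memory bundle; completing the
    transaction moves the whole bundle onto a messaging queue that a
    background logging module later drains to durable storage."""

    def __init__(self):
        self._bundle = []                      # in-memory message bundle
        self._message_queue = queue.Queue()    # messaging queue of bundles
        self.durable_log = []                  # stands in for non-volatile memory

    def log(self, message):
        # Fast path: append to the in-memory bundle only.
        self._bundle.append(message)

    def commit_transaction(self):
        # On transaction completion, enqueue the bundle as one unit.
        self._message_queue.put(self._bundle)
        self._bundle = []

    def drain(self):
        # Asynchronous logging module: identify each bundled log
        # message and write it to the durable store.
        while not self._message_queue.empty():
            for message in self._message_queue.get():
                self.durable_log.append(message)
```

The point of the pattern is that the transaction never pays the cost of a durable write; only the asynchronous drain touches slow storage.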
20130125140 | INTRANODE DATA COMMUNICATIONS IN A PARALLEL COMPUTER - Intranode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory. | 05-16-2013 |
20130160028 | METHOD AND APPARATUS FOR LOW LATENCY COMMUNICATION AND SYNCHRONIZATION FOR MULTI-THREAD APPLICATIONS - A computing device, a communication/synchronization path or channel apparatus and a method for parallel processing of a plurality of processors. The parallel processing computing device includes a first processor having a first central processing unit (CPU) core, at least one second processor having a second central processing unit (CPU) core, and at least one communication/synchronization (com/syn) path or channel coupled between the first CPU core and the at least one second CPU core. The communication/synchronization channel can include a request message queue configured to receive request messages from the first CPU core and to send request messages to the second CPU core, and a response message queue configured to receive response messages from the second CPU core and to send response messages to the first CPU core. | 06-20-2013 |
20130239124 | Event Queue Management For Embedded Systems - An event management structure for an embedded system, which supports multiple waiters waiting on the same event without replicating the events for each waiter, is provided. Notifications of events are received from entities within an embedded system. The event management architecture then posts the events to a central queue and generates a unique identification tag for each posted event. Additionally, entities within the embedded system are allowed to wait on specific events. More specifically, entities may request access to specific events based on the unique identification tag associated with a particular event. In further implementations, data associated with queued events may be provided to the waiters. In some implementations, events matching a specific description since a particular event, identified by its unique identification tag, may be requested by entities in the embedded system. | 09-12-2013 |
20130247071 | SYSTEM AND METHOD FOR EFFICIENT SHARED BUFFER MANAGEMENT - A method for managing a shared buffer between a data processing system and a network. The method provides a communication interface unit for managing bandwidth of data between the data processing system and an external communication interface connecting to the network. The method performs, by the communication interface unit, a combined de-queue and head drop operation on at least one data packet queue within a predefined number of clock cycles. The method also performs, by the communication interface unit, an en-queue operation on the at least one data packet queue in parallel with the combined de-queue operation and head drop operation within the predefined number of clock cycles. | 09-19-2013 |
20130283293 | System and method for Intelligently distributing a plurality of transactions for parallel processing - Disclosed are systems and methods for distributing a plurality of transactions for parallel processing, which includes receiving a message, such that each transaction comprises information associated with a target object, wherein the target object is stored in a memory. The systems and methods further include parsing the messages into the plurality of transactions, transmitting the parsed transactions to a transaction queue, receiving a transaction from the transaction queue, determining the target object associated with the transaction, assigning the transaction to a particular processing queue based on the target object, and guaranteeing that subsequent transactions associated with the target object are assigned to the same processing queue and the same processor, which guarantees that the target object will be modified in correct sequence. | 10-24-2013 |
20130290984 | Method for Infrastructure Messaging - A low overhead method to handle inter process and peer to peer communication. A queue manager is used to create a list of messages with minimal configuration overhead. A hardware queue can be connected to another software task owned by the same core or a different processor core, or connected to a hardware DMA peripheral. There is no limitation on how many messages can be queued between the producer and consumer cores. The low latency interrupt generation to the processor cores is handled by an accumulator inside the QMSS which can be configured to generate interrupts based on a programmable threshold of descriptors in a queue. The accumulator thus removes the polling overhead from software and boosts performance by doing the descriptor pops and message transfer in the background. | 10-31-2013 |
20130312010 | Processing Posted Receive Commands In A Parallel Computer - Processing posted receive commands in a parallel computer, including: posting, by a parallel process of a compute node, a receive command, the receive command including a set of parameters excluding the receive command from being directed among parallel posted receive queues; flattening the parallel unexpected message queues into a single unexpected message queue; determining whether the posted receive command is satisfied by an entry in the single unexpected message queue; if the posted receive command is satisfied by an entry in the single unexpected message queue, processing the posted receive command; if the posted receive command is not satisfied by an entry in the single unexpected message queue: flattening the parallel posted receive queues into a single posted receive queue; and storing the posted receive command in the single posted receive queue. | 11-21-2013 |
20130312011 | PROCESSING POSTED RECEIVE COMMANDS IN A PARALLEL COMPUTER - Processing posted receive commands in a parallel computer, including: posting, by a parallel process of a compute node, a receive command, the receive command including a set of parameters excluding the receive command from being directed among parallel posted receive queues; flattening the parallel unexpected message queues into a single unexpected message queue; determining whether the posted receive command is satisfied by an entry in the single unexpected message queue; if the posted receive command is satisfied by an entry in the single unexpected message queue, processing the posted receive command; if the posted receive command is not satisfied by an entry in the single unexpected message queue: flattening the parallel posted receive queues into a single posted receive queue; and storing the posted receive command in the single posted receive queue. | 11-21-2013 |
20130332941 | Adaptive Process Importance - A method and apparatus of a device that changes the importance of a daemon process is described. In an exemplary embodiment, the device receives a message from a user process destined for daemon process, wherein the daemon process executes independently of the user process and the first daemon process communicates messages with other executing processes. The device further determines if the first message indicates that the importance of the first daemon process can be changed. If the first message indicates the importance of the first daemon process can be changed, the device changes the importance of the first daemon process. The device additionally forwards the first message to the first daemon process. | 12-12-2013 |
20140013337 | COBOL REFERENCE ARCHITECTURE - The COBOL reference architecture (CRA) system and the transactional workflow driver (TWD) provide an efficient and effective way to extend an existing application using modern architecture techniques without rewriting the existing application. The CRA system and TWD provide a way to generate new and interchangeable COBOL language functionality for multiple interactive types (e.g., transaction servers and/or transaction managers) running in various computing environments, including: a WebSphere message queue (MQ) transaction server; a Customer Information Control System (CICS) transaction server; an Information Management System (IMS) transaction server; and a batch transaction manager. | 01-09-2014 |
20140068635 | IN-ORDER MESSAGE PROCESSING WITH MESSAGE-DEPENDENCY HANDLING - The disclosure generally describes computer-implemented methods, software, and systems for modeling and deploying decision services. One computer-implemented method includes operations for identifying a sequence number of a first message, the sequence number indicating a position of the first message within a first sequence of messages. If a second message positioned prior to the first message in the first sequence is in a final processing state and the second message in the first sequence is a parent message, a plurality of child messages associated with the second message are identified. Each child message is associated with a sequence number indicating a position of the child message within a second sequence associated with the plurality of child messages. The computer-implemented method determines whether a child message positioned at the end of the second sequence is in a final processing state. | 03-06-2014 |
20140096145 | HARDWARE MESSAGE QUEUES FOR INTRA-CLUSTER COMMUNICATION - A method and apparatus for sending and receiving messages between nodes on a compute cluster is provided. Communication between nodes on a compute cluster, which do not share physical memory, is performed by passing messages over an I/O subsystem. Typically, each node includes a synchronization mechanism, a thread ready to receive connections, and other threads to process and reassemble messages. Frequently, a separate queue is maintained in memory for each node on the I/O subsystem sending messages to the receiving node. Such overhead increases latency and limits message throughput. Due to a specialized coprocessor running on each node, messages on an I/O subsystem are sent, received, authenticated, synchronized, and reassembled at a faster rate and with lower latency. Additionally, the memory structure used may reduce memory consumption by storing messages from multiple sources in the same memory structure, eliminating the need for per-source queues. | 04-03-2014 |
20140109110 | SYSTEM AND METHOD FOR SUPPORTING ASYNCHRONOUS MESSAGE PROCESSING IN A DISTRIBUTED DATA GRID - A system and method can support asynchronous message processing in a distributed data grid. A cluster node in the distributed data grid can provide a message processor running on a message processing thread. The message processor can receive a request to process an incoming message from a service thread, wherein the request is associated with a continuation data structure. Then, the message processor can wrap the continuation data structure in a return message after processing the incoming message, and forward the return message to a service queue that is associated with the service thread. | 04-17-2014 |
20140149997 | SYSTEM AND METHOD FOR AUTOMATICALLY GENERATING COMPUTER CODE FOR MESSAGE FLOWS - Computer-executable code is automatically generated for a message flow in a message queuing infrastructure by determining a type of the message flow, inputting message flow parameters, and generating the computer-executable code based on the type of the message flow and the message flow parameters. The generation of code can also implement a design pattern, which is input based on the determined type of message flow. The computer-executable code can be, for example, Extended Structured Query Language (ESQL) code. The type of the message flow can identify, for example, a transformation requirement of the message flow. The transformation requirement can be, for example, one of (i) transformation from a first Extensible Markup Language (XML) message to a second XML message, (ii) transformation from an XML message to a Message Repository Manager (MRM) message, and (iii) transformation from a first MRM message to a second MRM message. | 05-29-2014 |
20140173630 | NON REAL-TIME METROLOGY DATA MANAGEMENT - The techniques described herein implement an operating system that can reliably process time sensitive information in a non-real-time manner. Thus, the operating system described herein is capable of processing an instance of time sensitive input during a time period after the instance of time sensitive input is received (e.g., at a future point in time). To accomplish this, the techniques timestamp each instance of time sensitive input when it is received at a device. The techniques then store the timestamped instance of time sensitive input in a temporary queue, and make the timestamped instance available to the operating system at a time period after the time period when it is received, as indicated by the timestamp. Additional techniques described herein prioritize the activation of a driver configured to receive the time sensitive information during a boot sequence or a reboot sequence. | 06-19-2014 |
20140173631 | TRACKING A RELATIVE ARRIVAL ORDER OF EVENTS BEING STORED IN MULTIPLE QUEUES USING A COUNTER - An order controller stores each received event in a separate entry in one of at least two queues with a separate counter value set from an arrival order counter at the time of storage, wherein the arrival order counter is incremented after storage of each of the received events and on overflow the arrival order counter wraps back to zero. The order controller calculates an absolute value of the difference between a first counter value stored with an active first next entry in a first queue from among the at least two queues and a second counter value stored with an active second next entry in a second queue from among the at least two queues. The order controller compares the absolute value with a counter midpoint value to determine whether the first counter value was stored before the second counter value. | 06-19-2014 |
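The midpoint comparison described in the abstract above is concrete enough to sketch. The counter width (8 bits) is an assumption for illustration; the technique works for any width, provided fewer than half the counter range of events separate the two entries being compared:

```python
COUNTER_BITS = 8
COUNTER_MAX = 1 << COUNTER_BITS   # the arrival order counter wraps back to zero here
MIDPOINT = COUNTER_MAX // 2

def stored_before(first_value, second_value):
    """Return True if first_value was assigned by the arrival order
    counter before second_value, tolerating one wrap of the counter,
    by comparing the absolute difference against the midpoint."""
    diff = abs(first_value - second_value)
    if diff < MIDPOINT:
        # No wrap occurred between the two values: smaller is earlier.
        return first_value < second_value
    # The counter wrapped between them: the numerically larger
    # value is actually the older one.
    return first_value > second_value
```

For example, a queue entry stamped 250 precedes one stamped 3, because the gap (247) exceeds the midpoint, signalling that the counter wrapped in between.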
20140215492 | DYNAMIC PROVISIONING OF MESSAGE GROUPS - The subject matter of this specification can be implemented in, among other things, a method that includes receiving, by a processing device, one or more first requests to add multiple messages on a message queue. The first requests specify a message group for the messages. The method further includes determining, by the processing device, that the message group does not exist on the message queue in response to receiving the first requests. The method further includes automatically creating, by the processing device, the message group on the message queue in response to determining that the message group does not exist on the message queue. The method further includes adding, by the processing device, the messages to the message group on the message queue. | 07-31-2014 |
20140245325 | LINK OPTIMIZATION FOR CALLOUT REQUEST MESSAGES - According to one aspect of the present disclosure, a method and technique for link optimization for callout request messages is disclosed. The method includes: monitoring a plurality of different time-based parameters for each of a plurality of links between a communication pipe of a host system and one or more service systems, the links used to send and receive callout request messages between one or more applications running on the host system and the service systems that process the callout request messages, the time-based parameters associated with different stages of callout request message processing by the communication pipe and the service systems; assessing a performance level of each of the plurality of links based on the time-based parameters; and dynamically distributing the callout request messages to select links of the plurality of links based on the performance assessment. | 08-28-2014 |
20140245326 | LOCAL MESSAGE QUEUE PROCESSING FOR CO-LOCATED WORKERS - Technologies are provided for locally processing queue requests from co-located workers. In some examples, information about the usage of remote datacenter queues by co-located workers may be used to determine one or more matched queues. Messages from local workers to a remote datacenter queue classified as a matched queue may be stored locally. Subsequently, local workers that request messages from matched queues may be provided with the locally-stored messages. | 08-28-2014 |
20140245327 | Method and System for Recovering Stranded Outbound Messages - A method for recovering and requeueing lost messages is disclosed. The lost messages are intended for delivery from a first computer program to a second computer program but are instead stranded in locations internal to the first program. The method extracts one or more of these stranded messages from the location internal to the first program, determines the original destination of each stranded message and delivers that message to the second program. Delivery of each message to the second program is facilitated by using message queues provided by middleware type software programs. The desired middleware program can be selected by the user of the method, and the method provides for the necessary formatting of each recovered message according to the selected middleware. Absent use of the present method, these stranded messages would not be routed to their original destinations. | 08-28-2014 |
20140282612 | Acknowledging Incoming Messages - Acknowledging incoming messages, including: determining, by an acknowledgement dispatching module, whether an incoming message has been received in an active message queue; responsive to determining that the incoming message has been received in the active message queue, resetting, by the acknowledgement dispatching module, an acknowledgment iteration counter; incrementing, by the acknowledgement dispatching module, the acknowledgment iteration counter; determining, by the acknowledgement dispatching module, whether the acknowledgment iteration counter has reached a predetermined threshold; and responsive to determining that the acknowledgment iteration counter has reached the predetermined threshold, processing, by the acknowledgement dispatching module, all messages in the active message queue. | 09-18-2014 |
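The reset/increment/threshold loop in the abstract above can be sketched as follows. The function shape and the `None`-means-no-arrival convention are illustrative assumptions, not the patent's interface:

```python
from collections import deque

def run_dispatcher(incoming, threshold=3):
    """Sketch of the acknowledgement dispatching loop: the iteration
    counter resets whenever a new message arrives in the active queue
    and, once it reaches the threshold with no new arrivals, all
    queued messages are processed in one batch.  `incoming` yields
    either a message or None per iteration."""
    active_queue = deque()
    iteration_counter = 0
    batches = []
    for message in incoming:
        if message is not None:
            active_queue.append(message)
            iteration_counter = 0            # new arrival: reset the counter
        iteration_counter += 1
        if iteration_counter >= threshold and active_queue:
            batches.append(list(active_queue))  # process all messages at once
            active_queue.clear()
    return batches
```

The effect is a quiet-period batcher: acknowledgements are deferred while messages keep arriving and flushed together once the stream pauses.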
20140282613 | Acknowledging Incoming Messages - Acknowledging incoming messages, including: determining, by an acknowledgement dispatching module, whether an incoming message has been received in an active message queue; responsive to determining that the incoming message has been received in the active message queue, resetting, by the acknowledgement dispatching module, an acknowledgment iteration counter; incrementing, by the acknowledgement dispatching module, the acknowledgment iteration counter; determining, by the acknowledgement dispatching module, whether the acknowledgment iteration counter has reached a predetermined threshold; and responsive to determining that the acknowledgment iteration counter has reached the predetermined threshold, processing, by the acknowledgement dispatching module, all messages in the active message queue. | 09-18-2014 |
20140289744 | TRANSACTION CAPABLE QUEUING - Transaction-capable queuing is provided. A queue having an ordered list of messages is provided. A get cursor operation is provided within the queue to point to a current starting place for a getting application to start searching for a message to retrieve. A first lock is provided for putting operations, in response to there being more than one putting application, to ensure only one application is putting to the queue at a time. A second lock is provided for getting operations, in response to there being more than one getting application, to ensure that only one application is getting from the queue at a time. Putting applications and getting applications are synchronized to check and update the get cursor operation. | 09-25-2014 |
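The get-cursor-plus-two-locks scheme above can be sketched as follows; this is a minimal illustration with assumed structure and names, not the patented implementation:

```python
import threading

class TransactionalQueue:
    """One lock serializes putters, a second lock serializes getters, and
    a get cursor marks where the next getter starts scanning."""
    def __init__(self):
        self._messages = []          # ordered list of [msg, consumed] pairs
        self._put_lock = threading.Lock()
        self._get_lock = threading.Lock()
        self._get_cursor = 0         # index where getters start searching

    def put(self, msg):
        with self._put_lock:         # only one putter at a time
            self._messages.append([msg, False])

    def get(self):
        with self._get_lock:         # only one getter at a time
            for i in range(self._get_cursor, len(self._messages)):
                msg, consumed = self._messages[i]
                if not consumed:
                    self._messages[i][1] = True
                    self._get_cursor = i + 1   # advance past this message
                    return msg
            return None
```

The cursor keeps getters from rescanning already-consumed messages at the head of the ordered list.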
20140298357 | OPERATING SYSTEM AND ARCHITECTURE FOR EMBEDDED SYSTEM - An operating system for an aircraft according to an exemplary aspect of the present disclosure includes, among other things, a core services layer and a hardware interface layer that is time and space partitioned from the core services layer. The hardware interface layer is operable to control communications with hardware in a computer. | 10-02-2014 |
20140366039 | COMPUTER SYSTEM, COMPUTER-IMPLEMENTED METHOD AND COMPUTER PROGRAM PRODUCT FOR SEQUENCING INCOMING MESSAGES FOR PROCESSING AT AN APPLICATION - In one aspect, the present application is directed to a computer system, a computer-implemented method and a computer program product for sequencing incoming messages for processing at an application. The computer system may comprise an application operable to process incoming messages, wherein at least two of the incoming messages are correlated, wherein correlated messages need processing at the application in a required order; and a sequencing framework implemented with the application to intercept the incoming messages and comprising an internal buffer to identify the correlated messages and to buffer the correlated messages as a message group with the required order, wherein the sequencing framework interacts with the application by transferring the incoming messages from the internal buffer in the required order to the application for processing. | 12-11-2014 |
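The buffering step of the sequencing framework above can be sketched as a grouping-and-reordering pass. The helper and key names are assumptions for illustration only:

```python
def sequence_messages(incoming, correlation_key, order_key):
    """Group intercepted messages by correlation key, buffer each group,
    and release messages to the application in the required order."""
    groups = {}
    for msg in incoming:
        groups.setdefault(msg[correlation_key], []).append(msg)
    delivered = []
    for group in groups.values():
        # within a correlated group, enforce the required order
        delivered.extend(sorted(group, key=lambda m: m[order_key]))
    return delivered
```

Uncorrelated messages each form a singleton group and pass through unchanged; correlated messages reach the application sorted by their order key.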
20150020080 | MESSAGE-BASED MODELING - A system and method may generate executable models having message sending objects and message receiving objects. A message may include a fixed data payload, and the message may persist for only a determined time interval of a total execution or simulation time of the model. Message queues may be established for the messages, and the queues may have attributes. The model may include a state-based portion having states and transitions. States may be configured to generate and send messages, and to receive and process messages. In addition, transitions may be guarded by particular messages. The system and method also may generate standalone code, such as source code, for the model. The standalone code may include code that establishes a message passing service to support the sending and receiving of messages. | 01-15-2015 |
20150040140 | Consuming Ordered Streams of Messages in a Message Oriented Middleware - A mechanism is provided for consuming ordered streams of messages in a message oriented middleware having a single queue. The mechanism provides a first consuming application thread to process a first message, locks the first message, when available on the queue, to the first application thread, locks all subsequent messages on the queue with the same stream identifier as the first message to the first application thread, identifies any messages with different stream identifiers currently locked to the first application thread, makes those messages available to other application threads, and delivers the first message. The mechanism also provides a second consuming application thread to process a subsequent message, locks the next unlocked message, when available on the queue, to the second consuming application thread, and locks all subsequent messages on the queue with the same stream identifier as that message to the second consuming application thread. | 02-05-2015 |
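The stream-identifier locking described above can be sketched as a dispatch pass over a single queue; the function below is a hypothetical simplification (a batch assignment rather than concurrent locking), not the patented mechanism:

```python
def assign_streams(messages, num_consumers):
    """The first free consumer 'locks' a message and every later message
    with the same stream identifier, so each stream is processed in
    order by exactly one consumer."""
    owner_of_stream = {}      # stream id -> consumer index
    assignments = [[] for _ in range(num_consumers)]
    next_free = 0
    for stream_id, payload in messages:
        if stream_id not in owner_of_stream:
            owner_of_stream[stream_id] = next_free % num_consumers
            next_free += 1
        assignments[owner_of_stream[stream_id]].append(payload)
    return assignments
```

Per-stream ordering is preserved because a stream never migrates between consumers, while distinct streams are still consumed in parallel.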
20150100970 | APPLICATION-DRIVEN SHARED DEVICE QUEUE POLLING - Methods and systems relate to receiving, at a system device, a first request from an operating system, the first request identifying a shared queue and providing an instruction to the system device to enable polling of the identified shared queue, enabling, by a processing device, polling of the identified shared queue, wherein enabling polling comprises identifying a message in the identified shared queue and polling information related to the identified shared queue, and disabling, by the processing device, a device interrupt associated with the message in the identified shared queue. | 04-09-2015 |
20150317133 | COBOL REFERENCE ARCHITECTURE - The COBOL reference architecture (CRA) system and the transactional workflow driver (TWD) provide an efficient and effective way to extend an existing application using modern architecture techniques without rewriting the existing application. The CRA system and TWD provide a way to generate new and interchangeable COBOL language functionality for multiple interactive types (e.g., transaction servers and/or transaction managers) running in various computing environments, including: a WebSphere message queue (MQ) transaction server; a Customer Information Control System (CICS) transaction server; an Information Management System (IMS) transaction server; and a batch transaction manager. | 11-05-2015 |
20150324243 | SEMICONDUCTOR DEVICE INCLUDING A PLURALITY OF PROCESSORS AND A METHOD OF OPERATING THE SAME - A semiconductor device may include a first processor transferring a plurality of command data sets, a mailbox receiving and storing the plurality of command data sets, and a second processor receiving command data sets of the mailbox, wherein the first processor may transfer at least one abort slot number to the mailbox, and wherein the mailbox may search for and abort a command data set having a slot number identical to an abort slot number among the plurality of command data sets. | 11-12-2015 |
20150324244 | SYSTEM AND METHOD FOR A SMART OPERATING SYSTEM FOR INTEGRATING DYNAMIC CASE MANAGEMENT INTO A PROCESS MANAGEMENT PLATFORM - This disclosure relates to systems and methods for a smart operating system for integrating dynamic case management into a process management platform. In one embodiment, a computer-implemented dynamic case management method includes creating a plurality of lightweight stateless computing processes; placing the processes in a WAIT state; receiving a request to initiate a process instance corresponding to a lightweight stateless process; placing at least one of the processes in an EXECUTING state; processing the process instance by the processes placed in the EXECUTING state; determining a next process for the process instance; and routing the process instance to the next process. | 11-12-2015 |
20150347208 | MECHANISMS AND APPARATUS FOR EMBEDDED CONTROLLER RECONFIGURABLE INTER-PROCESSOR COMMUNICATIONS - A system and method for reconfigurable inter-processor communications in a controller. The system and method include providing multiple processors in the controller and generating a send buffer and a receive buffer for each of the processors. The system and method further include generating a send table and a receive table for each of the processors where the send table stores identifying information about messages being sent and where the receive table stores identifying information about messages being received, and providing infrastructure services that include protocols for sending and receiving messages between multiple processors in the controller. | 12-03-2015 |
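The per-processor buffers and tables described above can be sketched as a simple data layout; the field names below are assumptions for illustration, not taken from the patent:

```python
def make_ipc_structures(processor_ids):
    """Generate, for each processor in the controller, a send buffer,
    a receive buffer, and send/receive tables that will record
    identifying information about messages sent and received."""
    return {
        pid: {
            "send_buffer": [],   # outgoing message payloads
            "recv_buffer": [],   # incoming message payloads
            "send_table": {},    # message id -> info about messages being sent
            "recv_table": {},    # message id -> info about messages being received
        }
        for pid in processor_ids
    }
```

Keeping the tables separate from the buffers lets the infrastructure services track message state without touching payload storage.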
20150355956 | METHOD, APPARATUS AND COMPUTER PROGRAM FOR ADMINISTERING MESSAGES WHICH A CONSUMING APPLICATION FAILS TO PROCESS - Disclosed is a method for administering messages. In response to a determination that one or more consuming applications have failed to process the same message on a queue a predetermined number of times, the message is made unavailable to consuming applications. Responsive to determining that a predetermined number of messages have been made unavailable to consuming applications, one or more consuming applications are prevented from consuming messages from the queue. | 12-10-2015 |
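The two-threshold administration policy above can be sketched as a small guard object; the class name and threshold defaults are hypothetical, not from the patent:

```python
class PoisonMessageGuard:
    """After max_retries failed deliveries a message is made unavailable
    (sidelined); after max_sidelined messages have been sidelined, the
    queue stops delivering to consuming applications altogether."""
    def __init__(self, max_retries=3, max_sidelined=2):
        self.max_retries = max_retries
        self.max_sidelined = max_sidelined
        self.failures = {}           # message id -> failed delivery count
        self.sidelined = set()       # messages made unavailable
        self.queue_open = True       # False once consumption is prevented

    def record_failure(self, msg_id):
        if not self.queue_open or msg_id in self.sidelined:
            return
        self.failures[msg_id] = self.failures.get(msg_id, 0) + 1
        if self.failures[msg_id] >= self.max_retries:
            self.sidelined.add(msg_id)        # make the message unavailable
        if len(self.sidelined) >= self.max_sidelined:
            self.queue_open = False           # stop consumption from the queue
```

The second threshold treats a cluster of poison messages as a symptom of a broader failure rather than a per-message problem.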
20160004478 | WAIT-FREE ALGORITHM FOR INTER-CORE, INTER-PROCESS, OR INTER-TASK COMMUNICATION - A method and system are presented for providing deterministic inter-core, inter-process, and inter-thread communication between a reader and a writer. The reader and writer communicate by passing data through a shared memory using double buffering of double buffers. The shared memory includes a first double buffer and a second double buffer. Both double buffers include a first low level buffer and a second low level buffer. Using double buffering of the double buffers, both the reader and the writer may simultaneously access the shared memory. | 01-07-2016 |
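The "double buffering of double buffers" layout above can be sketched in a sequential simplification; the class below is an assumed single-process model that ignores the memory-ordering details a real wait-free implementation needs:

```python
class DoubleDoubleBuffer:
    """Two double buffers, each with two low-level buffers. The writer
    fills a slot, publishes its (pair, slot) index, and moves on, so the
    reader can always read the last published slot without blocking."""
    def __init__(self):
        self.pairs = [[None, None], [None, None]]  # two double buffers
        self.write_pair = 0      # pair the writer currently fills
        self.write_slot = 0      # low-level buffer inside that pair
        self.ready = (1, 1)      # (pair, slot) last published for the reader

    def write(self, value):
        self.pairs[self.write_pair][self.write_slot] = value
        self.ready = (self.write_pair, self.write_slot)   # publish
        # advance to the other slot, then the other pair, for the next write
        self.write_slot ^= 1
        if self.write_slot == 0:
            self.write_pair ^= 1

    def read(self):
        pair, slot = self.ready
        return self.pairs[pair][slot]
```

Because the writer never reuses the slot it just published until three writes later, the reader and writer can touch the shared memory simultaneously without waiting on each other.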
20160011920 | MESSAGE-BASED MODELING | 01-14-2016 |
20160055042 | Detecting and Managing Flooding of Multi-tenant Message Queues - A messaging system implements messaging among application servers and databases, utilizing other servers that implement messaging brokers. A large flood of incoming messages can bring down messaging brokers by overflowing the message queues, negatively impacting performance of the overall system. In some embodiments, the disclosed system detects and identifies “flooders” in a timely manner and isolates their message traffic to dedicated queues to avoid impacting other system users. Subsequently, a preferred system de-allocates the queues and returns the messaging system to normal operation when flooding conditions subside, and “sweeps” up any remaining orphan messages. | 02-25-2016 |
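The flooder-isolation idea above can be sketched as a per-sender routing decision; the function, counters, and queue-naming convention are assumptions for illustration:

```python
def route_message(sender, counts, routes, flood_threshold):
    """Count messages per sender; once a sender crosses the threshold,
    route its subsequent traffic to a dedicated queue so it cannot
    overflow the shared queue used by other tenants."""
    counts[sender] = counts.get(sender, 0) + 1
    if counts[sender] > flood_threshold:
        # allocate a dedicated queue for this flooder on first detection
        routes.setdefault(sender, f"dedicated-{sender}")
        return routes[sender]
    return "shared"
```

When flooding subsides, the entries in `routes` could be removed to de-allocate the dedicated queues and return traffic to the shared queue.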
20160098306 | HARDWARE QUEUE AUTOMATION FOR HARDWARE ENGINES - In general, techniques are described for performing hardware-based queue automation for hardware engines. An apparatus comprising a hardware engine and a hardware event queue manager may be configured to perform the techniques. The hardware event queue manager may be configured to receive, from a processing unit separate from the hardware event queue manager, an event to be processed by the hardware engine, and perform queue management with respect to an event queue to schedule processing of the event by the hardware engine. | 04-07-2016 |
20160147563 | Method and Apparatus for Brought-In Device Communication Request Handling - A system includes a processor configured to receive an incoming message request identifying a requesting application and requested user interface. The processor is also configured to determine an incoming message priority value. The processor is further configured to determine a message type. Also, the processor is configured to determine a driver attention demand value and provide access to the requested user interface when the priority value, message type, and driver attention demand value match parameters defined for the requested user interface. | 05-26-2016 |
20160179590 | ADDRESSING FOR INTER-THREAD PUSH COMMUNICATION | 06-23-2016 |
20160179592 | ADDRESSING FOR INTER-THREAD PUSH COMMUNICATION | 06-23-2016 |
20160179593 | PUSH INSTRUCTION FOR PUSHING A MESSAGE PAYLOAD FROM A SENDING THREAD TO A RECEIVING THREAD | 06-23-2016 |