Patent application number | Description | Published |
20090327361 | DATA REPLICATION FEEDBACK FOR TRANSPORT INPUT/OUTPUT - Architecture for efficiently ensuring that data is stored to the desired destination datastore, such as for replication processes. A copy of data (e.g., messages) sent to a datastore for storage is kept at an alternate location until a received signal indicates that the storage and replication were successful. As soon as the feedback signal is received, the copy is removed from the alternate location, thereby improving input/output (I/O) and storage patterns. The feedback mechanism can also be used for monitoring the status of data transport associated with log shipping, for example, and taking the appropriate actions when storage (e.g., replication) is not being performed properly. | 12-31-2009 |
20100178902 | ADDRESS BOOK REMOTE ACCESS AND EXTENSIBILITY - Address book data available to a user such as contact information, group information, resource information, and similar data, are retrieved from a plurality of sources by a third party service. The retrieved data is customized for consistent presentation and provided to the user without the user's application having to communicate with individual data sources for retrieving the data. | 07-15-2010 |
20100185735 | EXTENSIBILITY FOR HOSTED MESSAGING SERVERS - Architecture for messaging server extensibility without the need to update or make changes to the messaging server by routing selected messages to a remote location for processing by custom code or third-party code. The messaging server routes the selected messages based on server analysis of the messages and in view of configuration data (or conditions) for routing messages. The remote location processes the message and can instruct the messaging server to accept, reject, or redirect the message. Additionally, the remote location can modify the message and instruct the messaging server to process the modified message. The hosted organization can configure triggers to have the messaging server call to a web service with the messages, which extends the functionality of the messaging server. | 07-22-2010 |
20100191810 | TRANSPORT HIGH AVAILABILITY FOR SIDE EFFECT MESSAGES - Architecture that protects side effect messages by associating the side effect messages with a primary (redundant) message that was received by a transport mechanism (e.g., a message transport agent). Side effect messages are considered “side effects” of a primary message that caused generation of the side effect messages. The primary message is only considered fully delivered after the primary message and all associated side effect messages are delivered, after which the source of the primary message is ACK'd (sent an “ACKnowledgement” message). Hence, in case of hardware failures after the primary message was delivered, but before delivery of side effect messages, the redundancy approach used triggers re-delivery of the primary message and re-generation and delivery of the side effect messages. | 07-29-2010 |
20100205257 | TRANSPORT HIGH AVAILABILITY VIA ACKNOWLEDGE MANAGEMENT - Architecture that facilitates transport high availability for messaging services by providing the ability of a receiving entity (e.g., receiving message transfer agent (MTA)) to detect whether a sending entity (e.g., sending MTA or client) is a legacy sending entity. The receiving entity detects a legacy system by advertising its transport high availability capability to the sending entity; if the sending entity does not opt in to this capability, the receiving entity keeps the sending entity “on hold”, that is, waiting for an acknowledgement (ACK), until the receiving entity delivers the message to the next hops (immediate destinations). This approach maintains at least two copies of the message until the message is successfully delivered to the next hop(s). Hence, if the legacy sending entity or the receiving entity fails, the message is still delivered successfully. | 08-12-2010 |
20100306535 | Business To Business Secure Mail - Business to business secure mail may be provided. Consistent with embodiments of the invention, a protected message may be received. The recipient may request a token from a trust broker, submit the token to an authorization server associated with the sender, receive a user license from the authorization server, and decrypt the protected message using the user license. The protected message may restrict actions that may be taken by the recipient, such as forwarding to other users. | 12-02-2010 |
20100325215 | MESSAGE REQUIREMENTS BASED ROUTING OF MESSAGES - Architecture for enabling messages to be routed between network servers based on message requirements related to version, capabilities, and features, for example. The message requirements designate delivery over a transport path compatible with the message requirements. The message requirements can include a particular version or other features related to different software applications that require compatibility in message handling. Routing information is maintained related to a transport server or other network transport entity compatible with the message requirements and through which the message can be routed. The message is routed to the compatible transport server for delivery to the destination while avoiding delivery to transport servers incompatible with the message requirements. | 12-23-2010 |
20110219387 | Interactive Remote Troubleshooting of a Running Process - A computing device includes a registered target software process including at least one software component configured to support functionality of the at least one target software and identifiable by a unique component identification parameter, and a first communication module configured to receive a data access request comprising a request to access internal process data of the at least one software component. The computing device also includes an access manager module linked to the at least one software component and the first communication module, the access manager being configured to receive the data access request from the first communication module and call an interface implementation of the software component that executes the targeted data access request and returns requested internal process data to the access manager, wherein the internal process data is retrieved as the at least one software component is executing on the computing device. | 09-08-2011 |
20110246824 | THROTTLING NON-DELIVERY REPORTS BASED ON ROOT CAUSE - Before a non-delivery report (NDR) is sent for a message that failed delivery, an attempt is made to determine the root cause of the failure. When a message fails without a known cause, the root cause is determined using the context of the message. For a given context, the root cause may be determined by a single failure or by the relative number of failed messages of the same context. While the root cause of the problem is being determined, any messages failing delivery are deferred, as is generation of the corresponding NDR(s), to allow time for corrective action to occur. If the problem is resolved within a predetermined time period, the deferred messages are delivered without NDR(s) having to be issued. | 10-06-2011 |
20120150964 | Using E-Mail Message Characteristics for Prioritization - Message prioritization may be provided. First, a message may be received and a priority level may be calculated for the message. If the message is not rejected for having a priority lower than a predetermined threshold, the message may be placed in a first priority queue. Next, the message may be de-queued from the first priority queue based upon the calculated priority level for the message. Distribution group recipients corresponding to the message may then be expanded and the priority level for the message may be re-calculated based upon the expanded distribution group recipients. Next, the message may be placed in a second priority queue. The message may then be de-queued from the second priority queue based upon the re-calculated priority level for the message and delivered. | 06-14-2012 |
20120159514 | CONDITIONAL DEFERRED QUEUING - Conditional deferred queuing may be provided. Upon receiving a message, one or more throttle conditions associated with the message may be identified. A lock associated with each throttle condition may be created on the message until that condition is satisfied. Then, the lock on the message may be removed and, once no locks remain, the message may be delivered. | 06-21-2012 |
20120185926 | Directory Driven Mailbox Migrations - An example method for migrating communication data from a source server to a target server includes obtaining, using a computing device, a set of credentials to access the source server, and accessing the source server using the set of credentials. The method also includes requesting, automatically by the computing device, a directory structure associated with communication data from the source server, populating, by the computing device, the target server using the directory structure, requesting the communication data from the source server, and populating the target server with the communication data. | 07-19-2012 |
20120290880 | Real-Time Diagnostics Pipeline for Large Scale Services - Real-time diagnostics may be provided. A plurality of data feeds may be aggregated from at least one of a plurality of nodes. Upon determining that at least one element of at least one of the data feeds meets a trigger condition, an action associated with the trigger condition may be executed. | 11-15-2012 |
20130103774 | TRANSPORT HIGH AVAILABILITY VIA ACKNOWLEDGE MANAGEMENT - Architecture that facilitates transport high availability for messaging services by providing the ability of a receiving entity (e.g., receiving message transfer agent (MTA)) to detect whether a sending entity (e.g., sending MTA or client) is a legacy sending entity. The receiving entity detects a legacy system by advertising its transport high availability capability to the sending entity; if the sending entity does not opt in to this capability, the receiving entity keeps the sending entity “on hold”, that is, waiting for an acknowledgement (ACK), until the receiving entity delivers the message to the next hops (immediate destinations). This approach maintains at least two copies of the message until the message is successfully delivered to the next hop(s). Hence, if the legacy sending entity or the receiving entity fails, the message is still delivered successfully. | 04-25-2013 |
20130152097 | Resource Health Based Scheduling of Workload Tasks - A computer-implemented method for allocating threads includes: receiving a registration of a workload, the registration including a workload classification and a workload priority; | 06-13-2013 |
20130159365 | Using Distributed Source Control in a Centralized Source Control Environment - A method is presented for using a distributed source control system with a centralized source control system. A first set of files is obtained from a source control repository and stored on a first electronic computing device. The first set of files comprises all or part of a code base in the centralized source control system. A request is received for at least part of the code base from a second electronic computing device in a distributed source control system. As a result of the request, at least a part of the first set of files is sent to the second electronic computing device. A change set for the first set of files is received from the second electronic computing device. The change set is processed to be in a format compatible with the source control repository. The change set is submitted to the source control repository. | 06-20-2013 |
20130185427 | TRAFFIC SHAPING BASED ON REQUEST RESOURCE USAGE - A current request for a server to perform work for a user profile can be received and processed at the server. It can be determined whether server usage by the profile exhibits a sufficient trend toward a threshold value to warrant performing traffic shaping for the user profile. If so, then a delay time can be calculated based on, or as a function of, server resources used in processing the current request, and a response to the current request can be delayed by the delay time. | 07-18-2013 |
20130197906 | TECHNIQUES TO NORMALIZE NAMES EFFICIENTLY FOR NAME-BASED SPEECH RECOGNITION GRAMMARS - Techniques to normalize names for name-based speech recognition grammars are described. Some embodiments are particularly directed to techniques to normalize names for name-based speech recognition grammars more efficiently by caching, and on a per-culture basis. A technique may comprise receiving a name for normalization during name processing for a name-based speech grammar generating process. A normalization cache may be examined to determine if the name is already in the cache in a normalized form. When the name is not already in the cache, the name may be normalized and added to the cache. When the name is in the cache, the normalization result may be retrieved and passed to the next processing step. Other embodiments are described and claimed. | 08-01-2013 |
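The two-stage prioritization described in application 20120150964 (queue by an initial priority, expand distribution group recipients, re-calculate, queue again) might be sketched as follows. This is a minimal illustration only: the scoring rules, the rejection threshold, and the `expand_group` callback are all invented here, since the abstract does not specify them.

```python
import heapq

REJECT_THRESHOLD = 0  # assumed: messages scoring below this are rejected outright

def initial_priority(message: dict) -> int:
    # Placeholder scoring rule based on message characteristics.
    return 10 if message.get("urgent") else 5

def recalculated_priority(message: dict, recipient_count: int) -> int:
    # Placeholder re-scoring: large expanded recipient lists lower priority.
    return initial_priority(message) - recipient_count // 10

def process(messages, expand_group):
    first_queue, second_queue, delivered = [], [], []
    # Stage 1: score each message and place it in the first priority queue.
    for i, msg in enumerate(messages):
        p = initial_priority(msg)
        if p < REJECT_THRESHOLD:
            continue  # rejected: priority below the predetermined threshold
        heapq.heappush(first_queue, (-p, i, msg))  # max-heap via negated priority
    # Stage 2: de-queue, expand distribution groups, re-score, re-queue.
    while first_queue:
        _, i, msg = heapq.heappop(first_queue)
        recipients = expand_group(msg["to"])
        heapq.heappush(second_queue, (-recalculated_priority(msg, len(recipients)), i, msg))
    # Final de-queue order is the delivery order.
    while second_queue:
        _, _, msg = heapq.heappop(second_queue)
        delivered.append(msg["id"])
    return delivered
```

An urgent message to a small group would be delivered ahead of a non-urgent message whose distribution group expands to many recipients.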
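The lock mechanism in application 20120159514 (a lock per throttle condition, delivery once every lock is released) might look like this in outline. The class and method names here are hypothetical, and real throttle conditions would be richer than the string labels used for illustration.

```python
class Message:
    def __init__(self, body: str):
        self.body = body
        self.locks: set[str] = set()  # one lock per unsatisfied throttle condition

class DeferredQueue:
    def __init__(self):
        self._deferred: list[Message] = []
        self.delivered: list[Message] = []

    def enqueue(self, message: Message, throttle_conditions: list[str]):
        # Identify throttle conditions and create a lock for each.
        message.locks.update(throttle_conditions)
        self._deferred.append(message)
        self._flush()

    def satisfy(self, condition: str):
        # A throttle condition has been met: release its lock on every message.
        for message in self._deferred:
            message.locks.discard(condition)
        self._flush()

    def _flush(self):
        # Deliver any message that no longer holds any lock.
        still_deferred = []
        for message in self._deferred:
            if message.locks:
                still_deferred.append(message)
            else:
                self.delivered.append(message)
        self._deferred = still_deferred
```

A message enqueued with an unsatisfied condition stays deferred; calling `satisfy` for that condition releases the lock and the message is delivered.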
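The per-culture normalization cache of application 20130197906 reduces repeated work during grammar generation. A minimal sketch follows; the `normalize_name` rule is a stand-in, since the abstract does not describe the actual normalization performed.

```python
def normalize_name(name: str) -> str:
    # Placeholder normalization: lowercase and collapse internal whitespace.
    return " ".join(name.lower().split())

class NormalizationCache:
    """Caches normalized names, keyed per culture."""

    def __init__(self):
        self._caches: dict[str, dict[str, str]] = {}

    def normalize(self, name: str, culture: str) -> str:
        cache = self._caches.setdefault(culture, {})
        if name not in cache:
            cache[name] = normalize_name(name)  # miss: normalize and store
        return cache[name]                      # hit: reuse the prior result
```

Keying by culture keeps culture-specific normalization results from colliding when the same spelling normalizes differently in different locales.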