Patent application number | Description | Published |
20080229197 | DYNAMIC AND INTELLIGENT HOVER ASSISTANCE - A method, system and article of manufacture for hover help management in data processing systems and, more particularly, for providing dynamic and intelligent hover assistance in graphical user interfaces. One embodiment provides a method of displaying hover assistance on a display screen. The method comprises moving a pointer element to a position over a user interface element shown on the display screen in response to user manipulation of a pointing device; while the pointer element is positioned over the user interface element, invoking a first hover element for display on the display screen; and invoking a second hover element for display on the display screen after invoking the first hover element, while the pointer element continues to be positioned over the user interface element. | 09-18-2008 |
20080256166 | Method for Inter-Site Data Stream Transfer in a Cooperative Data Stream Processing - A cooperative data stream processing system is provided that utilizes a plurality of independent, autonomous and potentially heterogeneous sites in a cooperative arrangement to process user-defined inquiries over dynamic, continuous streams of data. The system derives jobs from the inquiries and these jobs are executed on the various distributed sites by executing applications containing processing elements on these sites. Therefore, components of a given job can be executed simultaneously and in parallel on a plurality of sites within the system. The sites associated with a given job execution need to share data, both primal and derived. A tunnel mechanism is provided that establishes tunnels between pairs of sites within the system. Each tunnel includes either a sink processing element on an originating site and a source processing element on a destination site, or a gateway processing element on each site, and a network connection between the sink and source processing elements. The sink and source processing elements are in communication with application processing elements on their respective sites and facilitate the exchange of data between these application processing elements. Tunnels can be established on demand or in accordance with a prescribed plan and can be job specific or generic to any job executing on a given pair of sites. | 10-16-2008 |
20080256167 | Mechanism for Execution of Multi-Site Jobs in a Data Stream Processing System - A cooperative data stream processing system is provided that utilizes a plurality of independent, autonomous and possibly heterogeneous sites in a cooperative arrangement to process user-defined job requests over dynamic, continuous streams of data. A mechanism is provided for orchestrating the execution of distributed jobs across the plurality of distributed sites. A distributed plan is created that identifies the processing elements that constitute a job that is derived from user-defined inquiries. Within the distributed plan, these processing elements are arranged into subjobs that are mapped to various sites within the system for execution. The jobs are then executed across the plurality of distributed sites in accordance with the distributed plan. The distributed plan also includes requirements for monitoring of execution sites and providing for the back-up of the execution sites in the event of a failure on one of those sites. Execution of the jobs in accordance with the distributed plan is facilitated by the identification of an owner site to which the distributed plan is communicated and which is responsible for driving the execution of the distributed plan. | 10-16-2008 |
20080256253 | Method and Apparatus for Cooperative Data Stream Processing - A cooperative data stream processing system is provided that utilizes a plurality of independent, autonomous and possibly heterogeneous sites in a cooperative arrangement to process user-defined job requests over dynamic, continuous streams of data. The sites negotiate peering relationships to share data and processing resources to handle the submitted job requests. These peering relationships can be cooperative or federated and can be expressed using common interest policies. Each site within the system runs an instance of a system architecture for processing job requests and is therefore a self-contained, fully functional instance of the cooperative data stream processing system. | 10-16-2008 |
20080256548 | Method for the Interoperation of Virtual Organizations - A cooperative data stream processing system is provided that utilizes a plurality of independent, autonomous and possibly heterogeneous sites in a cooperative arrangement to process user-defined job requests over dynamic, continuous streams of data. A method is provided to organize the distributed sites into a plurality of virtual organizations that can be further combined and virtualized into virtualized virtual organizations. These virtualized virtual organizations can also include additional distributed sites and existing virtualized virtual organizations, and all members of a given virtualized virtual organization can share data and processing resources in order to process jobs on either a task-based or goal-based allocation mechanism. The virtualized virtual organization is created dynamically using ad-hoc collaborations among the members and is arranged in either a federated or cooperative architecture. Collaborations between members are either tightly coupled or loosely coupled. Flexible management of resources is provided, with resources being provided under exclusive control or based on best-effort access. | 10-16-2008 |
20090262656 | METHOD FOR NEW RESOURCE TO COMMUNICATE AND ACTIVATE MONITORING OF BEST PRACTICE METRICS AND THRESHOLDS VALUES - A method is provided for monitoring a resource by utilizing proxy metrics provided by a dependent resource. A primary resource is recognized by a dependent resource, where the dependent resource is dependent upon certain capabilities of the primary resource. Metrics of the primary resource on which the dependent resource depends are determined. Thresholds related to the metrics of the primary resource are determined. The dependent resource communicates the metrics and related thresholds to a central management tool. The metrics and related thresholds are monitored. Also, the dependent resource may act as a proxy for the primary resource, where the central management tool monitors the metrics and the related thresholds of the primary resource via the dependent resource. | 10-22-2009 |
20090299667 | Qualifying Data Produced By An Application Carried Out Using A Plurality Of Pluggable Processing Components - Methods, apparatus, and products are disclosed for qualifying data produced by an application carried out using a plurality of pluggable processing components. Qualifying data produced by the application includes: receiving, by an application manager, quality metrics for one of the pluggable processing components; determining, by the application manager, a component quality rating for the pluggable processing component in dependence upon the quality metrics; and assigning, by the application manager, a data quality rating to application data for the application in dependence upon the component quality rating for the pluggable processing component. | 12-03-2009 |
20090300154 | Managing performance of a job performed in a distributed computing system - Methods, systems, and products are disclosed for managing performance of a job performed in a distributed computing system, the distributed computing system comprising a plurality of compute nodes operatively coupled through a data communications network, the job carried out by a plurality of distributed pluggable processing components executing on the plurality of compute nodes, that include: identifying a current configuration of the pluggable processing components carrying out the job, the current configuration specifying a current distribution of the pluggable processing components among the compute nodes; identifying a network topology of the plurality of compute nodes in the data communications network; receiving a plurality of performance indicators produced during execution of the job; and redistributing, to a different compute node, at least one of the pluggable processing components in dependence upon the current configuration, the network topology, and the performance indicators. | 12-03-2009 |
20090300404 | Managing Execution Stability Of An Application Carried Out Using A Plurality Of Pluggable Processing Components - Methods, apparatus, and products are disclosed for managing execution stability of an application carried out using a plurality of pluggable processing components. Managing execution stability of an application includes: receiving, by an application manager, component stability metrics for a particular pluggable processing component; determining, by the application manager, that the particular pluggable processing component is unstable in dependence upon the component stability metrics for the particular pluggable processing component; and notifying, by the application manager, a system administrator that the particular pluggable processing component is unstable. | 12-03-2009 |
20090300624 | Tracking data processing in an application carried out on a distributed computing system - Methods, systems, and products are disclosed for tracking data processing in an application carried out on a distributed computing system, the distributed computing system including a plurality of computing nodes connected through a data communications network, the application carried out by a plurality of pluggable processing components installed on the plurality of computing nodes, the pluggable processing components including a pluggable processing provider component and a pluggable processing consumer component, that include: identifying, by the provider component, data satisfying predetermined processing criteria, the criteria specifying that the data is relevant to processing provided by the consumer component; passing, by the provider component, the data to the next pluggable processing component in the application for processing, including maintaining access to the data; receiving, by the consumer component, the data during execution of the application; and sending, by the consumer component, a receipt indicating that the consumer component received the data. | 12-03-2009 |
20090300625 | Managing The Performance Of An Application Carried Out Using A Plurality Of Pluggable Processing Components - Methods, apparatus, and products are disclosed for managing the performance of an application carried out using a plurality of pluggable processing components, the pluggable processing components executed on a plurality of compute nodes, that include: identifying a current configuration of the pluggable processing components for carrying out the application; receiving a plurality of performance indicators produced during execution of the pluggable processing components; and altering the current configuration of the pluggable processing components in dependence upon the performance indicators and one or more additional pluggable processing components. | 12-03-2009 |
20110302659 | DATA SECURITY IN A MULTI-NODAL ENVIRONMENT - A data security manager in a multi-nodal environment enforces processing constraints stored as security relationships that control how different pieces of a multi-nodal application (called execution units) are allowed to execute to ensure data security. The security manager preferably checks the security relationships for security violations when new execution units start execution, when data moves to or from an execution unit, and when an execution unit requests external services. Where the security manager determines there is a security violation based on the security relationships, the security manager may move, delay or kill an execution unit to maintain data security. | 12-08-2011 |
20110321056 | DYNAMIC RUN TIME ALLOCATION OF DISTRIBUTED JOBS - A job optimizer dynamically changes the allocation of processing units on a multi-nodal computer system. A distributed application is organized as a set of connected processing units. The arrangement of the processing units is dynamically changed at run time to optimize system resources and interprocess communication. A collector collects metrics of the system, nodes, application, jobs and processing units that will be used to determine how to best allocate the jobs on the system. A job optimizer analyzes the collected metrics to dynamically arrange the processing units within the jobs. The job optimizer may determine to combine multiple processing units into a job on a single node when there is an overutilization of interprocess communication between processing units. Alternatively, the job optimizer may determine to split a job's processing units into multiple jobs on different nodes where the processing units are overutilizing the resources on the node. | 12-29-2011 |
20120017218 | DYNAMIC RUN TIME ALLOCATION OF DISTRIBUTED JOBS WITH APPLICATION SPECIFIC METRICS - A job optimizer dynamically changes the allocation of processing units on a multi-nodal computer system. A distributed application is organized as a set of connected processing units. The arrangement of the processing units is dynamically changed at run time to optimize system resources and interprocess communication. A collector collects application specific metrics determined by application plug-ins. A job optimizer analyzes the collected metrics and determines how to dynamically arrange the processing units within the jobs. The job optimizer may determine to combine multiple processing units into a job on a single node when there is an overutilization of an interprocess communication between processing units. Alternatively, the job optimizer may determine to split a job's processing units into multiple jobs on different nodes where one or more of the processing units are overutilizing the resources on the node. | 01-19-2012 |
20120038667 | Replicating Changes Between Corresponding Objects - Embodiments of the invention generally relate to replicating changes between corresponding real objects and virtual objects in a virtual world. Embodiments of the invention may include receiving a request to generate a virtual item in a virtual world based on a real-world object, generating the virtual item, synchronizing the virtual item and real-world object, and sharing the virtual item with a second avatar in the virtual world. | 02-16-2012 |
20120047101 | DETECTING DISALLOWED COMBINATIONS OF DATA WITHIN A PROCESSING ELEMENT - Techniques are described for detecting disallowed combinations of data within a processing element. Embodiments of the invention may generally receive data to be processed using the processing element and determine whether the received data and a current working state violate one or more rules describing disallowed combinations of data. If a disallowed combination is detected, embodiments of the invention may handle the processing of the received data in an alternate way that prevents disallowed combinations of data within the processing element. | 02-23-2012 |
20120047505 | PREDICTIVE REMOVAL OF RUNTIME DATA USING ATTRIBUTE CHARACTERIZING - Techniques are described for selectively removing runtime data from a stream-based application in a manner that reduces the impact of any delay caused by the processing of the data in the stream-based application. In addition to removing the data from a primary processing path of the stream-based application, the data may be processed in an alternate manner, either using alternate processing resources, or by delaying the processing of the data. | 02-23-2012 |
20120110182 | DYNAMIC PROCESSING UNIT RELOCATION IN A MULTI-NODAL ENVIRONMENT BASED ON INCOMING PHYSICAL DATA - A relocation mechanism in a multi-nodal computer environment dynamically routes processing units in a distributed computer system based on incoming physical data into the processing unit. The relocation mechanism makes an initial location decision to place a processing unit onto a node in the distributed computer system. The relocation mechanism monitors physical data flowing into a processing unit or node and dynamically relocates the processing unit to another type of node within the ‘cloud’ of nodes based on the type of physical data or pattern of data flowing into the processing unit. The relocation mechanism may use one or more rules with criteria for different data types observed in the data flow to optimize when to relocate the processing units. | 05-03-2012 |
20120216014 | APPLYING ADVANCED ENERGY MANAGER IN A DISTRIBUTED ENVIRONMENT - Techniques are described for abating the negative effects of wait conditions in a distributed system by temporarily decreasing the execution time of processing elements. Embodiments of the invention may generally identify wait conditions from an operator graph and detect the slowest processing element preceding the wait condition based on either historical information or real-time data. Once identified, the slowest processing element may be sped up to lessen the negative consequences of the wait condition. Alternatively, if the slowest processing element shares the same compute node with another processing element in the distributed system, one of the processing elements may be transferred to a different compute node to free additional computing resources on the compute node. | 08-23-2012 |
20120310984 | DATA SECURITY FOR A DATABASE IN A MULTI-NODAL ENVIRONMENT - A security mechanism in a database management system enforces processing restrictions stored as metadata to control how different pieces of a multi-nodal application are allowed to access database data to provide data security. The security mechanism preferably checks the data security restrictions for security violations when an execution unit attempts to access the data to ensure the nodal conditions are appropriate for access. When the security mechanism determines there is a security violation by a query from an execution unit based on the security restrictions, the security mechanism may send, delay or retry to maintain data security. Nodal conditions herein include time restrictions and relationships with other columns, rows or pieces of information. For example, multiple processing units may be allowed to execute together, but the security mechanism would prohibit these processing units from accessing specific pieces of information at the same time through the use of metadata in the database. | 12-06-2012 |
20120311172 | OVERLOADING PROCESSING UNITS IN A DISTRIBUTED ENVIRONMENT - Techniques are disclosed for overloading, at one or more nodes, an output of data streams containing data tuples. A first plurality of tuples is received via a first data stream and a second plurality of tuples is received via a second data stream. A first value associated with the first data stream and a second value associated with the second data stream are established based on a specified metric. A third plurality of tuples is output based on the first value and the second value, wherein the third plurality of tuples is a subset of the first plurality of tuples and the second plurality of tuples. | 12-06-2012 |
20130031556 | DYNAMIC REDUCTION OF STREAM BACKPRESSURE - Techniques are described for eliminating backpressure in a distributed system by changing the rate data flows through a processing element. Backpressure occurs when data throughput in a processing element begins to decrease, for example, if new processing elements are added to the operator graph or if the distributed system is required to process more data. Indicators of backpressure (current or future) may be monitored. Once current backpressure or potential backpressure is identified, the operator graph or data rates may be altered to alleviate the backpressure. For example, a processing element may reduce the data rates it sends to processing elements that are downstream in the operator graph, or processing elements and/or data paths may be eliminated. In one embodiment, processing elements and associated data paths may be prioritized so that more important execution paths are maintained. In another embodiment, if a request to add one or more processing elements may cause future backpressure, the request may be refused. | 01-31-2013 |
20130067370 | SMART DISPLAY - A smart display allows a user to build custom layouts of user interface blocks on the smart display independent of the software on the computer creating the user interface. A customization mechanism in the smart display allows a user to select portions of a user interface and move them to different positions on the display. The customization mechanism creates custom layout metadata that defines a screen offset for portions of a user interface moved by the user. The smart display monitors the incoming display data and re-assigns pixel rendering data to the new location in the moved user interface blocks as the data coming from the computer application changes. | 03-14-2013 |
20130073824 | COPYING SEGMENTS OF A VIRTUAL RESOURCE DEFINITION - Segments of a virtual resource definition are copied from an existing virtual resource to create a new virtual resource definition or to modify an existing one, simplifying virtualization management. The virtualization manager divides a virtual resource definition into a number of reusable segments. A user may then select one or more segments and place them into a new or existing virtual resource definition. The user can choose to mix and match segments to quickly create or modify a virtual resource definition such as a virtual server, virtual printer or virtual data storage. Any default information in the new virtual resource or old information in the existing resource is replaced by the information in the copied segment. Any dependencies in the existing virtual resource are resolved with user input to break the dependencies or copy dependent data. | 03-21-2013 |
20130074071 | COPYING SEGMENTS OF A VIRTUAL RESOURCE DEFINITION - Segments of a virtual resource definition are copied from an existing virtual resource to create a new virtual resource definition or to modify an existing one, simplifying virtualization management. The virtualization manager divides a virtual resource definition into a number of reusable segments. A user may then select one or more segments and place them into a new or existing virtual resource definition. The user can choose to mix and match segments to quickly create or modify a virtual resource definition such as a virtual server, virtual printer or virtual data storage. Any default information in the new virtual resource or old information in the existing resource is replaced by the information in the copied segment. Any dependencies in the existing virtual resource are resolved with user input to break the dependencies or copy dependent data. | 03-21-2013 |
20130074146 | DATA SECURITY FOR A DATABASE IN A MULTI-NODAL ENVIRONMENT - A security mechanism in a database management system enforces processing restrictions stored as metadata to control how different pieces of a multi-nodal application are allowed to access database data to provide data security. The security mechanism preferably checks the data security restrictions for security violations when an execution unit attempts to access the data to ensure the nodal conditions are appropriate for access. When the security mechanism determines there is a security violation by a query from an execution unit based on the security restrictions, the security mechanism may send, delay or retry to maintain data security. Nodal conditions herein include time restrictions and relationships with other columns, rows or pieces of information. For example, multiple processing units may execute together, but the security mechanism would prohibit these processing units from accessing specific pieces of information at the same time through the use of metadata in the database. | 03-21-2013 |
20130074192 | DATA SECURITY IN A MULTI-NODAL ENVIRONMENT - A data security manager in a multi-nodal environment enforces processing constraints stored as security relationships that control how different pieces of a multi-nodal application (called execution units) are allowed to execute to ensure data security. The security manager preferably checks the security relationships for security violations when new execution units start execution, when data moves to or from an execution unit, and when an execution unit requests external services. Where the security manager determines there is a security violation based on the security relationships, the security manager may move, delay or kill an execution unit to maintain data security. | 03-21-2013 |
20130080654 | OVERLOADING PROCESSING UNITS IN A DISTRIBUTED ENVIRONMENT - Techniques are disclosed for overloading, at one or more nodes, an output of data streams containing data tuples. A first plurality of tuples is received via a first data stream and a second plurality of tuples is received via a second data stream. A first value associated with the first data stream and a second value associated with the second data stream are established based on a specified metric. A third plurality of tuples is output based on the first value and the second value, wherein the third plurality of tuples is a subset of the first plurality of tuples and the second plurality of tuples. | 03-28-2013 |
20130081042 | DYNAMIC REDUCTION OF STREAM BACKPRESSURE - Techniques are described for eliminating backpressure in a distributed system by changing the rate data flows through a processing element. Backpressure occurs when data throughput in a processing element begins to decrease, for example, if new processing elements are added to the operator graph or if the distributed system is required to process more data. Indicators of backpressure (current or future) may be monitored. Once current backpressure or potential backpressure is identified, the operator graph or data rates may be altered to alleviate the backpressure. For example, a processing element may reduce the data rates it sends to processing elements that are downstream in the operator graph, or processing elements and/or data paths may be eliminated. In one embodiment, processing elements and associated data paths may be prioritized so that more important execution paths are maintained. | 03-28-2013 |
20130097323 | DYNAMIC PROCESSING UNIT RELOCATION IN A MULTI-NODAL ENVIRONMENT BASED ON INCOMING PHYSICAL DATA - A relocation mechanism in a multi-nodal computer environment dynamically routes processing units in a distributed computer system based on incoming physical data into the processing unit. The relocation mechanism makes an initial location decision to place a processing unit onto a node in the distributed computer system. The relocation mechanism monitors physical data flowing into a processing unit or node and dynamically relocates the processing unit to another type of node within the ‘cloud’ of nodes based on the type of physical data or pattern of data flowing into the processing unit. The relocation mechanism may use one or more rules with criteria for different data types observed in the data flow to optimize when to relocate the processing units. | 04-18-2013 |
20130097612 | DYNAMIC RUN TIME ALLOCATION OF DISTRIBUTED JOBS WITH APPLICATION SPECIFIC METRICS - A job optimizer dynamically changes the allocation of processing units on a multi-nodal computer system. A distributed application is organized as a set of connected processing units. The arrangement of the processing units is dynamically changed at run time to optimize system resources and interprocess communication. A collector collects application specific metrics determined by application plug-ins. A job optimizer analyzes the collected metrics and determines how to dynamically arrange the processing units within the jobs. The job optimizer may determine to combine multiple processing units into a job on a single node when there is an overutilization of an interprocess communication between processing units. Alternatively, the job optimizer may determine to split a job's processing units into multiple jobs on different nodes where one or more of the processing units are overutilizing the resources on the node. | 04-18-2013 |
20130117699 | GRANTING OBJECT AUTHORITY VIA A MULTI-TOUCH SCREEN TO A COLLABORATOR - In an embodiment, in response to a gesture by an administrator, a security palette is created and displayed on a multi-touch screen. In response to a move by the administrator of a first icon to within the security palette, wherein the first icon represents a first object, a same authority that the administrator has to the first object is granted to the security palette. In response to a collaborator touching the security palette, the same authority to the first object is granted to the collaborator. | 05-09-2013 |
20130124446 | DETECTING DISALLOWED COMBINATIONS OF DATA WITHIN A PROCESSING ELEMENT - Techniques are described for detecting disallowed combinations of data within a processing element. Embodiments of the invention may generally receive data to be processed using the processing element and determine whether the received data and a current working state violate one or more rules describing disallowed combinations of data. If a disallowed combination is detected, embodiments of the invention may handle the processing of the received data in an alternate way that prevents disallowed combinations of data within the processing element. | 05-16-2013 |
20130124599 | DYNAMIC RESOURCE ADJUSTMENT FOR A DISTRIBUTED PROCESS ON A MULTI-NODE COMPUTER SYSTEM - A method dynamically adjusts the resources available to a processing unit of a distributed computer process executing on a multi-node computer system. The resources for the processing unit are adjusted based on the data other processing units handle or the execution path of code in an upstream or downstream processing unit in the distributed process or application. | 05-16-2013 |
20130124726 | DYNAMIC RESOURCE ADJUSTMENT FOR A DISTRIBUTED PROCESS ON A MULTI-NODE COMPUTER SYSTEM - A method dynamically adjusts the resources available to a processing unit of a distributed computer process executing on a multi-node computer system. The resources for the processing unit are adjusted based on the data other processing units handle or the execution path of code in an upstream or downstream processing unit in the distributed process or application. | 05-16-2013 |
20130139174 | DYNAMIC RUN TIME ALLOCATION OF DISTRIBUTED JOBS - A job optimizer dynamically changes the allocation of processing units on a multi-nodal computer system. A distributed application is organized as a set of connected processing units. The arrangement of the processing units is dynamically changed at run time to optimize system resources and interprocess communication. A collector collects metrics of the system, nodes, application, jobs and processing units that will be used to determine how to best allocate the jobs on the system. A job optimizer analyzes the collected metrics to dynamically arrange the processing units within the jobs. The job optimizer may determine to combine multiple processing units into a job on a single node when there is an overutilization of interprocess communication between processing units. Alternatively, the job optimizer may determine to split a job's processing units into multiple jobs on different nodes where the processing units are overutilizing the resources on the node. | 05-30-2013 |
20130160136 | DATA SECURITY IN A MULTI-NODAL ENVIRONMENT - A data security manager in a multi-nodal environment enforces processing constraints stored as security relationships that control how different pieces of a multi-nodal application (called execution units) are allowed to execute to ensure data security. The security manager preferably checks the security relationships for security violations when new execution units start execution, when data moves to or from an execution unit, and when an execution unit requests external services. Where the security manager determines there is a security violation based on the security relationships, the security manager may move, delay or kill an execution unit to maintain data security. | 06-20-2013 |
20130166617 | ENHANCED BARRIER OPERATOR WITHIN A STREAMING ENVIRONMENT - Techniques are described for processing data. Embodiments receive streaming data to be processed by a plurality of processing elements. An operator graph of the plurality of processing elements that defines at least one execution path is established. Additionally, a first processing element in the operator graph includes a barrier operator that joins the output of one or more upstream operators included in one or more of the plurality of processing elements. Embodiments initiate one or more timeout conditions at the barrier operator. Embodiments also determine, at the first processing element, that one or more timeout conditions have been satisfied before data has been received from each of the one or more upstream operators. Upon determining that the one or more timeout conditions have been satisfied, embodiments generate output data at the barrier operator without the data from at least one of the one or more upstream operators. | 06-27-2013 |
20130166618 | PREDICTIVE OPERATOR GRAPH ELEMENT PROCESSING - Techniques are described for predictively starting a processing element. Embodiments receive streaming data to be processed by a plurality of processing elements. An operator graph of the plurality of processing elements that defines at least one execution path is established. Embodiments determine a historical startup time for a first processing element in the operator graph, where, once started, the first processing element begins normal operations once the first processing element has received a requisite amount of data from one or more upstream processing elements. Additionally, embodiments determine an amount of time the first processing element takes to receive the requisite amount of data from the one or more upstream processing elements. The first processing element is then predictively started at a first startup time based on the determined historical startup time and the determined amount of time historically taken to receive the requisite amount of data. | 06-27-2013 |
20130166620 | ENHANCED BARRIER OPERATOR WITHIN A STREAMING ENVIRONMENT - Techniques are described for processing data. Embodiments receive streaming data to be processed by a plurality of processing elements. An operator graph of the plurality of processing elements that defines at least one execution path is established. Additionally, a first processing element in the operator graph includes a barrier operator that joins the output of one or more upstream operators included in one or more of the plurality of processing elements. Embodiments initiate one or more timeout conditions at the barrier operator. Embodiments also determine, at the first processing element, that one or more timeout conditions have been satisfied before data has been received from each of the one or more upstream operators. Upon determining that the one or more timeout conditions have been satisfied, embodiments generate output data at the barrier operator without the data from at least one of the one or more upstream operators. | 06-27-2013 |
20130166888 | PREDICTIVE OPERATOR GRAPH ELEMENT PROCESSING - Techniques are described for predictively starting a processing element. Embodiments receive streaming data to be processed by a plurality of processing elements. An operator graph of the plurality of processing elements that defines at least one execution path is established. Embodiments determine a historical startup time for a first processing element in the operator graph, where, once started, the first processing element begins normal operations once the first processing element has received a requisite amount of data from one or more upstream processing elements. Additionally, embodiments determine an amount of time the first processing element takes to receive the requisite amount of data from the one or more upstream processing elements. The first processing element is then predictively started at a first startup time based on the determined historical startup time and the determined amount of time historically taken to receive the requisite amount of data. | 06-27-2013 |
20130166942 | UNFUSING A FAILING PART OF AN OPERATOR GRAPH - Techniques for managing a fused processing element are described. Embodiments receive streaming data to be processed by a plurality of processing elements. Additionally, an operator graph of the plurality of processing elements is established. The operator graph defines at least one execution path, and at least one of the processing elements of the operator graph is configured to receive data from at least one upstream processing element and transmit data to at least one downstream processing element. Embodiments detect that an error condition has been satisfied at a first one of the plurality of processing elements, wherein the first processing element contains a plurality of fused operators. At least one of the plurality of fused operators is selected for removal from the first processing element. Embodiments then remove the selected at least one fused operator from the first processing element. | 06-27-2013 |
20130166948 | UNFUSING A FAILING PART OF AN OPERATOR GRAPH - Techniques for managing a fused processing element are described. Embodiments receive streaming data to be processed by a plurality of processing elements. Additionally, an operator graph of the plurality of processing elements is established. The operator graph defines at least one execution path, and at least one of the processing elements of the operator graph is configured to receive data from at least one upstream processing element and transmit data to at least one downstream processing element. Embodiments detect that an error condition has been satisfied at a first one of the plurality of processing elements, wherein the first processing element contains a plurality of fused operators. At least one of the plurality of fused operators is selected for removal from the first processing element. Embodiments then remove the selected at least one fused operator from the first processing element. | 06-27-2013 |
20130166961 | DETECTING AND RESOLVING ERRORS WITHIN AN APPLICATION - Techniques for managing errors within an application are provided. Embodiments monitor errors occurring in each of a plurality of portions of the application while the application is executing. An error occurring in a first one of the plurality of portions of the application is detected. Additionally, upon detecting the error occurring in the first portion, embodiments determine whether to prevent subsequent executions of the first portion of the application. | 06-27-2013 |
20130166962 | DETECTING AND RESOLVING ERRORS WITHIN AN APPLICATION - Techniques for managing errors within an application are provided. Embodiments monitor errors occurring in each of a plurality of portions of the application while the application is executing. An error occurring in a first one of the plurality of portions of the application is detected. Additionally, upon detecting the error occurring in the first portion, embodiments determine whether to prevent subsequent executions of the first portion of the application. | 06-27-2013 |
20130173636 | DETERMINING A SCORE FOR A PRODUCT BASED ON A LOCATION OF THE PRODUCT - A method, computer-readable storage medium, and computer system are provided. In an embodiment, a request is received from a requestor. The request specifies a search term and a plurality of weights of a plurality of criteria. A plurality of products are found that satisfy the search term. A plurality of locations where the plurality of products are located are determined. A plurality of scores of the plurality of locations are calculated based on the plurality of weights of the plurality of criteria and a plurality of ratings of the plurality of criteria at the plurality of locations. A best product of the plurality of products located at a best location with a best score of the plurality of scores is selected. In an embodiment, a supplier of the product that is not selected as the best product is notified of the score. | 07-04-2013 |
20130179585 | TRIGGERING WINDOW CONDITIONS BY STREAMING FEATURES OF AN OPERATOR GRAPH - In a stream computing application, data may be transmitted between operators using tuples. However, the receiving operator may not evaluate these tuples as they arrive but instead wait to evaluate a group of tuples—i.e., a window. A window is typically triggered when a buffer associated with the receiving operator reaches a maximum window size or when a predetermined time period has expired. Additionally, a window may be triggered by monitoring a tuple rate—i.e., the rate at which the operator receives the tuples. If the tuple rate exceeds or falls below a threshold, a window may be triggered. Further, the number of exceptions, or the rate at which an operator throws exceptions, may be monitored. If either of these parameters satisfies a threshold, a window may be triggered, thereby instructing an operator to evaluate the tuples contained within the window. | 07-11-2013 |
20130179586 | TRIGGERING WINDOW CONDITIONS USING EXCEPTION HANDLING - In a stream computing application, data may be transmitted between operators using tuples. However, the receiving operator may not evaluate these tuples as they arrive but instead wait to evaluate a group of tuples—i.e., a window. A window is typically triggered when a buffer associated with the receiving operator reaches a maximum window size or when a predetermined time period has expired. Additionally, a window may be triggered by monitoring a tuple rate—i.e., the rate at which the operator receives the tuples. If the tuple rate exceeds or falls below a threshold, a window may be triggered. Further, the number of exceptions, or the rate at which an operator throws exceptions, may be monitored. If either of these parameters satisfies a threshold, a window may be triggered, thereby instructing an operator to evaluate the tuples contained within the window. | 07-11-2013 |
20130179591 | TRIGGERING WINDOW CONDITIONS BY STREAMING FEATURES OF AN OPERATOR GRAPH - In a stream computing application, data may be transmitted between operators using tuples. However, the receiving operator may not evaluate these tuples as they arrive but instead wait to evaluate a group of tuples—i.e., a window. A window is typically triggered when a buffer associated with the receiving operator reaches a maximum window size or when a predetermined time period has expired. Additionally, a window may be triggered by monitoring a tuple rate—i.e., the rate at which the operator receives the tuples. If the tuple rate exceeds or falls below a threshold, a window may be triggered. Further, the number of exceptions, or the rate at which an operator throws exceptions, may be monitored. If either of these parameters satisfies a threshold, a window may be triggered, thereby instructing an operator to evaluate the tuples contained within the window. | 07-11-2013 |
20130179809 | SMART DISPLAY - A smart display allows a user to build custom layouts of user interface blocks on the smart display independent of the software on the computer creating the user interface. A customization mechanism in the smart display allows a user to select portions of a user interface and move them to different positions on the display. The customization mechanism creates custom layout metadata that defines a screen offset for portions of a user interface moved by the user. The smart display monitors the incoming display data and reassigns pixel rendering data to the new location in the moved user interface blocks as the data coming from the computer application changes. | 07-11-2013 |
20130198318 | PROCESSING ELEMENT MANAGEMENT IN A STREAMING DATA SYSTEM - Stream applications may inefficiently use the hardware resources that execute the processing elements of the data stream. For example, a compute node may host four processing elements and execute each using a CPU. However, other CPUs on the compute node may sit idle. To take advantage of these available hardware resources, a stream programmer may identify one or more processing elements that may be cloned. The cloned processing elements may be used to generate a different execution path that is parallel to the execution path that includes the original processing elements. Because the cloned processing elements contain the same operators as the original processing elements, the data stream that was previously flowing through only the original processing element may be split and sent through both the original and cloned processing elements. In this manner, the parallel execution path may use underutilized hardware resources to increase the throughput of the data stream. | 08-01-2013 |
20130198366 | DEPLOYING AN EXECUTABLE WITH HISTORICAL PERFORMANCE DATA - Techniques for incorporating performance data into an executable file for an application are described. Embodiments monitor performance of an application while the application is running. Additionally, historical execution characteristics of the application are determined based upon the monitored performance and one or more system characteristics of a node on which the application was executed. Embodiments also incorporate the historical execution characteristics into the executable file for the application, such that the historical execution characteristics can be used to manage subsequent executions of the application. | 08-01-2013 |
20130198371 | DEPLOYING AN EXECUTABLE WITH HISTORICAL PERFORMANCE DATA - Techniques for incorporating performance data into an executable file for an application are described. Embodiments monitor performance of an application while the application is running. Additionally, historical execution characteristics of the application are determined based upon the monitored performance and one or more system characteristics of a node on which the application was executed. Embodiments also incorporate the historical execution characteristics into the executable file for the application, such that the historical execution characteristics can be used to manage subsequent executions of the application. | 08-01-2013 |
20130198389 | DYNAMIC RESOURCE ADJUSTMENT FOR A DISTRIBUTED PROCESS ON A MULTI-NODE COMPUTER SYSTEM - A method dynamically adjusts the resources available to a processing unit of a distributed computer process executing on a multi-node computer system. The resources for the processing unit are adjusted based on the data other processing units handle or the execution path of code in an upstream or downstream processing unit in the distributed process or application. | 08-01-2013 |
20130198489 | PROCESSING ELEMENT MANAGEMENT IN A STREAMING DATA SYSTEM - Stream applications may inefficiently use the hardware resources that execute the processing elements of the data stream. For example, a compute node may host four processing elements and execute each using a CPU. However, other CPUs on the compute node may sit idle. To take advantage of these available hardware resources, a stream programmer may identify one or more processing elements that may be cloned. The cloned processing elements may be used to generate a different execution path that is parallel to the execution path that includes the original processing elements. Because the cloned processing elements contain the same operators as the original processing elements, the data stream that was previously flowing through only the original processing element may be split and sent through both the original and cloned processing elements. In this manner, the parallel execution path may use underutilized hardware resources to increase the throughput of the data stream. | 08-01-2013 |
20130254777 | DYNAMIC RUN TIME ALLOCATION OF DISTRIBUTED JOBS WITH APPLICATION SPECIFIC METRICS - A job optimizer dynamically changes the allocation of processing units on a multi-nodal computer system. A distributed application is organized as a set of connected processing units. The arrangement of the processing units is dynamically changed at run time to optimize system resources and interprocess communication. A collector collects application specific metrics determined by application plug-ins. A job optimizer analyzes the collected metrics and determines how to dynamically arrange the processing units within the jobs. The job optimizer may determine to combine multiple processing units into a job on a single node when there is an overutilization of interprocess communication between processing units. Alternatively, the job optimizer may determine to split a job's processing units into multiple jobs on different nodes where one or more of the processing units are overutilizing the resources on the node. | 09-26-2013 |
20130290394 | MONITORING STREAMS BUFFERING TO OPTIMIZE OPERATOR PROCESSING - Method, system and computer program product for performing an operation, including providing a plurality of processing elements comprising one or more operators, the operators configured to process streaming data tuples, establishing an operator graph of multiple operators, the operator graph defining at least one execution path in which a first operator is configured to receive data tuples from at least one upstream operator and transmit data tuples to at least one downstream operator, providing each operator a buffer configured to hold data tuples requiring processing by the respective operator, wherein the buffer is a first-in-first-out buffer, receiving a plurality of data tuples in a buffer associated with an operator, the data tuples comprising at least one attribute, selecting at least one data tuple from the buffer, examining an attribute of the selected data tuples to identify a candidate tuple, and performing a second operation on the candidate tuple. | 10-31-2013 |
20130290489 | MONITORING STREAMS BUFFERING TO OPTIMIZE OPERATOR PROCESSING - Method, system and computer program product for performing an operation, including providing a plurality of processing elements comprising one or more operators, the operators configured to process streaming data tuples, establishing an operator graph of multiple operators, the operator graph defining at least one execution path in which a first operator is configured to receive data tuples from at least one upstream operator and transmit data tuples to at least one downstream operator, providing each operator a buffer configured to hold data tuples requiring processing by the respective operator, wherein the buffer is a first-in-first-out buffer, receiving a plurality of data tuples in a buffer associated with an operator, the data tuples comprising at least one attribute, selecting at least one data tuple from the buffer, examining an attribute of the selected data tuples to identify a candidate tuple, and performing a second operation on the candidate tuple. | 10-31-2013 |
20130290966 | OPERATOR GRAPH CHANGES IN RESPONSE TO DYNAMIC CONNECTIONS IN STREAM COMPUTING APPLICATIONS - A stream computing application may permit one job to connect to a data stream of a different job. As more and more jobs dynamically connect to the data stream, the connections may have a negative impact on the performance of the job that generates the data stream. Accordingly, a variety of metrics and statistics (e.g., CPU utilization or tuple rate) may be monitored to determine if the dynamic connections are harming performance. If so, the stream computing system may be optimized to mitigate the effects of the dynamic connections. For example, particular operators may be unfused from a processing element and moved to a compute node that has available computing resources. Additionally, the stream computing application may clone the data stream in order to distribute the workload of transmitting the data stream to the connected jobs. | 10-31-2013 |
20130290969 | OPERATOR GRAPH CHANGES IN RESPONSE TO DYNAMIC CONNECTIONS IN STREAM COMPUTING APPLICATIONS - A stream computing application may permit one job to connect to a data stream of a different job. As more and more jobs dynamically connect to the data stream, the connections may have a negative impact on the performance of the job that generates the data stream. Accordingly, a variety of metrics and statistics (e.g., CPU utilization or tuple rate) may be monitored to determine if the dynamic connections are harming performance. If so, the stream computing system may be optimized to mitigate the effects of the dynamic connections. For example, particular operators may be unfused from a processing element and moved to a compute node that has available computing resources. Additionally, the stream computing application may clone the data stream in order to distribute the workload of transmitting the data stream to the connected jobs. | 10-31-2013 |
20130305032 | ANONYMIZATION OF DATA WITHIN A STREAMS ENVIRONMENT - Streams applications may decrypt encrypted data even though the decrypted data is not used by an operator. Operator properties are defined to permit decryption of data within the operator based on a number of criteria. By limiting the number of operators that decrypt encrypted data, the anonymous nature of the data is further preserved. Operator properties also indicate whether an operator should send encrypted or decrypted data to a downstream operator. | 11-14-2013 |
20130305034 | ANONYMIZATION OF DATA WITHIN A STREAMS ENVIRONMENT - Streams applications may decrypt encrypted data even though the decrypted data is not used by an operator. Operator properties are defined to permit decryption of data within the operator based on a number of criteria. By limiting the number of operators that decrypt encrypted data, the anonymous nature of the data is further preserved. Operator properties also indicate whether an operator should send encrypted or decrypted data to a downstream operator. | 11-14-2013 |
20130305225 | STREAMS DEBUGGING WITHIN A WINDOWING CONDITION - Method, system and computer program product for performing an operation, the operation including providing a plurality of processing elements comprising one or more operators, the operators configured to process streaming data tuples. The operation then establishes an operator graph of multiple operators, the operator graph defining at least one execution path in which a first operator of the plurality of operators is configured to receive data tuples from at least one upstream operator and transmit data tuples to at least one downstream operator. The operation then defines a breakpoint, the breakpoint comprising a condition, the condition based on attribute values of data tuples in a window of at least one operator, the window comprising a plurality of data tuples in an operator. The operation, upon detecting occurrence of the condition, triggers the breakpoint to halt processing by each of the plurality of operators in the operator graph. | 11-14-2013 |
20130305227 | STREAMS DEBUGGING WITHIN A WINDOWING CONDITION - Computer program product for performing an operation, the operation including providing a plurality of processing elements comprising one or more operators, the operators configured to process streaming data tuples. The operation then establishes an operator graph of multiple operators, the operator graph defining at least one execution path in which a first operator of the plurality of operators is configured to receive data tuples from at least one upstream operator and transmit data tuples to at least one downstream operator. The operation then defines a breakpoint, the breakpoint comprising a condition, the condition based on attribute values of data tuples in a window of at least one operator, the window comprising a plurality of data tuples in an operator. The operation, upon detecting occurrence of the condition, triggers the breakpoint to halt processing by each of the plurality of operators in the operator graph. | 11-14-2013 |
20140089351 | HANDLING OUT-OF-SEQUENCE DATA IN A STREAMING ENVIRONMENT - Computer-implemented method, system, and computer program product for processing data in an out-of-order manner in a streams computing environment. A windowing condition is defined such that incoming data tuples are processed within a specified time or count of each other. Additionally, the windowing condition may be based on a specified attribute of the data tuples. If the tuples are not processed within the constraints specified by the windowing condition, the unprocessed tuples may be discarded, i.e., not processed, to optimize operator performance. | 03-27-2014 |
20140089352 | HANDLING OUT-OF-SEQUENCE DATA IN A STREAMING ENVIRONMENT - Computer-implemented method, system, and computer program product for processing data in an out-of-order manner in a streams computing environment. A windowing condition is defined such that incoming data tuples are processed within a specified time or count of each other. Additionally, the windowing condition may be based on a specified attribute of the data tuples. If the tuples are not processed within the constraints specified by the windowing condition, the unprocessed tuples may be discarded, i.e., not processed, to optimize operator performance. | 03-27-2014 |
20140089373 | DYNAMIC STREAM PROCESSING WITHIN AN OPERATOR GRAPH - A method and system for processing a stream of tuples in a stream-based application is disclosed. The method may include a first stream operator determining whether a requirement to modify processing of a first tuple at a second stream operator exists. The method may provide for associating an indication to modify processing of the first tuple at the second stream operator if the requirement exists. | 03-27-2014 |
20140089929 | DYNAMIC STREAM PROCESSING WITHIN AN OPERATOR GRAPH - A method and system for processing a stream of tuples in a stream-based application is disclosed. The method may include a first stream operator determining whether a requirement to modify processing of a first tuple at a second stream operator exists. The method may provide for associating an indication to modify processing of the first tuple at the second stream operator if the requirement exists. | 03-27-2014 |
20140095503 | COMPILE-TIME GROUPING OF TUPLES IN A STREAMING APPLICATION - A system and a method for initializing a streaming application are disclosed. The method may include initializing a streaming application for execution on one or more compute nodes which are adapted to execute one or more stream operators. The method may, during a compiling of code, identify whether a processing condition exists at a first stream operator of a plurality of stream operators. The method may add a grouping condition to a second stream operator of the plurality of stream operators if the processing condition exists. The method may provide for the second stream operator to group tuples for sending to the first stream operator. | 04-03-2014 |
20140095506 | COMPILE-TIME GROUPING OF TUPLES IN A STREAMING APPLICATION - A system and a method for initializing a streaming application are disclosed. The method may include initializing a streaming application for execution on one or more compute nodes which are adapted to execute one or more stream operators. The method may, during a compiling of code, identify whether a processing condition exists at a first stream operator of a plurality of stream operators. The method may add a grouping condition to a second stream operator of the plurality of stream operators if the processing condition exists. The method may provide for the second stream operator to group tuples for sending to the first stream operator. | 04-03-2014 |
20140122557 | RUNTIME GROUPING OF TUPLES IN A STREAMING APPLICATION - A system and method for modifying the processing within a streaming application are disclosed. The method may include identifying a grouping location at which it may be possible to group tuples during the runtime execution of a streaming application. In some embodiments, this may include identifying locations at which a runtime grouping condition may be added to one or more stream operators without adversely affecting the performance of a streaming application. The method may add a runtime grouping condition to a processing location within the plurality of stream operators of a streaming application, in some embodiments. | 05-01-2014 |
20140122559 | RUNTIME GROUPING OF TUPLES IN A STREAMING APPLICATION - A system and method for modifying the processing within a streaming application are disclosed. The method may include identifying a grouping location at which it may be possible to group tuples during the runtime execution of a streaming application. In some embodiments, this may include identifying locations at which a runtime grouping condition may be added to one or more stream operators without adversely affecting the performance of a streaming application. The method may add a runtime grouping condition to a processing location within the plurality of stream operators of a streaming application, in some embodiments. | 05-01-2014 |
20140136175 | IDENTIFYING AND ROUTING POISON TUPLES IN A STREAMING APPLICATION - A method for processing a stream of tuples may comprise receiving a stream of tuples to be processed by a plurality of processing elements operating on one or more computer processors. In addition, the method may include generating a model of performance for processing the stream of tuples at runtime, wherein one or more tuples from the stream of tuples potentially cause adverse performance. Further, the method may comprise predicting a parameter for a tuple from the stream of tuples, the parameter indicating a potential for adverse performance, the predicting including using the model. The method may also include modifying processing of the tuple if the parameter falls outside a threshold. | 05-15-2014 |
20140136176 | IDENTIFYING AND ROUTING POISON TUPLES IN A STREAMING APPLICATION - A method for processing a stream of tuples may comprise receiving a stream of tuples to be processed by a plurality of processing elements operating on one or more computer processors. In addition, the method may include generating a model of performance for processing the stream of tuples at runtime, wherein one or more tuples from the stream of tuples potentially cause adverse performance. Further, the method may comprise predicting a parameter for a tuple from the stream of tuples, the parameter indicating a potential for adverse performance, the predicting including using the model. The method may also include modifying processing of the tuple if the parameter falls outside a threshold. | 05-15-2014 |
20140136723 | STREAMS OPTIONAL EXECUTION PATHS DEPENDING UPON DATA RATES - Processing elements in a streaming application may contain one or more optional code modules—i.e., computer-executable code that is executed only if one or more conditions are met. In one embodiment, an optional code module is executed based on evaluating data flow rate between components in the streaming application. As an example, the stream computing application may monitor the incoming data rate between processing elements and select which optional code module to execute based on this rate. For example, if the data rate is high, the stream computing application may choose an optional code module that takes less time to execute. Alternatively, a high data rate may indicate that the incoming data is important; thus, the streaming application may choose an optional code module containing a more rigorous data processing algorithm, even if this algorithm takes more time to execute. | 05-15-2014 |
20140136724 | STREAMS OPTIONAL EXECUTION PATHS DEPENDING UPON DATA RATES - Processing elements in a streaming application may contain one or more optional code modules—i.e., computer-executable code that is executed only if one or more conditions are met. In one embodiment, an optional code module is executed based on evaluating data flow rate between components in the streaming application. As an example, the stream computing application may monitor the incoming data rate between processing elements and select which optional code module to execute based on this rate. For example, if the data rate is high, the stream computing application may choose an optional code module that takes less time to execute. Alternatively, a high data rate may indicate that the incoming data is important; thus, the streaming application may choose an optional code module containing a more rigorous data processing algorithm, even if this algorithm takes more time to execute. | 05-15-2014 |
20140164355 | TUPLE ROUTING IN A STREAMING APPLICATION - A system and method for modifying the processing within a streaming application are disclosed. The method may determine one or more parameters for a tuple at a first stream operator. The one or more parameters may represent a processing history of the tuple at the first stream operator. The method may associate the one or more parameters with the tuple metadata. A second stream operator may modify the processing of the tuple if the parameter falls outside a threshold. | 06-12-2014 |
20140164356 | TUPLE ROUTING IN A STREAMING APPLICATION - A system and method for modifying the processing within a streaming application are disclosed. The method may determine one or more parameters for a tuple at a first stream operator. The one or more parameters may represent a processing history of the tuple at the first stream operator. The method may associate the one or more parameters with the tuple metadata. A second stream operator may modify the processing of the tuple if the parameter falls outside a threshold. | 06-12-2014 |
20140164374 | STREAMING DATA PATTERN RECOGNITION AND PROCESSING - When processing data tuples, operators of a streaming application may identify certain tuples as being relevant. To determine relevant tuples, the operators may, for example, process the received tuples and determine if they meet certain thresholds. If so, the tuples are deemed relevant, but if not they are characterized as irrelevant. The streaming application may use a pattern detector to parse the relevant data tuples to identify a pattern, such as a shared trait between the tuples. Based on this commonality, the pattern detector may generate filtering criteria that may be used to process subsequently received tuples. In one embodiment, the filtering criteria identified by one operator is transmitted to a second operator to be used to process tuples received there. Thus, once one of the operators determines a pattern, the operator generates filtering criteria that another, related operator uses for filtering received tuples. | 06-12-2014 |
20140164434 | STREAMING DATA PATTERN RECOGNITION AND PROCESSING - When processing data tuples, operators of a streaming application may identify certain tuples as being relevant. To determine relevant tuples, the operators may, for example, process the received tuples and determine if they meet certain thresholds. If so, the tuples are deemed relevant, but if not they are characterized as irrelevant. The streaming application may use a pattern detector to parse the relevant data tuples to identify a pattern, such as a shared trait between the tuples. Based on this commonality, the pattern detector may generate filtering criteria that may be used to process subsequently received tuples. In one embodiment, the filtering criteria identified by one operator is transmitted to a second operator to be used to process tuples received there. Thus, once one of the operators determines a pattern, the operator generates filtering criteria that another, related operator uses for filtering received tuples. | 06-12-2014 |
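The pattern-detection step in the pair above—deriving filtering criteria from a shared trait among relevant tuples—could be sketched as follows (a minimal sketch assuming the "shared trait" is simply the most common value of one attribute; all names are hypothetical):

```python
from collections import Counter


def detect_shared_trait(relevant_tuples, key):
    """Find the attribute value most relevant tuples share and return it
    as a filtering predicate another operator can apply to its own input."""
    counts = Counter(t[key] for t in relevant_tuples)
    value, _ = counts.most_common(1)[0]
    return lambda t: t.get(key) == value  # filtering criterion for downstream use
```

The returned predicate is what one operator would transmit (in some serialized form) to a second, related operator for filtering the tuples received there.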
20140164601 | MANAGEMENT OF STREAM OPERATORS WITH DYNAMIC CONNECTIONS - One embodiment is directed to a method for processing a stream of tuples in a stream-based application. A stream operator may receive a stream of tuples. A stream manager may determine whether a dynamic connection exists at a first stream operator. The dynamic connection may connect the first stream operator to a second stream operator. The stream manager may poll the first stream operator and the second stream operator for a presence of the dynamic connection. The stream manager may modify processing of one or more upstream stream operators in response to a change in use of the dynamic connection. | 06-12-2014 |
20140164628 | MANAGEMENT OF STREAM OPERATORS WITH DYNAMIC CONNECTIONS - One embodiment is directed to a method for processing a stream of tuples in a stream-based application. A stream operator may receive a stream of tuples. A stream manager may determine whether a dynamic connection exists at a first stream operator. The dynamic connection may connect the first stream operator to a second stream operator. The stream manager may poll the first stream operator and the second stream operator for a presence of the dynamic connection. The stream manager may modify processing of one or more upstream stream operators in response to a change in use of the dynamic connection. | 06-12-2014 |
20140201648 | DISPLAYING HOTSPOTS IN RESPONSE TO MOVEMENT OF ICONS - In an embodiment, in response to selection of a content icon on a user I/O device, a plurality of candidate recipients of content are determined. In response to movement of the content icon on the user I/O device, a plurality of hotspots are displayed on the user I/O device that represent the plurality of candidate recipients. In response to movement of the content icon over a first hotspot of the plurality of hotspots, content and an identifier of an application that created the content are sent to a target device used by a first candidate recipient represented by the first hotspot. | 07-17-2014 |
20140215165 | MEMORY MANAGEMENT IN A STREAMING APPLICATION - One embodiment is directed to a method for processing a stream of tuples. The method may include receiving a stream of tuples to be processed by a plurality of processing elements operating on one or more computer processors. Each of the processing elements has an associated memory space. In addition, the method may include monitoring the plurality of processing elements. The monitoring may include identifying a first performance metric for a first processing element. The method may include modifying the first processing element based on the first performance metric. The modifying of the first processing element may include employing memory management of the associated memory space. | 07-31-2014 |
20140215184 | MEMORY MANAGEMENT IN A STREAMING APPLICATION - One embodiment is directed to a method for processing a stream of tuples. The method may include receiving a stream of tuples to be processed by a plurality of processing elements operating on one or more computer processors. Each of the processing elements has an associated memory space. In addition, the method may include monitoring the plurality of processing elements. The monitoring may include identifying a first performance metric for a first processing element. The method may include modifying the first processing element based on the first performance metric. The modifying of the first processing element may include employing memory management of the associated memory space. | 07-31-2014 |
20140236920 | STREAMING DELAY PATTERNS IN A STREAMING ENVIRONMENT - The method and system receive streaming data to be processed by a plurality of processing elements comprising one or more stream operators. One embodiment is directed to a method and a system for managing processing in a streaming application. A stream operator may select a delay pattern. The stream operator may compare one or more performance factors from the delay pattern to one or more optimal performance factors. The stream operator may delay the stream of tuples using the delay pattern if the performance factors deviate from the optimal performance factors. | 08-21-2014 |
20140237134 | STREAMING DELAY PATTERNS IN A STREAMING ENVIRONMENT - The method and system receive streaming data to be processed by a plurality of processing elements comprising one or more stream operators. One embodiment is directed to a method and a system for managing processing in a streaming application. A stream operator may select a delay pattern. The stream operator may compare one or more performance factors from the delay pattern to one or more optimal performance factors. The stream operator may delay the stream of tuples using the delay pattern if the performance factors deviate from the optimal performance factors. | 08-21-2014 |
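One plausible reading of the comparison step in the two delay-pattern applications—delay when observed performance factors drift outside a band around the optimal ones—can be sketched like this (the tolerance, the factor names, and the formula itself are assumptions; the abstracts do not specify them):

```python
def should_delay(observed, optimal, tolerance=0.1):
    """Return True when any observed performance factor deviates from its
    optimal value by more than the given relative tolerance, signaling that
    the stream operator should apply its selected delay pattern."""
    return any(
        abs(observed[name] - optimal[name]) > tolerance * optimal[name]
        for name in optimal
    )
```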
20140258290 | PROCESSING CONTROL IN A STREAMING APPLICATION - A method, system, and computer program product for processing a stream of tuples are disclosed. The method, system, and computer program product may include receiving a stream of tuples to be processed by a plurality of processing elements. Each tuple may have an associated processing history. The stream of tuples may be segmented into a plurality of partitions, each representing a subset of the stream of tuples. The method, system, and computer program product may include estimating the contribution each partition will have on a particular processing result and processing a partition if it substantially contributes to the particular processing result. | 09-11-2014 |
20140258291 | PROCESSING CONTROL IN A STREAMING APPLICATION - A method, system, and computer program product for processing a stream of tuples are disclosed. The method, system, and computer program product may include receiving a stream of tuples to be processed by a plurality of processing elements. Each tuple may have an associated processing history. The stream of tuples may be segmented into a plurality of partitions, each representing a subset of the stream of tuples. The method, system, and computer program product may include estimating the contribution each partition will have on a particular processing result and processing a partition if it substantially contributes to the particular processing result. | 09-11-2014 |
20140278337 | SELECTING AN OPERATOR GRAPH CONFIGURATION FOR A STREAM-BASED COMPUTING APPLICATION - First and second simulated processing of a stream-based computing application using respective first and second simulation conditions may be performed. The first and second simulation conditions may specify first and second operator graph configurations. Each simulated processing may include inputting a stream of test tuples to the stream-based computing application, which may operate on one or more compute nodes. Each compute node may have one or more computer processors and a memory to store one or more processing elements. Each simulated processing may be monitored to determine one or more performance metrics. The first and second simulated processings may be sorted based on a first performance metric to identify a simulated processing having a first rank. An operator graph configuration associated with the simulated processing having the first rank may be selected if the first performance metric for the simulated processing having the first rank is within a processing constraint. | 09-18-2014 |
20140279965 | COMPRESSING TUPLES IN A STREAMING APPLICATION - A method, system, and computer program product to process data in a streaming application are disclosed. The method, system, and computer program product may include receiving a stream of tuples to be processed by a plurality of processing elements operating on a plurality of compute nodes. The method, system, and computer program product may determine whether a first processing element has additional processing capacity. In some embodiments, the method, system, and computer program product determine whether a second processing element, which receives its input from the first processing element, also has additional processing capacity. The method, system, and computer program product may employ compression at the first processing element if one of the first and the second processing element has additional processing capacity. | 09-18-2014 |
20140279968 | COMPRESSING TUPLES IN A STREAMING APPLICATION - A method, system, and computer program product to process data in a streaming application are disclosed. The method, system, and computer program product may include receiving a stream of tuples to be processed by a plurality of processing elements operating on a plurality of compute nodes. The method, system, and computer program product may determine whether a first processing element has additional processing capacity. In some embodiments, the method, system, and computer program product determine whether a second processing element, which receives its input from the first processing element, also has additional processing capacity. The method, system, and computer program product may employ compression at the first processing element if one of the first and the second processing element has additional processing capacity. | 09-18-2014 |
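The capacity-gated compression described in the pair above—compress between two processing elements only when one of them has spare capacity to pay for the (de)compression work—might be sketched as follows (Python's `zlib` stands in for whatever codec an implementation would use; function and parameter names are hypothetical):

```python
import json
import zlib


def maybe_compress(payload, first_has_capacity, second_has_capacity):
    """Serialize a tuple and compress it only if the sending or the
    receiving processing element reports additional processing capacity.
    Returns (was_compressed, bytes)."""
    raw = json.dumps(payload).encode()
    if first_has_capacity or second_has_capacity:
        return True, zlib.compress(raw)
    return False, raw  # no spare capacity: send uncompressed
```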
20140280128 | ENDING TUPLE PROCESSING IN A STREAM-BASED COMPUTING APPLICATION - A method includes receiving streaming data to be processed by a plurality of processing elements comprising one or more stream operators. Time metadata may be added to a parent tuple at a first stream operator. A first time metric may be determined for a first child tuple of the parent tuple at a second stream operator. The first time metric may be determined, at least in part, from the time metadata. The second stream operator may receive the first child tuple from the first stream operator. The method may include transmitting a second child tuple of the parent tuple from the second stream operator to a third stream operator if the time metric is inside a time limit. In addition, the method may include ending processing of the first child tuple if the time metric is outside of the time limit. | 09-18-2014 |
20140280895 | EVALUATING A STREAM-BASED COMPUTING APPLICATION - A method for evaluating a stream-based computing application includes specifying a simulation condition. In addition, a stream of test tuples may be input to the stream-based computing application. The stream-based computing application may operate on one or more compute nodes. Each compute node may have one or more computer processors and a memory to store one or more processing elements. The method may also include simulating processing of the stream of test tuples by the processing elements using the simulation condition. Further, the method may include monitoring to determine one or more performance metrics for an inter-stream operator communication path. | 09-18-2014 |
20140289186 | MANAGING ATTRIBUTES IN STREAM PROCESSING USING A CACHE - A method and system for managing attributes in a streaming application is disclosed. The system may contain a receiving stream operator that is communicatively coupled with a stream manager. The receiving stream operator may have a capability of storing a selected attribute and creating one or more unique identifiers. The system may contain a cache communicatively coupled with one or more stream operators. The cache may have a capability of storing the selected attributes. The system may also have a retrieving stream operator communicatively coupled with the stream manager. The retrieving stream operator may have a capability of using the unique identifier to access the selected attribute. | 09-25-2014 |
20140289240 | MANAGING ATTRIBUTES IN STREAM PROCESSING - A method and system for managing attributes in a streaming application is disclosed. The system may have a stream manager communicatively coupled with processing elements for tracking a stream of tuples. The system may also have a first stream operator communicatively coupled with the stream manager and capable of receiving the stream of tuples, wherein the first stream operator selects the selected attribute of the first tuple and assigns a first identifier to the selected attribute. The system may also have a second stream operator communicatively coupled with the stream manager and capable of receiving the stream of tuples, and capable of replacing the selected attribute in the second tuple with a second identifier provided by the first stream operator. The system may also have an identifier table communicatively coupled with the stream manager and the first and second stream operator, wherein the identifier table includes identifiers for selected tuples. | 09-25-2014 |
20140317148 | RECONFIGURING AN OPERATOR GRAPH BASED ON ATTRIBUTE USAGE - A first processing element may be initially configured to transmit a first output stream to a second processing element. The second processing element may be initially configured to transmit a second output stream to a third processing element. The tuples of the first and second output streams may have the first and second attributes. It may be determined whether the first attribute is to be first processed at the second processing element (first condition) and whether the second attribute is to be first processed at the third processing element (second condition). When the first and second conditions are met, the first processing element may be reconfigured to transmit a third output stream to the second processing element and a fourth output stream to the third processing element. The third output stream may have only the first attribute. The fourth output stream may have only the second attribute. | 10-23-2014 |
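The reconfiguration above—splitting one output stream into per-attribute streams so each downstream processing element receives only the attribute it processes first—can be reduced to a small sketch (illustrative only; real reconfiguration happens at the operator-graph level, not on in-memory lists):

```python
def split_streams(tuples, attrs):
    """Project a stream carrying several attributes into one stream per
    attribute, so each downstream element gets only what it needs."""
    return {a: [{a: t[a]} for t in tuples] for a in attrs}
```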
20140317150 | EXITING WINDOWING EARLY FOR STREAM COMPUTING - Two or more tuples to be processed by a processing element operating on one or more computer processors may be received by the processing element. The processing element may have a windowing operator performing a windowing operation to determine a first value at the conclusion of a windowing condition. It may be determined from one or more tuples received within the windowing condition whether a condition to end the windowing operation before the windowing condition concludes is met. In addition, the windowing operation may be ended before the windowing condition concludes when the condition to end the windowing operation is met. | 10-23-2014 |
20140317151 | EXITING WINDOWING EARLY FOR STREAM COMPUTING - Two or more tuples to be processed by a processing element operating on one or more computer processors may be received by the processing element. The processing element may have a windowing operator performing a windowing operation to determine a first value at the conclusion of a windowing condition. It may be determined from one or more tuples received within the windowing condition whether a condition to end the windowing operation before the windowing condition concludes is met. In addition, the windowing operation may be ended before the windowing condition concludes when the condition to end the windowing operation is met. | 10-23-2014 |
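The early-exit windowing in the pair above—end the windowing operation before the windowing condition concludes once some condition on the tuples already received is met—could look like this sketch (a count-based window and a caller-supplied exit predicate are assumptions for illustration):

```python
def windowed_sum(values, window_size, early_exit):
    """Accumulate up to window_size values, but end the windowing
    operation early as soon as early_exit(total) is satisfied."""
    total, count = 0, 0
    for value in values:
        total += value
        count += 1
        if early_exit(total) or count == window_size:
            break  # window concluded, possibly early
    return total
```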
20140317304 | RUNTIME TUPLE ATTRIBUTE COMPRESSION - A method, system, and computer program product for initializing a stream computing application are disclosed. The method may include receiving a plurality of tuples to be processed by one or more processing elements operating on one or more computer processors. Each processing element may have one or more stream operators. The method may also include determining a first attribute to be processed at a first stream operator that is configured to transmit a tuple having the first attribute along an execution path including at least one intervening stream operator to a second stream operator. The method may include compressing the first attribute when the first attribute is to be next processed by the second stream operator. | 10-23-2014 |
20140317305 | COMPILE-TIME TUPLE ATTRIBUTE COMPRESSION - A method, system, and computer program product for initializing a stream computing application are disclosed. The method may include, during a compiling of code, determining whether an attribute of a tuple to be processed at a first stream operator is to be next processed at a second stream operator. The first stream operator may be configured to transmit the tuple along an execution path to the second stream operator. The execution path includes one or more intervening stream operators between the first and second stream operators. The method may invoke a compression condition when the first attribute of the tuple to be processed at the first stream operator is to be next processed at the second stream operator. | 10-23-2014 |
20140365612 | MONITORING SIMILAR DATA IN STREAM COMPUTING - A method, system, and computer program product for monitoring similar data in stream computing are disclosed. The method may include, monitoring at least one input stream of tuples to be processed by an application. The application may comprise one or more processing elements operating on one or more computer processors and each tuple is an instance of data. The method may also include, identifying a first tuple in the input stream and the first tuple is a first instance of first data. Also, the method may include, identifying a second tuple in the input stream and the second tuple is a second instance of first data. Furthermore, the method may include, determining that the second tuple satisfies criteria for superseding the first tuple and eliminating the first tuple from the application. | 12-11-2014 |
20140365614 | MONITORING SIMILAR DATA IN STREAM COMPUTING - A method, system, and computer program product for monitoring similar data in stream computing are disclosed. The method may include, monitoring at least one input stream of tuples to be processed by an application. The application may comprise one or more processing elements operating on one or more computer processors and each tuple is an instance of data. The method may also include, identifying a first tuple in the input stream and the first tuple is a first instance of first data. Also, the method may include, identifying a second tuple in the input stream and the second tuple is a second instance of first data. Furthermore, the method may include, determining that the second tuple satisfies criteria for superseding the first tuple and eliminating the first tuple from the application. | 12-11-2014 |
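A minimal sketch of the superseding behavior in the pair above, under the simplifying assumption that the "criteria for superseding" is just that a newer instance of the same data replaces the earlier one (the applications leave the criteria open):

```python
def dedupe_superseding(stream, key):
    """Eliminate a tuple from the application when a later tuple carrying
    the same data (identified here by one key attribute) supersedes it."""
    latest = {}
    for t in stream:
        latest[t[key]] = t  # later arrival replaces the earlier instance
    return list(latest.values())
```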
20140372431 | GENERATING DIFFERENCES FOR TUPLE ATTRIBUTES - A sequence of tuples, each having one or more attributes, is received at one of one or more processing elements operating on one or more processors. Each processing element may have one or more stream operators. A first stream operator may be identified as one that only processes an instance of a first attribute in a currently received tuple when a difference between an instance of the first attribute in a previously received tuple and the instance of the first attribute in the currently received tuple is outside of a difference threshold. A second stream operator may generate a difference attribute from a first instance of the first attribute in a first one of the received tuples and a second instance of the first attribute in a second one of the received tuples. The difference attribute may be transmitted from the second stream operator to the first stream operator. | 12-18-2014 |
20140373019 | GENERATING DIFFERENCES FOR TUPLE ATTRIBUTES - A sequence of tuples, each having one or more attributes, is received at one of one or more processing elements operating on one or more processors. Each processing element may have one or more stream operators. A first stream operator may be identified as one that only processes an instance of a first attribute in a currently received tuple when a difference between an instance of the first attribute in a previously received tuple and the instance of the first attribute in the currently received tuple is outside of a difference threshold. A second stream operator may generate a difference attribute from a first instance of the first attribute in a first one of the received tuples and a second instance of the first attribute in a second one of the received tuples. The difference attribute may be transmitted from the second stream operator to the first stream operator. | 12-18-2014 |
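The difference-attribute scheme in the pair above—downstream only processes an attribute instance whose delta from the previous instance falls outside a threshold—can be sketched for a single numeric attribute (the tuple-of-(delta, flag) output shape is an illustrative choice, not the claimed interface):

```python
def diff_attributes(values, threshold):
    """For each attribute instance, emit (difference, process?) where the
    difference is relative to the previous instance and process? tells the
    downstream operator whether the delta exceeds the threshold."""
    out, prev = [], None
    for v in values:
        delta = v if prev is None else v - prev
        process = prev is None or abs(delta) > threshold
        out.append((delta, process))
        prev = v
    return out
```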
20140379711 | MANAGING PASSTHRU CONNECTIONS ON AN OPERATOR GRAPH - Embodiments of the disclosure provide a method, system, and computer program product for processing data such as a stream of tuples. Each tuple can contain one or more attributes. The method can include processing the attributes of the stream of tuples using stream operators operating on one or more computer processors and corresponding to one or more processing elements. The method can also include detecting an indicative element from a plurality of stream operators. The method can also include transmitting, in response to detecting the indicative element, a passthru command to a processing element corresponding to the indicative element. The method can also include altering, in response to receiving the passthru command at the processing element, a portion of attribute processing for the indicative element. | 12-25-2014 |
20150074108 | ENDING TUPLE PROCESSING IN A STREAM-BASED COMPUTING APPLICATION - A method includes receiving streaming data to be processed by a plurality of processing elements comprising one or more stream operators. Time metadata may be added to a parent tuple at a first stream operator. A first time metric may be determined for a first child tuple of the parent tuple at a second stream operator. The first time metric may be determined, at least in part, from the time metadata. The second stream operator may receive the first child tuple from the first stream operator. The method may include transmitting a second child tuple of the parent tuple from the second stream operator to a third stream operator if the time metric is inside a time limit. In addition, the method may include ending processing of the first child tuple if the time metric is outside of the time limit. | 03-12-2015 |
20150081693 | MANAGING DATA PATHS IN AN OPERATOR GRAPH - Embodiments of the disclosure provide a method and system for processing data such as a stream of tuples. The method can include receiving the stream of tuples to be processed by a plurality of stream operators operating on one or more computer processors. The method can include creating an overflow path that includes at least one stream operator that performs processing duplicative to at least one stream operator from the plurality of stream operators. The method can include monitoring a stream operator for a triggering condition. The method can include identifying a tuple from the stream of tuples to process on the overflow path. The method can include processing, on the overflow path, the identified tuple from the stream of tuples in response to the presence of the triggering condition. | 03-19-2015 |
20150081707 | MANAGING A GROUPING WINDOW ON AN OPERATOR GRAPH - Embodiments of the disclosure provide a method, system, and computer program product for managing a windowing operation. The method can include determining a sentinel value that defines a start of a grouping window for a stream of tuples and a terminating sentinel value that defines the end of the grouping window based upon an attribute contained in the stream of tuples. The stream of tuples can be monitored for the sentinel value and the terminating sentinel value by a stream operator. The stream operator can initiate a windowing operation that defines the start of the grouping window in response to a presence of the sentinel value and terminate the windowing operation in response to a presence of the terminating sentinel value. | 03-19-2015 |
20150081708 | MANAGING A GROUPING WINDOW ON AN OPERATOR GRAPH - Embodiments of the disclosure provide a method, system, and computer program product for managing a windowing operation. The method can include determining a sentinel value that defines a start of a grouping window for a stream of tuples and a terminating sentinel value that defines the end of the grouping window based upon an attribute contained in the stream of tuples. The stream of tuples can be monitored for the sentinel value and the terminating sentinel value by a stream operator. The stream operator can initiate a windowing operation that defines the start of the grouping window in response to a presence of the sentinel value and terminate the windowing operation in response to a presence of the terminating sentinel value. | 03-19-2015 |
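The sentinel-delimited grouping window in the pair above—start a window when a start sentinel value appears in a monitored attribute and terminate it at the terminating sentinel—might be sketched as (attribute and sentinel values are placeholders):

```python
def group_by_sentinels(tuples, start, end, attr):
    """Collect tuples arriving between a start sentinel value and a
    terminating sentinel value observed in the given attribute."""
    windows, current, active = [], [], False
    for t in tuples:
        v = t[attr]
        if v == start:
            active, current = True, []   # sentinel: initiate windowing
        elif v == end and active:
            windows.append(current)       # terminating sentinel: close window
            active = False
        elif active:
            current.append(t)             # tuple falls inside the grouping window
    return windows
```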
20150081879 | MANAGING DATA PATHS IN AN OPERATOR GRAPH - Embodiments of the disclosure provide a method and system for processing data such as a stream of tuples. The method can include receiving the stream of tuples to be processed by a plurality of stream operators operating on one or more computer processors. The method can include creating an overflow path that includes at least one stream operator that performs processing duplicative to at least one stream operator from the plurality of stream operators. The method can include monitoring a stream operator for a triggering condition. The method can include identifying a tuple from the stream of tuples to process on the overflow path. The method can include processing, on the overflow path, the identified tuple from the stream of tuples in response to the presence of the triggering condition. | 03-19-2015 |
20150088887 | MANAGING MULTIPLE WINDOWS ON AN OPERATOR GRAPH - Embodiments of the disclosure provide a method, system, and computer program product for managing a windowing operation. The method for grouping processing of a stream of tuples with each tuple containing one or more attributes can include receiving the stream of tuples to be processed by a plurality of processing elements operating on one or more computer processors. The method can also include processing, with a first processing method, a group of tuples from the stream of tuples into a grouping window. The method can also include processing, with a second processing method, a subgroup of tuples from the group of tuples into a subgrouping window. The second processing method can include identifying a sub-membership condition. | 03-26-2015 |
20150088889 | MANAGING MULTIPLE WINDOWS ON AN OPERATOR GRAPH - Embodiments of the disclosure provide a method, system, and computer program product for managing a windowing operation. The method for grouping processing of a stream of tuples with each tuple containing one or more attributes can include receiving the stream of tuples to be processed by a plurality of processing elements operating on one or more computer processors. The method can also include processing, with a first processing method, a group of tuples from the stream of tuples into a grouping window. The method can also include processing, with a second processing method, a subgroup of tuples from the group of tuples into a subgrouping window. The second processing method can include identifying a sub-membership condition. | 03-26-2015 |