Patent application number | Description | Published |
20100161676 | LIFECYCLE MANAGEMENT AND CONSISTENCY CHECKING OF OBJECT MODELS USING APPLICATION PLATFORM TOOLS - A method includes a data structure comprising a status and action management schema associated with an object model development lifecycle. A status and action management service operates to determine a lifecycle state of a first object model based on the status and action management schema, receive a request to perform a lifecycle action on the first object model, determine, based on the lifecycle state and the status and action management schema, whether the lifecycle action is allowed to be performed, and, if so, allow the lifecycle action to be performed. | 06-24-2010 |
20100161682 | METADATA MODEL REPOSITORY - A system includes a data structure comprising a business object metadata model describing a generic business object model, executable program code of a transactional service to create a second data structure comprising a specific business object model based on the business object metadata model, and a persistent storage to store the second data structure comprising the specific business object model. Some aspects include creation of an electronic data structure comprising a business object metadata model describing a generic business object model, execution, using a processor, of program code of a transactional service to create a second electronic data structure comprising a specific business object model based on the business object metadata model, and storage of the second electronic data structure comprising the specific business object model in a persistent storage. | 06-24-2010 |
20110087708 | BUSINESS OBJECT BASED OPERATIONAL REPORTING AND ANALYSIS - Methods and systems are described that involve holistic and flexible operational reporting that does not require transformation of the underlying model or data harmonization since all business data and business logic of standard business processes are modeled and exposed in a standardized way using domain specific language and the operational reports are modeled with the same meta-model as the business data. A user can simply create a given operational report by selecting needed reporting elements of one or more business objects, run the report, and see the results. | 04-14-2011 |
20110179397 | SYSTEMS AND METHODS FOR METAMODEL TRANSFORMATION - Some aspects relate to systems and methods to receive a first metamodel conforming to a first meta-metamodel associated with first modeling unit types. A second metamodel conforming to a second meta-metamodel is generated based on the first metamodel and on a mapping between the first meta-metamodel and the second meta-metamodel, where the second meta-metamodel is associated with second modeling unit types, and where the first modeling unit types are different from the second modeling unit types. | 07-21-2011 |
20110295896 | SYSTEMS AND METHODS FOR EXECUTING A NAVIGATION QUERY - Systems and methods consistent with the invention may include receiving a navigation query including input text, determining, via a processor, whether the input text satisfies a predetermined criterion, generating a response including data representing a screen associated with the input text when the input text satisfies the predetermined criterion, selecting a language preference when the input text fails to satisfy the predetermined criterion, performing a fuzzy search based on the input text, the language preference, and usage history, and generating a response to the navigation query based on a result of the fuzzy search. | 12-01-2011 |
20120030256 | Common Modeling of Data Access and Provisioning for Search, Query, Reporting and/or Analytics - A system includes first metadata defining a business object object model, and second metadata defining a first object model to define a query on the business object object model. The first object model is an instance of a business object view metadata model, and the business object object model is an instance of a business object metadata model. | 02-02-2012 |
20120174013 | ADD AND COMBINE REPORTS - A system may include reception of a selection of a first report, the first report based on a first data source defining a first plurality of fields and defining a first at least one key figure, the first report including at least one of the first at least one key figures and at least one of the first plurality of fields, presentation of a first graphical representation of the first data source, the first graphical representation comprising a first graphical icon representing the first at least one key figure, and at least one second graphical icon, each of the at least one second graphical icons representing a respective one of the at least one of the first plurality of fields of the first report, reception of a selection of a second report, the second report based on a second data source defining a second plurality of fields and defining a second at least one key figure, the second report including at least one of the second at least one key figures and at least one of the second plurality of fields, presentation of a second graphical representation of the second data source, the second graphical representation graphically linked to the first graphical representation and comprising a third graphical icon representing the second at least one key figure and a plurality of fourth graphical icons, each of the plurality of fourth graphical icons representing a respective one of the second plurality of fields, reception of a selection of one of the plurality of fourth graphical icons representing one of the second plurality of fields, and generation of a third report comprising the at least one of the first plurality of fields and the one of the second plurality of fields. | 07-05-2012 |
20150081744 | METADATA MODEL REPOSITORY - A system includes a data structure comprising a business object metadata model describing a generic business object model, executable program code of a transactional service to create a second data structure comprising a specific business object model based on the business object metadata model, and a persistent storage to store the second data structure comprising the specific business object model. Some aspects include creation of an electronic data structure comprising a business object metadata model describing a generic business object model, execution, using a processor, of program code of a transactional service to create a second electronic data structure comprising a specific business object model based on the business object metadata model, and storage of the second electronic data structure comprising the specific business object model in a persistent storage. | 03-19-2015 |
20150149258 | ENTERPRISE PERFORMANCE MANAGEMENT PLANNING OPERATIONS AT AN ENTERPRISE DATABASE - According to some embodiments, input data may be received from a data source in an enterprise database in accordance with an enterprise performance management planning model, stored by a processor at the enterprise database. An operation may then be performed on the input data to produce a result. The result may then be stored in a data target, wherein the data target points to a data holding entity in an instantiation of a plan data container at the enterprise database. | 05-28-2015 |
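Several of the abstracts above (e.g., 20100161676) describe schema-driven lifecycle control: a status and action management schema determines, from an object model's current lifecycle state, whether a requested action may be performed. A minimal Python sketch of that general idea, in which the schema layout and all state and action names are hypothetical illustrations rather than anything taken from the filings:

```python
# Hypothetical status-and-action-management schema: each lifecycle
# state maps the actions it permits to the resulting state.
SCHEMA = {
    "draft": {"activate": "active", "delete": "deleted"},
    "active": {"deprecate": "deprecated"},
    "deprecated": {"delete": "deleted"},
    "deleted": {},
}

def perform_action(state, action, schema=SCHEMA):
    """Determine whether `action` is allowed in `state` per the schema;
    if so, return the new lifecycle state, otherwise raise."""
    allowed = schema.get(state, {})
    if action not in allowed:
        raise ValueError(f"action {action!r} not allowed in state {state!r}")
    return allowed[action]
```

A consistency check of the kind the first abstract describes reduces to consulting the schema before every transition, so invalid lifecycle actions are rejected centrally rather than by each tool.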
20130166495 | GENERATING A COMPILER INFRASTRUCTURE - In an embodiment, the compiler infrastructure allows execution of multidimensional analytical metadata from various databases by providing a generic transformation. A compilation request to execute multidimensional analytical metadata is received. A type of the compilation request is determined to identify an associated transformation and corresponding transformation rules. Based upon the type of compilation request, a database of an application server is queried to retrieve the corresponding multidimensional analytical metadata. Based upon the identified transformation rules, the multidimensional analytical metadata is transformed into generic metadata that is executable by any desired engine. An instance of a calculation scenario is generated based upon the transformation. The compiler infrastructure is generated by deploying the instance of the calculation scenario in the desired engine (e.g., an in-memory computing engine). | 06-27-2013 |
20130166496 | EXECUTING RUNTIME CALLBACK FUNCTIONS - In an embodiment, a runtime callback function is a part of a code that is invoked upon execution of an associated function. To execute the runtime callback function associated with an in-memory computing engine, multidimensional analytical metadata associated with an application server is received and transformed into an in-memory executable metadata, to generate an instance of an in-memory executable calculation scenario. The instance of the in-memory executable calculation scenario is analyzed to determine process callbacks associated with nodes of the in-memory executable calculation scenario. Based upon the determined process callbacks, the runtime callback function is executed by executing a selection callback at the nodes and a transformation callback at part providers associated with the in-memory executable calculation scenario. | 06-27-2013 |
20130166497 | DYNAMIC RECREATION OF MULTIDIMENSIONAL ANALYTICAL DATA - According to one aspect of systems and methods for dynamic recreation of multidimensional analytical data, lost sets of calculation scenarios that provide multidimensional analytical data results after aggregations and transformations of the multidimensional analytical data are recreated in the volatile storage of an in-memory computing engine. A multidimensional analytical data view (MDAV) compiler is triggered to read the MDAV metadata stored in an intermediate buffer in the MDAV compiler. The read MDAV metadata is compiled into a calculation scenario including calculation view metadata. The calculation view metadata is stored in the intermediate buffer. The recreated set of calculation scenarios is deployed on the in-memory computing engine. | 06-27-2013 |
20130166892 | GENERATING A RUNTIME FRAMEWORK - In an embodiment, the runtime framework is responsible for executing multidimensional analytical metadata in a runtime environment that is determined by the runtime framework. To generate such a runtime framework, the received multidimensional analytical metadata is analyzed to determine a type of an associated calculation pattern. Based upon the type, subsets of the multidimensional analytical metadata and corresponding runtime decision rules are determined. To execute the subsets, executable conditions corresponding to the multidimensional analytical metadata are identified. Based upon the executable conditions, the calculation pattern associated with the multidimensional analytical metadata is executed by executing the associated subsets, and the runtime framework is generated. The runtime framework determines calculation scenario executable subsets and calculation scenario inexecutable subsets that are associated with the multidimensional analytical metadata, and executes the subsets in their respective engines. | 06-27-2013 |
20130290292 | Augmented Query Optimization by Data Flow Graph Model Optimizer - A query is received, and in response, an initial data flow graph is generated that includes a plurality of nodes for executing the query with at least one of the nodes having at least one associated hint. The initial data flow graph is subsequently optimized using a model optimizer having a rules engine using a plurality of rules to optimize the initial data flow graph. The at least one associated hint is used by the model optimizer to change how at least one of the plurality of rules is applied. Thereafter, execution of the query is initiated using the optimized data flow graph. Related apparatus, systems, techniques and articles are also described. | 10-31-2013 |
20130290293 | Calculating Count Distinct Using Vertical Unions - A query statement is received that specifies a count distinct. Thereafter, a data flow graph that comprises a plurality of nodes for executing the query is generated. The nodes provide aggregation operations, sorting of results on join attributes and vertically appending columns of count distinct results with intermediate results from at least one of the aggregation operations. Thereafter, execution of the query is initiated using the data flow graph. Related apparatus, systems, techniques and articles are also described. | 10-31-2013 |
20130290297 | Rule-Based Extendable Query Optimizer - A query is received, in response to which an initial data flow graph that includes a plurality of nodes used to execute the query is generated. Thereafter, the initial data flow graph is optimized using a model optimizer that includes an optimizer framework and an application programming interface (API). The optimizer framework provides logic to restructure the initial data flow graph and a rules engine for executing one or more optimization rules. The API allows for registration of new optimization rules to be executed by the rules engine. Execution of the query is then initiated using the optimized data flow graph. Related apparatus, systems, techniques and articles are also described. | 10-31-2013 |
20130290298 | Data Flow Graph Optimization Using Adaptive Rule Chaining - A query is received and an initial data flow graph comprising a plurality of nodes is generated for executing the query. The initial data flow graph is optimized using a model optimizer that accesses at least one of a plurality of patterns to identify a matching pattern and executes at least one optimization rule associated with a matching pattern. Execution of the query is then initiated using the optimized data flow graph. Related apparatus, systems, techniques and articles are also described. | 10-31-2013 |
20130290354 | Calculation Models Using Annotations For Filter Optimization - A query statement is received that requires at least one calculated attribute. Thereafter, a data flow graph is generated that includes a plurality of nodes for executing the query. At least one of the nodes corresponds to the at least one calculated attribute and has at least one level of child nodes. The data flow graph is generated by generating at least one filter for each of the nodes corresponding to the at least one calculated attribute and by pushing down the generated filters to a corresponding child node. Once the data flow graph is generated, execution of the query can be initiated using the generated data flow graph. Related apparatus, systems, techniques and articles are also described. | 10-31-2013 |
20130325874 | Columnwise Storage of Point Data - A database query of point data among two or more axes of a database is received. The database stores point data in distinct integer vectors with a shared dictionary. Thereafter, the dictionary is scanned to determine boundaries for each axis specified by the query. In response, results characterizing data responsive to the query within the determined boundaries for each axis are returned. Related apparatus, systems, techniques and articles are also described. | 12-05-2013 |
20130339082 | CONTEXTUAL INFORMATION RETRIEVAL FOR GROUPWARE INTEGRATION - A groupware application may be modified to include additional functionality enabling data from the groupware application to be exchanged with customer account data in a customer relationship management (CRM) system. After selecting a message or meeting object, a third party email address included in the object may be identified and sent to the CRM system. Account information relating to an account in the CRM system associated with the email address may be retrieved and sent to the groupware application. This additional account information may include marketing leads and/or opportunities, which may be displayed in the groupware application. The user may select a lead and/or an opportunity to associate the user selected object with the user selected lead and/or opportunity. This information may be sent to CRM system. Other information relating to the user selected object may also be sent to the CRM system. | 12-19-2013 |
20130346392 | Columnwise Range K-Nearest Neighbors Search Queries - A range k-nearest neighbor search query of a database is processed by first defining an inner rectangle bounded within a circle around a center point specified by the range k-nearest neighbor search query. Thereafter, a distance to the center point is calculated for each point within the inner rectangle. Query results are returned if k or more points are within the inner rectangle. Otherwise, at least one additional query is executed. Related apparatus, systems, techniques and articles are also described. | 12-26-2013 |
20130346418 | Columnwise Spatial Aggregation - A spatial aggregation query of a database is processed by receiving data specifying a maximum bounded rectangle for point data responsive to the query and specifying one or more grid partitions of the maximum bounded rectangle (in which at least one of the partitions is partially aggregated). Thereafter, for each partition, a number of points responsive to the query in each partition and a center of gravity of the points in each partition is computed. Data characterizing the corresponding computed number of points and center of gravity is then provided (e.g., persisted, loaded, transmitted, displayed, etc.). Related apparatus, systems, techniques and articles are also described. | 12-26-2013 |
20140222828 | Columnwise Storage of Point Data - A database query of point data among two or more axes of a database is received. The database stores point data in distinct integer vectors with a shared dictionary. Thereafter, the dictionary is scanned to determine boundaries for each axis specified by the query. In response, results characterizing data responsive to the query within the determined boundaries for each axis are returned. Related apparatus, systems, techniques and articles are also described. | 08-07-2014 |
20140330807 | Rule-Based Extendable Query Optimizer - A query is received, in response to which an initial data flow graph that includes a plurality of nodes used to execute the query is generated. Thereafter, the initial data flow graph is optimized using a model optimizer that includes an optimizer framework and an application programming interface (API). The optimizer framework provides logic to restructure the initial data flow graph and a rules engine for executing one or more optimization rules. The API allows for registration of new optimization rules to be executed by the rules engine. Execution of the query is then initiated using the optimized data flow graph. Related apparatus, systems, techniques and articles are also described. | 11-06-2014 |
20140372409 | Data Flow Graph Optimization Using Adaptive Rule Chaining - A query is received and an initial data flow graph comprising a plurality of nodes is generated for executing the query. The initial data flow graph is optimized using a model optimizer that accesses at least one of a plurality of patterns to identify a matching pattern and executes at least one optimization rule associated with a matching pattern. Execution of the query is then initiated using the optimized data flow graph. Related apparatus, systems, techniques and articles are also described. | 12-18-2014 |
20150046411 | Managing and Querying Spatial Point Data in Column Stores - A query of spatial data is received by a database comprising a columnar data store storing data in a column-oriented structure. Thereafter, a spatial data set is mapped to physical storage in the database using a space-filling curve. The spatial data set is then compacted and such compacted data can be used to retrieve data from the database that is responsive to the query. Related apparatus, systems, techniques and articles are also described. | 02-12-2015 |
20150265876 | Processing of Geo-Spatial Athletics Sensor Data - Correlated and processed data is received that is derived from a plurality of geo-spatial sensors that respectively generate data characterizing a plurality of sources within a zone of interest. The data includes a series of time-stamped frames for each of the sensors. Subsequently, events of interest are identified, in real-time, based on relative positions of the sources within the zone of interest prior to the data being written to a data storage application. Data can then be provided (e.g., loaded, stored, displayed, transmitted, etc.), in real-time, that characterize the events of interest. Related apparatus, systems, techniques and articles are also described. | 09-24-2015 |
20150268929 | Pre-Processing Of Geo-Spatial Sensor Data - Data is received that is derived from a plurality of geo-spatial sensors that respectively generate data characterizing a plurality of sources within a zone of interest. The data includes a series of time-stamped frames for each of the sensors, and at least one of the sources has two or more associated sensors. The received data can be sorted and processed, for each sensor on a sensor-by-sensor basis, using a sliding window. The sorted and processed data can then be correlated and written into a data storage application. Related apparatus, systems, techniques and articles are also described. | 09-24-2015 |
20150324373 | Querying Spatial Data in Column Stores Using Grid-Order Scans - A query of spatial data is received by a database comprising a columnar data store storing data in a column-oriented structure. Thereafter, a minimal bounding rectangle associated with the query is identified using a grid order scanning technique. The spatial data set corresponding to the received query is then mapped to physical storage in the database using the identified minimal bounding rectangle so that the spatial data set can be retrieved. Related apparatus, systems, techniques and articles are also described. | 11-12-2015 |
20150324399 | Querying Spatial Data in Column Stores Using Tree-Order Scans - A query of spatial data is received by a database comprising a columnar data store storing data in a column-oriented structure. Thereafter, a minimal bounding rectangle associated with the query is identified using a tree-order scanning technique. A spatial data set that corresponds to the received query is then mapped to the physical storage in the database using the identified minimal bounding rectangle. Next, the spatial data set is then retrieved. Related apparatus, systems, techniques and articles are also described. | 11-12-2015 |
20160004735 | Column Store Optimization Using Telescope Columns - A data set of spatial data having a plurality of dimensions and including linestrings can be processed by decomposing each linestring of the plurality of linestrings into a plurality of line segments. Each coordinate dimension that appears in at least one line segment of the plurality of line segments can be listed in one of a plurality of dimensional dictionaries that each correspond to a dimension of the plurality of dimensions. A linestring of the plurality of linestrings can be represented as a set of the line segments using the plurality of dimensional dictionaries. | 01-07-2016 |
20160004739 | Column Store Optimization Using Simplex Store - Using index clusters to approximate coordinate values for vertices of compressed simplexes of a spatial data set, valid subspaces can be identified and used to identify other simplexes that may intersect a first simplex. Such approaches can be used for filtering and refining analyses of intersections between areas, lines, volumes, and other features within spatial data sets. | 01-07-2016 |
20160004762 | Hilbert Curve Partitioning for Parallelization of DBSCAN - DBSCAN clustering analyses can be improved by pre-processing of a data set using a Hilbert curve to intelligently identify the centers for initial partitional analysis by a partitional clustering algorithm such as CLARANS. Partitions output by the partitional clustering algorithm can be processed by DBSCAN running in parallel before intermediate cluster results are merged. | 01-07-2016 |
20160004765 | Predictive Cluster Analytics Optimization - Cluster analysis of data points in a data set can be optimized by identification of a preferred cluster analysis method. This identification can be based on indexing the data using a Hilbert curve and determining whether the data points are predominantly in spherical or non-spherical clusters. Methods, systems, and articles of manufacture are described. | 01-07-2016 |
20160005141 | Polygon Simplification - Polygons can be simplified from an original, higher resolution to a simplified, lower resolution such that the simplified versions of the polygons do not introduce errors and also do not render boundaries shared with other polygons invalid. | 01-07-2016 |
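Several of the spatial-query abstracts above rely on cheap geometric pre-filters, for example 20130346392's inner rectangle inscribed in the query circle: any point inside the rectangle is guaranteed to be within range, so a full distance check is only needed when the rectangle alone does not yield k candidates. A minimal Python sketch of that idea, assuming 2-D points held in an in-memory list rather than a column store (the function name and data layout are illustrative, not from the filing):

```python
import math

def range_knn(points, center, radius, k):
    """Range k-nearest-neighbor sketch: try an inner square first,
    fall back to a full circle scan only if it holds fewer than k points."""
    cx, cy = center
    # Half-side of the largest square inscribed in the circle: every
    # point inside this square is certainly within `radius` of center.
    half = radius / math.sqrt(2.0)
    inner = [(x, y) for (x, y) in points
             if abs(x - cx) <= half and abs(y - cy) <= half]
    if len(inner) >= k:
        # Enough guaranteed-in-range candidates: rank them by distance.
        inner.sort(key=lambda p: math.hypot(p[0] - cx, p[1] - cy))
        return inner[:k]
    # Otherwise an additional, wider query over the full circle is needed.
    in_circle = [(x, y) for (x, y) in points
                 if math.hypot(x - cx, y - cy) <= radius]
    in_circle.sort(key=lambda p: math.hypot(p[0] - cx, p[1] - cy))
    return in_circle[:k]
```

The design trade-off the abstract hints at: the rectangle test needs only per-axis comparisons (well suited to columnwise scans), while the exact circle test requires combining both coordinate columns per point.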