Patent application title: APPARATUS AND METHOD FOR PROVIDING GOAL PREDICTIVE INTERFACE

Inventors:  Yeo Jin Kim (Suwon-Si, KR)
Assignees:  SAMSUNG ELECTRONICS CO., LTD.
IPC8 Class: G06N 5/02
USPC Class: 707/802
Class name: Data processing: database and file management or data structures; database design; database and data structure management
Publication date: 2010-12-16
Patent application number: 20100318576



Abstract:

A predictive goal interface providing apparatus and a method thereof are provided. The predictive goal interface providing apparatus may recognize a current user context by analyzing data sensed from a user environment condition, may analyze user input data received from the user, may analyze a predictive goal based on the recognized current user context, and may provide a predictive goal interface based on the analyzed predictive goal.

Claims:

1. An apparatus for providing a predictive goal interface, the apparatus comprising: a context recognizing unit configured to analyze data sensed from one or more user environment conditions, to analyze user input data received from a user, and to recognize a current user context; a goal predicting unit configured to analyze a predictive goal based on the recognized current user context, to predict a predictive goal of the user, and to provide the predictive goal; and an output unit configured to provide a predictive goal interface and to output the predictive goal.

2. The apparatus of claim 1, further comprising: an interface database configured to store and maintain interface data for constructing the predictive goal, wherein the goal predicting unit is further configured to analyze the sensed data and the user input data, and to analyze one or more predictive goals that are retrievable from the stored interface data.

3. The apparatus of claim 1, further comprising: a user model database configured to store and maintain user model data comprising profile information of the user, preference information of the user, and user pattern information, wherein the goal predicting unit is further configured to analyze the predictive goal by analyzing at least one of the profile information, the preference information, and the user pattern information.

4. The apparatus of claim 3, wherein the goal predicting unit is further configured to update the user model data based on feedback information of the user, with respect to the analyzed predictive goal.

5. The apparatus of claim 1, wherein: the goal predicting unit is further configured to provide the predictive goal when a confidence level of the predictive goal is greater than or equal to a threshold, the confidence level being based on the recognized current user context; and the output unit is further configured to output the predictive goal interface comprising the predictive goal provided by the goal predicting unit.

6. The apparatus of claim 1, wherein: the goal predicting unit is further configured to predict a menu which the user intends to select in a hierarchical menu structure, based on the recognized current user context; and the predictive goal interface comprises a hierarchical menu interface to provide a predictive goal list.

7. The apparatus of claim 1, wherein: the goal predicting unit is further configured to predict the predictive goal comprising a result of a combination of commands capable of being combined, based on the recognized current user context; and the predictive goal interface comprises a result interface to provide the result of the combination of commands.

8. The apparatus of claim 1, wherein the sensed data comprises hardware data collected through at least one of a location identification sensor, a proximity identification sensor, a radio frequency identification (RFID) tag sensor, a motion sensor, a sound sensor, a vision sensor, a touch sensor, a temperature sensor, a humidity sensor, a light sensor, a pressure sensor, a gravity sensor, an acceleration sensor, and a bio-sensor.

9. The apparatus of claim 1, wherein the sensed data comprises software data collected through at least one of an electronic calendar application, a scheduler application, an e-mail management application, a message management application, a communication application, a social network application, and a web site management application.

10. The apparatus of claim 1, wherein the user input data is data received through at least one of a text input means, a graphic user interface (GUI), and a touch screen.

11. The apparatus of claim 1, wherein the user input data is data received through an input means for at least one of voice recognition, facial expression recognition, emotion recognition, gesture recognition, motion recognition, posture recognition, and multimodal recognition.

12. The apparatus of claim 1, further comprising: a knowledge model database configured to store and maintain a knowledge model with respect to at least one domain knowledge; and an intent model database configured to store and maintain an intent model that contains user intents to use the interface.

13. The apparatus of claim 12, wherein the user intents are recognizable from the user context using at least one of search, logical inference, and pattern recognition.

14. The apparatus of claim 13, wherein the goal predicting unit is further configured to predict the user goal using the knowledge model or the intent model, based on the recognized current user context.

15. A method of providing a predictive goal interface, the method comprising: recognizing a current user context by analyzing data sensed from a user environment condition and analyzing user input data received from the user; analyzing a predictive goal based on the recognized current user context; and providing a predictive goal interface comprising the analyzed predictive goal.

16. The method of claim 15, wherein the analyzing of the predictive goal comprises analyzing the sensed data and the user input data, and analyzing the predictive goal that is retrievable from interface data stored in an interface database.

17. The method of claim 15, wherein the analyzing of the predictive goal comprises analyzing at least one of profile information of the user, preference information of the user, and user pattern information, which are stored in a user model database.

18. The method of claim 15, wherein the providing of the predictive goal interface comprises providing the predictive goal when a confidence level of the predictive goal is greater than or equal to a threshold, the confidence level being based on the recognized current user context, and the method further comprises outputting the predictive goal interface comprising the provided predictive goal.

19. A non-transitory computer readable storage medium storing a program to implement a method of providing a predictive goal interface, comprising instructions to cause a computer to: recognize a current user context by analyzing data sensed from a user environment condition and analyzing user input data received from the user; analyze a predictive goal based on the recognized current user context; and provide a predictive goal interface comprising the analyzed predictive goal.

Description:

CROSS-REFERENCE TO RELATED APPLICATION

[0001]This application claims the benefit under 35 U.S.C. §119(a) of a Korean Patent Application No. 10-2009-0051675, filed on Jun. 10, 2009, in the Korean Intellectual Property Office, the entire disclosure of which is incorporated herein by reference for all purposes.

BACKGROUND

[0002]1. Field

[0003]The following description relates to an apparatus and a method of providing a predictive goal interface, and more particularly, to an apparatus and a method of predicting a goal desired by a user and providing a predictive goal interface.

[0004]2. Description of Related Art

[0005]As information communication technologies have developed, there has been a trend toward merging various functions into a single device. As functions are added to a device, the number of buttons on the device increases, the structure of the user interface becomes more complex due to a deeper menu hierarchy, and the time expended searching through the hierarchical menu to reach a final goal or desired menu choice increases.

[0006]Generally, user interfaces are static, that is, they are designed ahead of time and added to a device before reaching the end user. Thus, designers typically must anticipate, in advance, the needs of the interface user. If it is desired to add a new interface element to the device, significant redesign must take place in either software, hardware, or a combination thereof, to implement the reconfigured interface or the new interface.

[0007]In addition, it is difficult to predict the result that occurs from a combination of selections among commands for various functions. Accordingly, even when the user takes a wrong route, it cannot be determined that the user will fail to reach the final goal until the user arrives at an end node.

SUMMARY

[0008]In one general aspect, there is provided an apparatus for providing a predictive goal interface, the apparatus including a context recognizing unit to analyze data sensed from one or more user environment conditions, to analyze user input data received from a user, and to recognize a current user context, a goal predicting unit to analyze a predictive goal based on the recognized current user context, to predict a predictive goal of the user, and to provide the predictive goal, and an output unit to provide a predictive goal interface and to output the predictive goal.

[0009]The apparatus may further include an interface database to store and maintain interface data for constructing the predictive goal, wherein the goal predicting unit analyzes the sensed data and the user input data, and analyzes one or more predictive goals that are retrievable from the stored interface data.

[0010]The apparatus may further include a user model database to store and maintain user model data including profile information of the user, preference information of the user, and user pattern information, wherein the goal predicting unit analyzes the predictive goal by analyzing at least one of the profile information, the preference information, and the user pattern information.

[0011]The goal predicting unit may update the user model data based on feedback information of the user, with respect to the analyzed predictive goal.

[0012]The goal predicting unit may provide the predictive goal when a confidence level of the predictive goal is greater than or equal to a threshold, the confidence level being based on the recognized current user context, and the output unit may output the predictive goal interface including the predictive goal provided by the goal predicting unit.

[0013]The goal predicting unit may predict a menu which the user intends to select in a hierarchical menu structure, based on the recognized current user context, and the predictive goal interface may include a hierarchical menu interface to provide a predictive goal list.

[0014]The goal predicting unit may predict the predictive goal including a result of a combination of commands capable of being combined, based on the recognized current user context, and the predictive goal interface may include a result interface corresponding to the result of the combination of commands.

[0015]The sensed data may include hardware data collected through at least one of a location identification sensor, a proximity identification sensor, a radio frequency identification (RFID) tag sensor, a motion sensor, a sound sensor, a vision sensor, a touch sensor, a temperature sensor, a humidity sensor, a light sensor, a pressure sensor, a gravity sensor, an acceleration sensor, and a bio-sensor.

[0016]The sensed data may include software data collected through at least one of an electronic calendar application, a scheduler application, an e-mail management application, a message management application, a communication application, a social network application, and a web site management application.

[0017]The user input data may be data received through at least one of a text input means, a graphic user interface (GUI), and a touch screen.

[0018]The user input data may be data received through an input means for at least one of voice recognition, facial expression recognition, emotion recognition, gesture recognition, motion recognition, posture recognition, and multimodal recognition.

[0019]The apparatus may further include a knowledge model database to store and maintain a knowledge model with respect to at least one domain knowledge, and an intent model database to store and maintain an intent model that contains user intents to use the interface.

[0020]The user intents may be recognizable from the user context using at least one of search, logical inference, and pattern recognition.

[0021]The goal predicting unit may predict the user goal using the knowledge model or the intent model, based on the recognized current user context.

[0022]In another aspect, provided is a method of providing a predictive goal interface, the method including recognizing a current user context by analyzing data sensed from a user environment condition and analyzing user input data received from the user, analyzing a predictive goal based on the recognized current user context, and providing a predictive goal interface including the analyzed predictive goal.

[0023]The analyzing of the predictive goal may include analyzing the sensed data and the user input data, and analyzing the predictive goal that is retrievable from interface data stored in an interface database.

[0024]The analyzing of the predictive goal may include analyzing at least one of profile information of the user, preference information of the user, and user pattern information, which are stored in a user model database.

[0025]The providing of the predictive goal interface may include providing the predictive goal when a confidence level of the predictive goal is greater than or equal to a threshold, the confidence level being based on the recognized current user context, and the method may further include outputting the predictive goal interface including the provided predictive goal.

[0026]In another aspect, provided is a computer readable storage medium storing a program to implement a method of providing a predictive goal interface, including instructions to cause a computer to recognize a current user context by analyzing data sensed from a user environment condition and analyzing user input data received from the user, analyze a predictive goal based on the recognized current user context, and provide a predictive goal interface including the analyzed predictive goal.

[0027]Other features and aspects will be apparent from the following detailed description, the drawings, and the claims.

BRIEF DESCRIPTION OF THE DRAWINGS

[0028]FIG. 1 is a diagram illustrating an example predictive goal interface providing apparatus.

[0029]FIG. 2 is a diagram illustrating an example process of providing a predictive goal interface through a predictive goal interface providing apparatus.

[0030]FIG. 3 is a diagram illustrating another example process of providing a predictive goal interface through a predictive goal interface providing apparatus.

[0031]FIG. 4 is a diagram illustrating another example process of providing a predictive goal interface through a predictive goal interface providing apparatus.

[0032]FIG. 5 is a flowchart illustrating an example method of providing a predictive goal interface.

[0033]Throughout the drawings and the detailed description, unless otherwise described, the same drawing reference numerals will be understood to refer to the same elements, features, and structures. The relative size and depiction of these elements may be exaggerated for clarity, illustration, and convenience.

DETAILED DESCRIPTION

[0034]The following detailed description is provided to assist the reader in gaining a comprehensive understanding of the methods, apparatuses, and/or systems described herein. Accordingly, various changes, modifications, and equivalents of the methods, apparatuses, and/or systems described herein will be suggested to those of ordinary skill in the art. Also, descriptions of well-known functions and constructions may be omitted for increased clarity and conciseness.

[0035]FIG. 1 illustrates an example predictive goal interface providing apparatus 100.

[0036]Referring to FIG. 1, the predictive goal interface providing apparatus 100 includes a context recognizing unit 110, a goal predicting unit 120, and an output unit 130.

[0037]The context recognizing unit 110 recognizes a current user context by analyzing data sensed from a user environment condition and/or analyzing user input data received from a user.

[0038]The sensed data may include hardware data collected through at least one of a location identification sensor, a proximity identification sensor, a radio frequency identification (RFID) tag identification sensor, a motion sensor, a sound sensor, a vision sensor, a touch sensor, a temperature sensor, a humidity sensor, a light sensor, a pressure sensor, a gravity sensor, an acceleration sensor, a bio-sensor, and the like. As described, the sensed data may be data collected from a physical environment.

[0039]The sensed data may also include software data collected through at least one of an electronic calendar application, a scheduler application, an e-mail management application, a message management application, a communication application, a social network application, a web site management application, and the like.

[0040]The user input data may be data received through at least one of a text input means, a graphic user interface (GUI), a touch screen, and the like. The user input data may be received through an input means for voice recognition, facial expression recognition, emotion recognition, gesture recognition, motion recognition, posture recognition, multimodal recognition, and the like.
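As a non-limiting illustration of how such a context recognizing unit might combine these inputs, the following minimal Python sketch merges sensed hardware data, sensed software data, and user input data into a single context. All names (ContextRecognizer, SensedData, the example keys) are hypothetical and are not defined by this application.

```python
# Minimal sketch of a context recognizing unit; names and keys are invented.
from dataclasses import dataclass, field


@dataclass
class SensedData:
    hardware: dict = field(default_factory=dict)  # e.g. location, motion sensors
    software: dict = field(default_factory=dict)  # e.g. calendar, e-mail data


class ContextRecognizer:
    """Recognizes a current user context from sensed data and user input."""

    def recognize(self, sensed: SensedData, user_input: dict) -> dict:
        context = {}
        context.update(sensed.hardware)  # physical environment conditions
        context.update(sensed.software)  # application-level signals
        context.update(user_input)       # text, GUI, or touch-screen input
        return context


recognizer = ContextRecognizer()
ctx = recognizer.recognize(
    SensedData(hardware={"location": "home"}, software={"next_event": "meeting"}),
    user_input={"last_menu": "display"},
)
print(ctx)
```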

[0041]The goal predicting unit 120 analyzes a predictive goal based on the recognized current user context. For example, the goal predicting unit 120 may analyze the sensed data and/or the user input data and predict a goal.

[0042]For example, the goal predicting unit 120 may predict the menu which the user intends to select in a hierarchical menu structure, based on the recognized current user context. The predictive goal interface may include a hierarchical menu interface with respect to the predictive goal list.

[0043]Also, the goal predicting unit 120 may analyze a predictive goal including a result of a combination of commands capable of being combined, based on the recognized current user context. The predictive goal interface may include a result interface corresponding to the result of the combination of commands.

[0044]The output unit 130 provides the predictive goal interface, based on the analyzed predictive goal.

[0045]The goal predicting unit 120 may output the predictive goal. For example, the goal predicting unit 120 may output the goal when a confidence level of the predictive goal is greater than or equal to a threshold level. The output unit 130 may provide the predictive goal interface corresponding to the outputted predictive goal. For example, the output unit may display the predictive goal interface to the user.
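A minimal sketch of this confidence gating follows; the scoring functions and the 0.7 threshold are illustrative assumptions, not values from the application.

```python
# Minimal sketch of confidence-gated goal prediction; scores are invented.
def predict_goal(context: dict, candidate_goals: dict, threshold: float = 0.7):
    """Return the best-scoring goal only if its confidence meets the threshold.

    candidate_goals maps a goal name to a scoring function over the context.
    """
    scored = {goal: score(context) for goal, score in candidate_goals.items()}
    best_goal = max(scored, key=scored.get)
    if scored[best_goal] >= threshold:
        return best_goal, scored[best_goal]
    return None, scored[best_goal]  # confidence too low: output nothing


candidates = {
    # Hypothetical scores: a just-taken picture strongly suggests this goal.
    "change background to picture 1": lambda c: 0.9 if c.get("just_took_picture") else 0.1,
    "change font": lambda c: 0.3,
}
goal, conf = predict_goal({"just_took_picture": True}, candidates)
print(goal, conf)  # -> change background to picture 1 0.9
```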

[0046]The predictive goal interface providing apparatus 100 may include an interface database 150 and/or a user model database 160.

[0047]The interface database 150 may store and maintain interface data for constructing the predictive goal and the predictive goal interface. For example, the interface database 150 may include one or more predictive goals that may be retrieved by the goal predicting unit 120 and compared to the sensed data and/or the user input data. The user model database 160 may store and maintain user model data including profile information of the user, preference information of the user, and/or user pattern information. The sensed data and/or the user input data may be compared to the data stored in the interface database 150 to determine a predictive goal of a user.

[0048]The interface data may be data with respect to contents or a menu that is an objective goal of the user, and the user model is a model used to provide a predictive goal result individualized for the user. The user model may include data recorded by constructing the user's individual information in advance, or data extracted from data accumulated while the user uses a corresponding device.

[0049]In some embodiments, the interface database 150 and/or the user model database 160 may not be included in the predictive goal interface providing apparatus 100. In some embodiments, the interface database 150 and/or the user model database 160 may be included in a system existing externally from the predictive goal interface providing apparatus 100.

[0050]Also, the goal predicting unit 120 may analyze the sensed data and/or the user input data, and may analyze a predictive goal that is retrievable from the interface data stored in the interface database 150. The goal predicting unit 120 may analyze at least one of the profile information, the preference information, and/or the user pattern information included in the user model data stored in the user model database 160. The goal predicting unit 120 may update the user model data based on feedback information of the user with respect to the analyzed predictive goal.
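One possible reading of the feedback update is sketched below, using a simple counting rule as a stand-in for whatever update the goal predicting unit actually applies; the class and method names are hypothetical.

```python
# Minimal sketch of updating user model data from feedback on predicted goals.
from collections import Counter


class UserModelDatabase:
    """Stores user pattern information as accumulated feedback counts."""

    def __init__(self):
        self.goal_counts = Counter()

    def record_feedback(self, predicted_goal: str, accepted: bool):
        # Reinforce goals the user accepts; discount rejected ones.
        self.goal_counts[predicted_goal] += 1 if accepted else -1

    def preference_weight(self, goal: str) -> float:
        # Turn accumulated feedback into a weight for future predictions.
        total = sum(max(c, 0) for c in self.goal_counts.values()) or 1
        return max(self.goal_counts[goal], 0) / total


db = UserModelDatabase()
db.record_feedback("change background", accepted=True)
db.record_feedback("change font", accepted=False)
print(db.preference_weight("change background"))  # -> 1.0
```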

[0051]The predictive goal interface providing apparatus 100 may include a knowledge database 170 and/or an intent model database 180.

[0052]The knowledge database 170 may store and maintain a knowledge model with respect to at least one domain knowledge, and the intent model database 180 may store and maintain an intent model containing the user's intentions to use the interface. The intentions may be recognizable from the user context using at least one of, for example, search, logical inference, pattern recognition, and the like.

[0053]The goal predicting unit 120 may analyze the predictive goal through the knowledge model or the intent model, based on the recognized current user context.
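Of the three recognition techniques named above, pattern recognition is the simplest to sketch. The rule table below is invented for illustration; a real intent model would be far richer.

```python
# Minimal sketch of recognizing user intent from context via pattern matching.
INTENT_RULES = [
    # (required context key/value pairs, recognized intent) - invented examples
    ({"just_took_picture": True, "last_menu": "display"}, "personalize display"),
    ({"next_event": "meeting"}, "prepare for meeting"),
]


def recognize_intent(context: dict) -> str | None:
    for pattern, intent in INTENT_RULES:
        # A rule matches when every required key/value appears in the context.
        if all(context.get(k) == v for k, v in pattern.items()):
            return intent
    return None


print(recognize_intent({"just_took_picture": True, "last_menu": "display"}))
```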

[0054]FIG. 2 illustrates an exemplary process of providing a predictive goal interface through a predictive goal interface providing apparatus.

[0055]In the conventional art, if a user intends to change a background image of a portable terminal device into a picture just taken (for example, picture 1), the user may change the background image through the process of selecting menu option → display option → background image in standby mode option → selecting a picture (picture 1), based on a conventional menu providing scheme.

[0056]According to an exemplary embodiment, the predictive goal interface providing apparatus 100 may analyze a predictive goal based on a recognized current user context or intent of the user, and the predictive goal interface providing apparatus 100 may provide the predictive goal interface based on the analyzed predictive goal.

[0057]For example, the predictive goal interface providing apparatus 100 may analyze the predictive goal including a predictive goal list with respect to a hierarchical menu structure, based on the recognized current user context, and may provide the predictive goal interface based on the analyzed predictive goal.

[0058]As illustrated in FIG. 2, the predictive goal interface may include a hierarchical menu interface with respect to the predictive goal list.

[0059]The predictive goal interface providing apparatus 100 may recognize the current user context from data sensed from a user environment condition in which the user takes a picture, and from user input data, for example, a menu selection sequence such as menu → display → etc. input by the user.

[0060]For example, based upon the sensed data and/or the user input data, the predictive goal interface providing apparatus 100 may analyze a goal, G1, to change the background image into picture 1. The predictive goal interface providing apparatus 100 may analyze a predictive goal, G2, to change a font in the background image. The predictive goal interface providing apparatus 100 may provide the predictive goal interface including a predictive goal list for changing the background image in the standby mode into picture 1 and/or changing the font in the background image.

[0061]According to example embodiments, as the user selects menus in the hierarchical menu, the predictive goal interface providing apparatus 100 may provide the user with a list of goals predicted to be the user's goal.

[0062]Also, the predictive goal interface providing apparatus 100 may predict and provide a probable goal of the user at a current point in time, thereby shortening a hierarchical selection process of the user.
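The following sketch illustrates one way the hierarchical prediction of FIG. 2 could be modeled: a menu tree whose remaining leaves, given the selections made so far, become the predictive goal list. The tree contents are invented examples, not the actual menu structure of any device.

```python
# Minimal sketch of predicting leaf goals in a hierarchical menu structure.
MENU_TREE = {
    "menu": {
        "display": {
            "background image in standby mode": {},  # leaf goal G1
            "font": {},                              # leaf goal G2
        },
        "sound": {"ringtone": {}},
    }
}


def leaf_goals(tree: dict, path=()):
    """Enumerate all leaf menu entries reachable from the current node."""
    for name, subtree in tree.items():
        if subtree:
            yield from leaf_goals(subtree, path + (name,))
        else:
            yield path + (name,)


def predict_menu_goals(selected_path: list):
    # Walk down the tree along the user's selections so far...
    node = MENU_TREE
    for step in selected_path:
        node = node[step]
    # ...then offer every remaining leaf as a candidate predictive goal.
    return list(leaf_goals(node, tuple(selected_path)))


# The user has selected menu -> display; both leaves below it are predicted.
print(predict_menu_goals(["menu", "display"]))
```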

[0063]FIG. 3 illustrates another exemplary process of providing a predictive goal interface through a predictive goal interface providing apparatus.

[0064]The predictive goal interface providing apparatus 100, according to an exemplary embodiment, may be applicable when various results are derived according to a dynamic combination of selections.

[0065]The predictive goal interface providing apparatus 100 may analyze a probable predictive goal from a recognized current user context or user intent, and the predictive goal interface providing apparatus 100 may provide the predictive goal interface based on the analyzed predictive goal.

[0066]Also, depending on embodiments, the predictive goal interface providing apparatus 100 may analyze a predictive goal including a result of a combination of commands capable of being combined based on the recognized current user context. In this case, the predictive goal interface may include a result interface corresponding to the combination result.

[0067]The predictive goal interface apparatus of FIG. 3 may be applicable to an apparatus, for example, a robot where various combination results are generated according to a combination of commands selected by the user. As described for exemplary purposes, FIG. 3 provides an example of the predictive goal interface apparatus that is implemented with a robot. However, the predictive goal interface apparatus is not limited to a robot, and may be used for any desired purpose.

[0068]Referring to FIG. 3, a user may desire to rotate a leg of a robot to move an object behind the robot. The recognized current user context, in which the robot is sitting down, is context 1. The predictive goal interface providing apparatus 100 may analyze predictive goals, for example, `bend leg`, `bend arm`, and `rotate arm`, that are results of combinations of commands capable of being combined based on context 1. The predictive goal interface providing apparatus 100 may provide a predictive goal interface including a result interface (1. bend leg and 2. bend arm/rotate arm) corresponding to the combination result.

[0069]From the predictive goal interface provided through the predictive goal interface providing apparatus 100 based on context 1, the user may recognize that `rotate leg` is not available. The user may change context 1 into context 2. The predictive goal interface providing apparatus 100 may analyze predictive goals, for example, `bend leg`, `rotate leg`, `walk`, `bend arm`, and `rotate arm`, that are results of combinations of commands capable of being combined based on context 2. The predictive goal interface providing apparatus 100 may provide a predictive goal interface including a result interface corresponding to the combination result (1. bend leg/rotate leg/walk and 2. bend arm/rotate arm).

[0070]The user may select the `leg` of the robot as the part to be operated, for example, as illustrated in context 3. The predictive goal interface providing apparatus 100 may analyze predictive goals, for example, `bend leg`, `rotate leg`, and `walk`, which are results of combinations of commands capable of being combined based on context 3. The predictive goal interface providing apparatus 100 may provide a predictive goal interface including a result interface corresponding to the combination result (1. bend leg/rotate leg/walk).

[0071]The predictive goal interface providing apparatus 100 may predict the result of a series of selections made by the user and may provide the predicted results. Accordingly, the predictive goal interface providing apparatus 100 may provide the predicted result in advance at the current point in time, thereby acting as a guide. By recognizing the current context and/or the user intent, the predictive goal interface providing apparatus 100 may display a narrowed range of predictive goals and enable the user to make a selection.
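A minimal sketch of the context-dependent command filtering of FIG. 3 follows; the availability tables below are assumptions reconstructed from the contexts described above, not data from the application.

```python
# Minimal sketch of filtering combinable commands by robot context and part.
PART_COMMANDS = {
    "leg": {"bend leg", "rotate leg", "walk"},
    "arm": {"bend arm", "rotate arm"},
}
CONTEXT_AVAILABLE = {
    # context name -> commands that can validly be combined in that context
    "sitting": {"bend leg", "bend arm", "rotate arm"},
    "standing": {"bend leg", "rotate leg", "walk", "bend arm", "rotate arm"},
}


def predict_command_results(context: str, part=None) -> set:
    """Return command combinations available now, optionally for one part."""
    commands = CONTEXT_AVAILABLE.get(context, set())
    if part:
        # Narrow the predictive goal once the user selects a part to operate.
        commands = commands & PART_COMMANDS.get(part, set())
    return commands


print(predict_command_results("sitting"))          # `rotate leg` is absent
print(predict_command_results("standing", "leg"))  # leg commands only
```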

[0072]FIG. 4 illustrates another exemplary process of providing a predictive goal interface through a predictive goal interface providing apparatus.

[0073]The predictive goal interface providing apparatus 100, according to an exemplary embodiment, may analyze a probable predictive goal from a recognized current user context or user intent, and may provide a predictive goal interface based on the analyzed predictive goal.

[0074]Referring to FIG. 4, when a user selects the menu for contents, for example, Harry Potter® 6, manufactured by Time Warner Entertainment Company, L.P., New York, N.Y., the predictive goal interface providing apparatus 100 may recognize the current user context that is analyzed based on the user input data.

[0075]Depending on embodiments, the predictive goal interface providing apparatus 100 may analyze a predictive goal (1. watching Harry Potter® 6) based on the recognized current user context, and may provide a predictive goal interface (2. movie, 3. music, and 4. e-book) corresponding to contents or a service that are connectable based on the analyzed predictive goal (1. watching Harry Potter® 6).

[0076]The predictive goal interface providing apparatus 100 may output the predictive goal or may provide the predictive goal interface, when a confidence level of the predictive goal (1. watching Harry Potter® 6) is greater than or equal to a threshold level. The predictive goal interface providing apparatus 100 may not output the predictive goal or provide the predictive goal interface, when the confidence level of the predictive goal is below a threshold level.
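The behavior of FIG. 4 can be sketched as a mapping from a predicted goal to connectable contents or services, gated by the confidence threshold; the mapping and the 0.7 threshold below are illustrative assumptions.

```python
# Minimal sketch of FIG. 4: offer connectable services for a predicted goal,
# but only when the confidence level meets the threshold.
RELATED_SERVICES = {
    "watching Harry Potter 6": ["movie", "music", "e-book"],
}


def goal_interface(predicted_goal: str, confidence: float, threshold: float = 0.7):
    """Return an interface for the goal, or None if confidence is too low."""
    if confidence < threshold:
        return None  # below threshold: neither goal nor interface is output
    return {"goal": predicted_goal,
            "services": RELATED_SERVICES.get(predicted_goal, [])}


print(goal_interface("watching Harry Potter 6", confidence=0.85))
print(goal_interface("watching Harry Potter 6", confidence=0.40))  # -> None
```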

[0077]The predictive goal interface providing apparatus 100, according to an exemplary embodiment, may recognize a user context and user intent, and may predict and provide a detailed goal to a user.

[0078]FIG. 5 is a flowchart illustrating an exemplary method of providing a predictive goal interface.

[0079]Referring to FIG. 5, the exemplary predictive goal interface providing method may recognize a current user context by analyzing data sensed from a user environment condition and analyzing user input data received from the user in 510.

[0080]The predictive goal interface providing method may analyze a predictive goal based on the recognized current user context in 520.

[0081]A predictive goal may be retrieved from interface data stored in an interface database. The predictive goal may be determined by analyzing the sensed data and the user input data in 520.

[0082]The predictive goal may be analyzed by analyzing at least one of profile information of the user, preference information of the user, and user pattern information included in user model data stored in a user model database, in 520.

[0083]The predictive goal interface providing method may provide a predictive goal interface based on the analyzed predictive goal, in 530.

[0084]The predictive goal may be outputted when it is determined that a confidence level of the predictive goal based on the recognized current user context is greater than or equal to a threshold level, in 520. The predictive goal interface corresponding to the outputted predictive goal may then be provided in 530.
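Putting operations 510 through 530 together, a minimal end-to-end sketch of the method might look as follows; all helper functions and the stand-in scoring are hypothetical.

```python
# Minimal end-to-end sketch of the method of FIG. 5.
def recognize_context(sensed: dict, user_input: dict) -> dict:
    """Operation 510: recognize the current user context."""
    return {**sensed, **user_input}


def analyze_goal(context: dict, threshold: float = 0.7):
    """Operation 520: analyze a predictive goal and its confidence."""
    # Stand-in analysis: a just-taken picture strongly suggests this goal.
    confidence = 0.9 if context.get("just_took_picture") else 0.2
    goal = "change background image to picture 1"
    return (goal, confidence) if confidence >= threshold else (None, confidence)


def provide_interface(goal):
    """Operation 530: provide the predictive goal interface."""
    return f"predictive goal interface: [{goal}]" if goal else "default interface"


ctx = recognize_context({"just_took_picture": True}, {"last_menu": "display"})
goal, _ = analyze_goal(ctx)
print(provide_interface(goal))
```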

[0085]The method described above, including the predictive goal interface providing method according to the above-described example embodiments, may be recorded, stored, or fixed in one or more computer-readable storage media that includes program instructions to be implemented by a computer to cause a processor to execute or perform the program instructions. The media may also include, alone or in combination with the program instructions, data files, data structures, and the like. Examples of computer-readable storage media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD ROM disks and DVDs; magneto-optical media, such as optical disks; and hardware devices that are specially configured to store and perform program instructions, such as read-only memory (ROM), random access memory (RAM), flash memory, and the like. Examples of program instructions include both machine code, such as produced by a compiler, and files containing higher level code that may be executed by the computer using an interpreter. The described hardware devices may be configured to act as one or more software modules in order to perform the operations of the above-described example embodiments, or vice versa. In addition, a computer-readable storage medium may be distributed among computer systems connected through a network and computer-readable codes or program instructions may be stored and executed in a decentralized manner.

[0086]A number of exemplary embodiments have been described above. Nevertheless, it will be understood that various modifications may be made. For example, suitable results may be achieved if the described techniques are performed in a different order and/or if components in a described system, architecture, device, or circuit are combined in a different manner and/or replaced or supplemented by other components or their equivalents. Accordingly, other implementations are within the scope of the following claims.





Similar patent applications:
Date        Title
2009-09-17  Browser use of directory listing for predictive type-ahead
2009-12-17  System and method for providing a guided user interface to process waymark records
2009-03-05  Method and system for providing medication level determination
2009-03-19  System and method for providing a social network aware input dictionary
New patent applications in this class:
Date        Title
2022-05-05  Cognitively rendered event timeline display
2016-04-14  Integrating customized user experiences
2016-04-14  Collaborative item database
2016-03-24  Techniques for maintaining column vectors of relational data within volatile memory
2016-03-24  Managing record location lookup caching in a relational database
New patent applications from these inventors:
Date        Title
2015-05-14  Service providing device, service providing system including user profile server, and service providing method for service providing device
2014-05-29  Device and portable storage device which are capable of transferring rights object, and a method of transferring rights object
2011-10-13  Method and apparatus for displaying power consumption
2011-09-08  Apparatus and method for displaying user interface for transmitting contents