Patent application number | Description | Published |
--- | --- | --- |
20130121540 | Facial Recognition Using Social Networking Information - In particular embodiments, one or more images associated with a primary user are received. The image(s) may comprise single images, a series of related images, or video frames. In each image, one or more faces may be detected and/or tracked. For each face, one or more candidates who may be identified with the face are selected. Each candidate may be connected to the primary user within a social network. A candidate score is computed for each candidate associated with a detected face. Finally, the winning candidate is determined, and a suggestion to identify the detected face as the winning candidate is presented. Some embodiments may operate upon video clips as the video is captured by a mobile device. Some embodiments may operate upon a series of images as they are uploaded to or viewed on a website. | 05-16-2013 |
20140095419 | Enhanced Predictive Input Utilizing a Typeahead Process - Particular embodiments may retrieve information associated with one or more nodes of a social graph from one or more data stores. A node may comprise a user node or a concept node. Each node may be connected by edges to other nodes of a social graph. A first user may be associated with a first user node of the social graph. Particular embodiments may detect that the first user is entering an input term. Predictive typeahead results may be provided as the first user enters the input term. The predictive typeahead results may be based on the input term. Each predictive typeahead result may include at least one image. Each predictive typeahead result may correspond to at least one node of the social graph. | 04-03-2014 |
20140096059 | Systems and Methods for a User-Adaptive Keyboard - In one embodiment, a method includes detecting one or more user interactions associated with a user of a computing device, each interaction occurring at a region associated with an input value, and determining, for at least one user interaction, that the user intended to provide a different input value. Adaptation information is generated for the user based on the at least one user interaction. The adaptation information is stored for the user. A user interaction is detected at a region. The user's intended input value is determined based on the user interaction and the adaptation information. | 04-03-2014 |
20140108935 | Voice Commands for Online Social Networking Systems - In one embodiment, a method includes accessing a social graph that includes a plurality of nodes and edges, receiving from a first user a voice message comprising one or more commands, receiving location information associated with the first user, identifying edges and nodes in the social graph based on the location information, where each of the identified edges and nodes corresponds to at least one of the commands of the voice message, and generating new nodes or edges in the social graph based on the identified nodes or identified edges. | 04-17-2014 |
20140143665 | Generating a Social Glossary - Particular embodiments determine that a textual term is not associated with a known meaning. The textual term may be related to one or more users of the social-networking system. A determination is made as to whether the textual term should be added to a glossary. If so, then the textual term is added to the glossary. Information related to one or more textual terms in the glossary is provided to enhance auto-correction, provide predictive text input suggestions, or augment social graph data. Particular embodiments discover new textual terms by mining information, wherein the information was received from one or more users of the social-networking system, was generated for one or more users of the social-networking system, is marked as being associated with one or more users of the social-networking system, or includes an identifier for each of one or more users of the social-networking system. | 05-22-2014 |
20140152577 | Systems and Methods for a Symbol-Adaptable Keyboard - In one embodiment, a method includes detecting a communication session between a first user and one or more second users. The method also includes determining a social context of the communication session, and determining based at least in part on the social context a set of symbols for communication by the first user in the communication session with the second users. The method further includes providing for display to the first user a set of keys corresponding to the set of symbols. The keys indicate symbols for input by the first user in the communication session. | 06-05-2014 |
20140156262 | Systems and Methods for Character String Auto-Suggestion Based on Degree of Difficulty - In one embodiment, a method includes receiving one or more characters of a character string as a user enters the character string into a graphical user interface (GUI) of a computing device. The method also includes determining a degree of difficulty of the user entering the character string into the GUI of the computing device. The method further includes, if the degree of difficulty meets or exceeds a pre-determined threshold, providing for display to the user an auto-suggestion for completing the character string. | 06-05-2014 |
20140156762 | Replacing Typed Emoticon with User Photo - In one embodiment, a computing device receives input from a user participating in a message session. The computing device detects an emoticon in the received input and identifies an image corresponding to the emoticon. The computing device accesses the image corresponding to the emoticon and replaces the emoticon with the image in the message session. | 06-05-2014 |
20140157153 | Select User Avatar on Detected Emotion - In one embodiment, a computing device receives input from a user participating in a message session. The computing device determines an emotion state of the user based on contents of the received input and identifies an avatar image corresponding to the determined emotion state. The computing device accesses the identified avatar image corresponding to the determined emotion state and displays the identified avatar image. | 06-05-2014 |
20140157179 | Systems and Methods for Selecting a Symbol Input by a User - In one embodiment, a method includes providing for display a first set of touch-screen keys corresponding to a first set of symbols. The method also includes providing for display, at least partially underneath the first set of touch-screen keys, a second set of touch-screen keys corresponding to a second set of symbols. At least a portion of the second set of touch-screen keys is visible through the first set of keys. The method further includes detecting a touch gesture by the user over the first and second sets of keys intended to input a symbol, determining a context of the input, and selecting, based at least in part on the context, either a symbol in the first set or a symbol in the second set as the symbol the user intended to input. | 06-05-2014 |
20140160029 | Systems and Methods for a Trackpad Within a Keyboard - In one embodiment, a method includes providing for display to a user a set of keys within a region of a touch-screen user interface, each key being responsive to a keystroke touch-gesture within an area of the key. The method also includes receiving a pre-defined user input other than a keystroke touch-gesture within an area of a key. The method further includes, in response to the pre-defined user input, providing within the region of the touch-screen user interface a trackpad in place of at least a portion of the set of keys. | 06-12-2014 |
20140208258 | Predictive Input Using Custom Dictionaries - In one embodiment, a method includes detecting that a first user is entering a text input at an input region of a computing device, wherein the input region includes multiple subregions and each subregion is associated with at least one character of a plurality of characters. The method also includes determining, for each character as the first user enters the text input, a probability that the character is next in the text input. The method further includes determining a size of each subregion based on the determined probability of the character associated with the subregion. | 07-24-2014 |
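The candidate-scoring flow in 20130121540 (score each social-graph candidate for a detected face, then suggest the winner) can be sketched roughly as follows. The scoring formula, weights, and helper names are illustrative assumptions; the abstract does not specify them:

```python
# Hypothetical sketch of 20130121540's candidate scoring: each candidate
# for a detected face gets a score, and the highest scorer is suggested.

def score_candidate(candidate, primary_user_friends, cooccurrence_counts):
    """Toy score: count how often the candidate co-occurs in the primary
    user's photos, plus a fixed boost if they are a friend.
    (Illustrative weights only; the patent does not give a formula.)"""
    score = cooccurrence_counts.get(candidate, 0)
    if candidate in primary_user_friends:
        score += 10
    return score

def suggest_identity(candidates, primary_user_friends, cooccurrence_counts):
    """Return the winning candidate for a detected face, or None."""
    if not candidates:
        return None
    return max(
        candidates,
        key=lambda c: score_candidate(c, primary_user_friends, cooccurrence_counts),
    )

winner = suggest_identity(
    ["alice", "bob", "carol"],
    primary_user_friends={"alice", "bob"},
    cooccurrence_counts={"bob": 3, "carol": 5},
)
print(winner)  # bob: friend boost (10) + co-occurrences (3) beats carol's 5
```

In a real system the score would presumably combine many social-graph signals (edge types, tag history, proximity); the single additive boost here only illustrates the select-score-suggest shape of the claim.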
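The threshold test in 20140156262 (offer an auto-suggestion once entering the string becomes difficult enough) might look like the sketch below. The difficulty heuristic, function names, and threshold value are assumptions for illustration:

```python
# Sketch of 20140156262's idea: estimate how hard a partial string is to
# type, and offer a completion once a pre-determined threshold is met.

def entry_difficulty(partial):
    """Toy difficulty: one point per character, two extra points per
    non-letter character (symbols and digits are harder to reach)."""
    return len(partial) + 2 * sum(1 for ch in partial if not ch.isalpha())

def maybe_suggest(partial, completion, threshold=8):
    """Return an auto-suggestion once difficulty meets the threshold."""
    if entry_difficulty(partial) >= threshold:
        return completion
    return None

print(maybe_suggest("p@ssw0", "p@ssw0rd"))  # p@ssw0rd (difficulty 10 >= 8)
print(maybe_suggest("pass", "password"))    # None (difficulty 4 < 8)
```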
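The detect-and-replace step in 20140156762 (find a typed emoticon in a message and swap in the corresponding image) reduces to a pattern substitution; the emoticon-to-image mapping below is a made-up example:

```python
# Minimal sketch of 20140156762's emoticon replacement: detect known
# emoticons in a message and replace each with its image reference.
import re

EMOTICON_IMAGES = {
    ":)": "images/user_smile.png",
    ":(": "images/user_frown.png",
}

# Escape each emoticon so regex metacharacters like ')' match literally.
_PATTERN = re.compile("|".join(re.escape(e) for e in EMOTICON_IMAGES))

def replace_emoticons(message):
    """Replace each known emoticon with an <img> tag for its image."""
    return _PATTERN.sub(
        lambda m: f'<img src="{EMOTICON_IMAGES[m.group(0)]}">', message
    )

print(replace_emoticons("great to see you :)"))
# great to see you <img src="images/user_smile.png">
```

The claim also covers identifying a user-specific image (e.g. a photo of the sender) rather than a stock image; that would only change what the mapping returns, not the substitution itself.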
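Finally, the subregion sizing in 20140208258 (grow each key's touch target in proportion to the probability that its character comes next) can be sketched as below. The unigram-style prefix model, the toy corpus, and the width parameters are assumptions; the abstract specifies neither the probability model nor the sizing function:

```python
# Illustrative sketch of 20140208258: estimate P(next character) from the
# text entered so far, then size each key's subregion proportionally.

def next_char_probabilities(prefix, corpus_words):
    """Estimate P(next char) from toy-corpus words starting with prefix."""
    counts = {}
    for word in corpus_words:
        if word.startswith(prefix) and len(word) > len(prefix):
            ch = word[len(prefix)]
            counts[ch] = counts.get(ch, 0) + 1
    total = sum(counts.values())
    return {ch: n / total for ch, n in counts.items()} if total else {}

def key_widths(probs, keys, region_width=100.0, min_width=2.0):
    """Give every key a floor width; split the remaining width by
    probability so likely next characters get larger subregions."""
    slack = region_width - min_width * len(keys)
    return {k: min_width + slack * probs.get(k, 0.0) for k in keys}

probs = next_char_probabilities("th", ["the", "this", "that", "thus", "tree"])
widths = key_widths(probs, keys=list("aeiu"))
```

A per-user custom dictionary, as the title suggests, would simply replace the toy corpus as the source of the probability estimates.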