Entries |
Document | Title | Date |
20080256452 | CONTROL OF AN OBJECT IN A VIRTUAL REPRESENTATION BY AN AUDIO-ONLY DEVICE - Control of objects in a virtual representation includes receiving signals from audio-only devices, and controlling states of the objects in response to the signals. | 10-16-2008 |
20090013255 | Method and System for Supporting Graphical User Interfaces - A user interface for a customer service application can be created and supported such that the user of the customer service application can utilize that application through a variety of modalities. Further, an interface can be supported in such a manner that certain tasks to be performed using that interface are streamlined, which may take place in combination with the enabling of multi-modality interaction. | 01-08-2009 |
20090044122 | METHOD AND SYSTEM TO PROCESS DIGITAL AUDIO DATA - A method to process digital audio data displays the digital audio data in one or more tracks along a time line in a graphical interface of a computer system and defines arrange regions within the time line of the digital audio data as objects for manipulation. Tracks within a selected arrange region are processed as an entity in accordance with commands received through the graphical user interface. | 02-12-2009 |
20090100340 | ASSOCIATIVE INTERFACE FOR PERSONALIZING VOICE DATA ACCESS - The claimed subject matter according to one aspect provides systems and/or methods that effectuate user development, customization, or utilization of dynamically configurable dialogue flow systems. The system can include devices and components that employ data associated with a user to retrieve navigation panes unique to that user, scan the navigation panes to identify adjustable attributes, use the adjustable attributes to generate voice prompts communicated to the user via handheld devices, receive the user's personalized responses to those prompts, and, based at least on the personalized responses, initiate actions associated with the adjustable attributes. | 04-16-2009 |
20090125813 | METHOD AND SYSTEM FOR PROCESSING MULTIPLE DIALOG SESSIONS IN PARALLEL - A dialog system and method may generate and maintain in parallel multiple dialog sessions, determine to which dialog session a user speech input applies, selectively provide control to one of the dialog sessions, at any one time, to output data to the user, synchronize multiple dialog sessions, and support user interruptions at any time during the dialog sessions. | 05-14-2009 |
20090172546 | SEARCH-BASED DYNAMIC VOICE ACTIVATION - A method, apparatus, and electronic device for voice navigation are disclosed. A voice input mechanism | 07-02-2009 |
20090199101 | SYSTEMS AND METHODS FOR INPUTTING GRAPHICAL DATA INTO A GRAPHICAL INPUT FIELD | 08-06-2009 |
20090210795 | VOICE ACTIVATED SYSTEM AND METHOD TO ENABLE A COMPUTER USER WORKING IN A FIRST GRAPHICAL APPLICATION WINDOW TO DISPLAY AND CONTROL ON-SCREEN HELP, INTERNET, AND OTHER INFORMATION CONTENT IN A SECOND GRAPHICAL APPLICATION WINDOW - A system is disclosed for displaying a second window of a second application while a first window of a first application has input focus in a windowed computing environment having a voice recognition engine. The system comprises a retriever for launching the second application, a user command receiver for receiving commands from the voice recognition engine, and an application manager. The application manager responds to a command from the user command receiver by invoking the retriever to launch the second application and display the second window while the first window maintains substantially uninterrupted input focus. | 08-20-2009 |
20090307595 | System and method for associating semantically parsed verbal communications with gestures - A metaverse system and method for dynamically enacting syntax-based gestures in association with a metaverse application. The metaverse system includes a metaverse server and a semantic gesturing engine. The metaverse server executes a metaverse application. The metaverse application allows a user on the client computer to enter a metaverse virtual world as an avatar via a metaverse client viewer. The semantic gesturing engine is coupled to the metaverse server and identifies a verbal communication from the avatar within the metaverse application, dynamically selects a gesture associated with the verbal communication in response to a determination that an association exists between the verbal communication and the gesture, and dynamically executes the selected gesture to cause the avatar to enact the selected gesture in conjunction with conveying the verbal communication. | 12-10-2009 |
20100031150 | RAISING THE VISIBILITY OF A VOICE-ACTIVATED USER INTERFACE - A system is configured to enable a user to assert voice-activated commands. When the user issues a non-ambiguous command, the system activates a corresponding control. The area of activity on the user interface is visually highlighted to emphasize to the user that what they spoke caused an action. In one specific embodiment, the highlighting involves floating the text the user uttered to a visible user interface component. | 02-04-2010 |
20100031151 | Enabling speech within a multimodal program using markup - A method for speech enabling an application can include the step of specifying a speech input within a speech-enabled markup. The speech-enabled markup can also specify an application operation that is to be executed responsive to the detection of the speech input. After the speech input has been defined within the speech-enabled markup, the application can be instantiated. The specified speech input can then be detected and the application operation can be responsively executed in accordance with the specified speech-enabled markup. | 02-04-2010 |
20100100821 | WINDOW DETECTION SYSTEM AND METHOD FOR OPERATING THE SAME - A window detection system is disclosed. The window detection system includes a CPU, a window detection module and a voice identification module. The window detection module has a virtual assistant for receiving and processing signals from an external voice inputting unit. The window detection module is connected to a preset display for synchronized detection of the present work window among a plurality of windows in the display. When the voice inputting unit receives the user's voice command, the virtual assistant of the window detection system processes and judges the execution command, and then matches the voice command with the signal of the present work window in the display detected by the window detection module. The voice identification module searches through the CPU to retrieve the information from an internal database of the host. Matching with the present work window for display can thus be achieved via quick operation, which improves identification efficiency. | 04-22-2010 |
20100180202 | User Interfaces for Electronic Devices - A mobile telephone. | 07-15-2010 |
20100275122 | CLICK-THROUGH CONTROLLER FOR MOBILE INTERACTION - A “Click-Through Controller” uses various mobile electronic devices (e.g., cell phones, media players, digital cameras, etc.) to provide real-time interaction with content (e.g., maps, places, images, documents, etc.) displayed on the device's screen via selection of one or more “overlay menu items” displayed on top of that content. Navigation through displayed contents is provided by recognizing 2D and/or 3D device motions and rotations. This allows users to navigate through the displayed contents by simply moving the mobile device. Overlay menu items activate predefined or user-defined functions to interact with the content that is directly below the selected overlay menu item on the display. In various embodiments, there is a spatial correspondence between the overlay menu items and buttons or keys of the mobile device (e.g., a cell phone dial pad or the like) such that overlay menu items are directly activated by selection of one or more corresponding buttons. | 10-28-2010 |
20100313133 | AUDIO AND POSITION CONTROL OF USER INTERFACE - A method is provided for using a wireless controller to interact with a user interface presented on a display. The method includes receiving an audio signal and a position signal from the wireless controller. The audio signal is based on an audio input applied to the wireless controller, while the position signal is based on a position input applied to the wireless controller. The method includes selecting a user interface item displayed on the display, based on the audio signal and the position signal. One or more position signals from the wireless controller may also be received and processed to cause navigation of the user interface to highlight a user interface item for selection. | 12-09-2010 |
20110016397 | POSITIONING A VIRTUAL SOUND CAPTURING DEVICE IN A THREE DIMENSIONAL INTERFACE - A method, system, and computer-readable product for positioning a virtual sound capturing device in a graphical user interface (GUI) are disclosed. The method includes displaying a virtual sound capturing device in relation to a virtual sound producing device in a three dimensional interface and in a two dimensional graphical map. Additionally, the method includes adjusting the display of the virtual sound capturing device in relation to the virtual sound producing device in both the three dimensional interface and the two dimensional graphical map in response to commands received from an input device. | 01-20-2011 |
20110035671 | IMAGE PROCESSING DEVICE, METHOD OF SHARING VOICE OPERATION HISTORY, AND METHOD OF SHARING OPERATION ITEM DISTINGUISH TABLE - The present invention is intended to share information about voice operation in an image processing device having a voice operation function with another image processing device, thereby improving operability when using the other device. An image processing device connectable to a network comprises: an operational panel for displaying a menu screen and receiving a manual operation to the menu screen; a speech input part for inputting speech; an operation item specifying part for specifying an operation item to be a target of operation based on a voice word; a voice operation control part for executing a processing corresponding to the specified operation item; a history information generation part for generating voice operation history information in which the voice word and the specified operation item are associated; and a transmission part for transmitting the generated voice operation history information to another image processing device through the network. | 02-10-2011 |
20110083075 | EMOTIVE ADVISORY SYSTEM ACOUSTIC ENVIRONMENT - An emotive advisory system for use by one or more occupants of an automotive vehicle includes a directional speaker array, and a computer. The computer is configured to determine an audio direction, and output data representing an avatar for visual display. The computer is further configured to output data representing a spoken statement for the avatar for audio play from the speaker array such that the audio from the speaker array is directed in the determined audio direction. A visual appearance of the avatar and the spoken statement for the avatar convey a simulated emotional state. | 04-07-2011 |
20110099476 | DECORATING A DISPLAY ENVIRONMENT - Disclosed herein are systems and methods for decorating a display environment. In one embodiment, a user may decorate a display environment by making one or more gestures, using voice commands, using a suitable interface device, and/or combinations thereof. A voice command can be detected for user selection of an artistic feature, such as, for example, a color, a texture, an object, and a visual effect for decorating in a display environment. The user can also gesture for selecting a portion of the display environment for decoration. Next, the selected portion of the display environment can be altered based on the selected artistic feature. The user's motions can be reflected in the display environment by an avatar. In addition, a virtual canvas or three-dimensional object can be displayed in the display environment for decoration by the user. | 04-28-2011 |
20110119590 | SYSTEM AND METHOD FOR PROVIDING A SPEECH CONTROLLED PERSONAL ELECTRONIC BOOK SYSTEM - A system and method in a personal electronic book system for providing speech-controlled operation thereof. As non-limiting examples, an electronic book reader may comprise one or more modules operable to utilize a default set of speech commands and/or develop a suite of customized speech commands to be utilized for controlling operation of the electronic book reader. | 05-19-2011 |
20110138286 | Voice assisted visual search - The invention discloses a method and apparatus for (a) processing a voice input from the user of computer technology, (b) recognizing potential objects of interest, and (c) using electronic displays to present visual artefacts directing the user's attention to the spatial locations of the objects of interest. The voice input is matched with attributes of the information objects, which are visually presented to the viewer. If one or several objects match the voice input sufficiently, the system visually marks or highlights the object or objects to help the viewer direct his or her attention to the matching object or objects. The sets of visual objects and their attributes, used in the matching, may be different for different user tasks and types of visually displayed information. If the user views only a portion of a document and the user's voice input matches an information object, which is contained in the entire document but not displayed in the current portion, the system displays a visual artefact, which indicates the direction and distance to the object. | 06-09-2011 |
20110138287 | VOICE ACTIVATED SYSTEM AND METHOD TO ENABLE A COMPUTER USER WORKING IN A FIRST GRAPHICAL APPLICATION WINDOW TO DISPLAY AND CONTROL ON-SCREEN HELP, INTERNET, AND OTHER INFORMATION CONTENT IN A SECOND GRAPHICAL APPLICATION WINDOW - A system is disclosed for navigating the display of content in a windowed computing environment, the system comprising a computing device comprising a voice recognition engine, a first window and a second window, wherein the second window comprises at least one hyperlink linked to additional content. A user command receiver receives a voice command from a user while the user is working in the first window, and in response to the voice command follows the hyperlink in the second window while the user remains in productive control of the first window, wherein following the hyperlink in the second window causes the additional content to be displayed in the second window. | 06-09-2011 |
20110271194 | VOICE AD INTERACTIONS AS AD CONVERSIONS - This specification describes technologies relating to content presentation. In general, one aspect of the subject matter described in this specification can be embodied in methods that include the actions of presenting a content item to a user; receiving a user input indicating a voice interaction; receiving a voice input from the user; transmitting the voice input to a content system; receiving a command responsive to the voice input; and executing, using one or more processors, the command including modifying the content item. Other embodiments of this aspect include corresponding systems, apparatus, and computer program products. | 11-03-2011 |
20110320950 | User Driven Audio Content Navigation - Systems and associated methods configured to provide user-driven audio content navigation for the spoken web are described. Embodiments allow users to skim audio for content that seems to be of relevance to the user, similar to visual skimming of standard web pages, and mark points of interest within the audio. Embodiments provide techniques for navigating audio content while interacting with information systems in a client-server environment, where the client device can be a simple, standard telephone. | 12-29-2011 |
20120011443 | ENABLING SPEECH WITHIN A MULTIMODAL PROGRAM USING MARKUP - A method for speech enabling an application can include the step of specifying a speech input within a speech-enabled markup. The speech-enabled markup can also specify an application operation that is to be executed responsive to the detection of the speech input. After the speech input has been defined within the speech-enabled markup, the application can be instantiated. The specified speech input can then be detected and the application operation can be responsively executed in accordance with the specified speech-enabled markup. | 01-12-2012 |
20120089914 | USER INTERFACES FOR NAVIGATING STRUCTURED CONTENT - User interfaces for navigating structured content. In one example embodiment, a user interface includes a grid, a header row of cells each positioned in a separate column of the grid, a header column of cells each positioned in a separate row of the grid, a plurality of multi-dimensional cells each having a unique position in the grid, and a viewport that displays only a portion of the grid. Upon reception of an indication that the portion of the grid displayed within the viewport should simultaneously scroll both horizontally and vertically, the multi-dimensional cells of the grid are configured to scroll simultaneously within the viewport both horizontally and vertically, and the header row cells and header column cells of the grid are configured to scroll in a synchronous manner so as to remain visible in the viewport and remain aligned with the rows and columns of multi-dimensional cells. | 04-12-2012 |
20120089915 | Method and Device for Temporally Sequenced Adaptive Recommendations of Activities - A method and device for temporally sequenced recommendations of activities delivers to users temporally sequenced objects comprising user activities, wherein the delivered objects are selected based, at least in part, on inferences of preferences from usage behaviors. The delivered objects may include activities associated with processor-based devices in addition to human activities. Variations of the system and method include delivering the temporally sequenced objects in accordance with the contents of the objects and user feedback with regard to the objects. Information as to why objects were delivered to users may be provided to the users. | 04-12-2012 |
20120096358 | NAVIGATING AN INFORMATION HIERARCHY USING A MOBILE COMMUNICATION DEVICE - Systems and methods are provided for navigating an information hierarchy using a mobile communication device. The method comprises causing a plurality of selectable items to be presented on a display associated with the mobile communication device, in response to receiving, via an audio input device associated with the mobile communication device, a first voice command indicating that one of the plurality of selectable items is to be selected, causing one of the selectable items in the plurality of selectable items to be displayed differently from the other selectable items to thereby form an accentuated selectable item, and, in response to receiving, via the audio input device, a second voice command indicating that the accentuated selectable item is to be selected, causing information associated with the accentuated selectable item to be presented on the display. | 04-19-2012 |
20120110456 | INTEGRATED VOICE COMMAND MODAL USER INTERFACE - A system and method are disclosed for providing a NUI system including a speech reveal mode where visual objects on a display having an associated voice command are highlighted. This allows a user to quickly and easily identify available voice commands, and also enhances an ability of a user to learn voice commands as there is a direct association between an object and its availability as a voice command. | 05-03-2012 |
20120110457 | METHOD AND APPARATUS FOR AUTOMATICALLY UPDATING A PRIMARY DISPLAY AREA - Receiving commands from a remote controller and automatically activating display areas for cursor navigation. Content display areas within a display frame respectively correspond to a variety of content items and include a primary display area wherein cursor navigation is activated and secondary display areas wherein cursor navigation is prevented. Remote controller navigational commands, for example, then allow cursor based navigation for the content item currently displayed in the primary display area. A content selection command such as a number key input of the remote controller allows immediate and automatic updating of the primary display area to include a desired content item that is associated to the command (e.g., the particular number). | 05-03-2012 |
20120278719 | METHOD FOR PROVIDING LINK LIST AND DISPLAY APPARATUS APPLYING THE SAME - A method of providing a list of links on a display apparatus and a display apparatus are provided. The method includes recognizing a voice spoken by a user, searching, among links included in a web page being currently displayed on the display apparatus, for a link including an index which coincides with the voice spoken by the user and generating a list of one or more links, each including the index which coincides with the voice spoken by the user. | 11-01-2012 |
20120278720 | INFORMATION PROCESSING APPARATUS, METHOD AND PROGRAM - An information processing apparatus includes an imaging unit, an icon display control unit causing a display to display an operation icon, a pickup image display processing unit causing the display to sequentially display an input operation region image constituted by, among pixel regions constituting an image picked up by the imaging unit, a pixel region including at least a portion of a hand of a user, an icon management unit managing event issue definition information, which is a condition for determining that the operation icon has been operated by the user, for each operation icon, an operation determination unit determining whether the user has operated the operation icon based on the input operation region image displayed in the display and the event issue definition information, and a processing execution unit performing predetermined processing corresponding to the operation icon in accordance with a determination result by the operation determination unit. | 11-01-2012 |
20120297304 | Adaptive Operating System - An adaptive operating system is described that adjusts a set of applications and/or a set of application icons presented on a user interface based on ambient noise and/or ambient light conditions at the mobile device. In some implementations, a sensor on a mobile device can detect the amount of ambient noise and/or light at the mobile device and adjust the presentation of sound-related and/or light-related applications or application icons on a graphical interface of the mobile device. In some implementations, a set of applications and/or a set of application icons presented on a user interface can be adjusted based on movement of the mobile device detected by a motion sensor of the mobile device. | 11-22-2012 |
20120304067 | APPARATUS AND METHOD FOR CONTROLLING USER INTERFACE USING SOUND RECOGNITION - An apparatus and method for controlling a user interface using sound recognition are provided. The apparatus and method may detect a position of a hand of a user from an image of the user, and may determine a point in time for starting and terminating the sound recognition, thereby precisely classifying the point in time for starting the sound recognition and the point in time for terminating the sound recognition without a separate device. Also, the user may control the user interface intuitively and conveniently. | 11-29-2012 |
20120324356 | User Driven Audio Content Navigation - Systems and associated methods configured to provide user-driven audio content navigation for the spoken web are described. Embodiments allow users to skim audio for content that seems to be of relevance to the user, similar to visual skimming of standard web pages, and mark points of interest within the audio. Embodiments provide techniques for navigating audio content while interacting with information systems in a client-server environment, where the client device can be a simple, standard telephone. | 12-20-2012 |
20130019175 | SUBMENUS FOR CONTEXT BASED MENU SYSTEM - One or more submenus associated with context based menus are provided. A context based menu may include top level commands/items available for execution on selected content or activation of submenu(s) that include additional executable commands. Additional commands may be executed through the submenu(s) by tap, swipe, or press and hold actions. Upon selection of a termination item or execution of a command, a submenu may be hidden and/or a parent menu displayed. | 01-17-2013 |
20130019176 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM (inventors: Ken Miyashita, Tokyo, JP; Tomohiko Hishinuma, Kanagawa, JP; Yoshihito Ohki, Tokyo, JP; Ryohei Morimoto, Kanagawa, JP; Junya Ono, Kanagawa, JP) - An information processing apparatus includes a display, an input unit, and a controller. The input unit is configured to receive an input of a first keyword from a user. The controller is configured to retrieve first character information including the input first keyword from a database configured to store a plurality of character information items converted from a plurality of voice information items by voice recognition processing, extract a second keyword that is included in the first character information acquired by the retrieval and is different from the first keyword, and control the display to display a list of items including first identification information with which the acquired first character information is identified and the second keyword included in the first character information. | 01-17-2013 |
20130036357 | SYSTEMS AND METHODS FOR AUTOMATICALLY SWITCHING ON AND OFF A "SCROLL-ON OUTPUT" MODE - Systems and methods for automatically controlling "scroll-on output" ("SOO") operations of a computing device. | 02-07-2013 |
20130080898 | SYSTEMS AND METHODS FOR ELECTRONIC COMMUNICATIONS - Embodiments of the invention provide a system for enhancing user interaction with objects connected to a network. The system includes a processor, a display screen, and a memory coupled to the processor. The memory comprises a database including a list of two or more objects and instructions executable by the processor to display a menu. The menu is associated with at least two independent objects, and the two independent objects are produced by two independent vendors. | 03-28-2013 |
20130132845 | Spatial Visual Effect Creation And Display Such As For A Screensaver - Techniques are presented that include determining, using signals captured from two or more microphones configured to detect an acoustic signal from one or more sound sources, one or more prominent sound sources based on the one or more sound sources. The techniques also include determining one or more directions relative to a position of one or more of the two or more microphones for prominent sound source(s). The techniques further include outputting information suitable to be viewed on a display, the information providing for the prominent sound source(s) a visual effect indicating at least in part the one or more directions, relative to a position of one or more of the microphones, of the prominent sound source(s) in the acoustic signal. The information and the corresponding visual effect(s) may be presented on a display, e.g., as part of a screensaver. | 05-23-2013 |
20130145272 | SYSTEM AND METHOD FOR PROVIDING AN INTERACTIVE DATA-BEARING MIRROR INTERFACE - An interactive interface of an embodiment of the present invention comprises a mirror surface; a sensor configured to receive an input from a user; a processor communicatively coupled to the sensor; the processor configured to identify a user identification based on the input, retrieve user specific content associated with the user identification; and identify one or more interactions with the user, wherein the processor comprises a speech processor and a video processor; and an output configured to display content associated with the user identification and responsive to the interactions on the mirror surface. | 06-06-2013 |
20130185640 | COMPUTERIZED INFORMATION AND DISPLAY APPARATUS - A computerized information and display apparatus useful for providing information to a user via a display. In one embodiment, the apparatus comprises a processor and network interface and computer readable medium having at least one computer program disposed thereon, the at least one program being configured to receive a speech input from the user, and obtain information relating to the input. In one variant, at least a portion of the information is obtained via the network interface from a remote server, and the apparatus includes two components in wireless communication with one another. | 07-18-2013 |
20130205214 | SMART INFORMATION AND DISPLAY APPARATUS - A computerized information apparatus useful for providing information to a user via a display. In one embodiment, the apparatus comprises a processor and network interface and computer readable medium having at least one computer program disposed thereon, the at least one program being configured to receive a speech input from the user, and obtain information relating to the input. In one variant, at least a portion of the information is obtained via the network interface from a remote server. An information and control system for personnel transport devices. In one embodiment, the information and control system is coupled to the elevator system of a building, and includes a touch panel input device, a flat panel display having a touch sensitive screen, and speech recognition and synthesis systems serving each elevator car. The speech recognition and synthesis systems and input device(s) are operatively coupled to a processor and storage devices having a plurality of different types of data stored thereon. Each elevator car is also a client connected to a LAN, WAN, intranet, or Internet, and capable of exchanging data with and retrieving data therefrom. Functions performed by the information and control system include a voice-actuated building directory, download of selected data to personal electronic devices (PEDs), monitoring of areas adjacent to the elevator car on destination floors, and control of lighting and security monitoring in selectable areas of destination floors. The system is also optionally fitted with an RFID interrogator/reader capable of recognizing RFID tags carried by passengers on the elevator, thereby granting access to various controlled locations automatically after password authentication. The RFID system also allows the authenticated passenger(s) to control utilities such as lighting and HVAC within specific zones on their destination floors. The information and control system is also optionally equipped with an occupancy estimating sub-system which allows elevator cars to bypass calling floors when their capacity is reached or exceeded. | 08-08-2013 |
20130219277 | Gesture and Voice Controlled Browser - A computer readable storage medium stores instructions defining a mobile device browser. The mobile device browser supports direct command inputs and executable instructions to correlate a proxy command to a selected direct command input. The proxy command is alternately expressed as a gesture and a voice command. The selected direct command input is automatically executed by the mobile device browser. | 08-22-2013 |
20130227418 | CUSTOMIZABLE GESTURES FOR MOBILE DEVICES - Users are enabled to define and modify mappings between ( | 08-29-2013 |
20130227419 | APPARATUS AND METHOD FOR SWITCHING ACTIVE APPLICATION - An apparatus to switch an application includes an input unit to receive an input for switching a foreground application, the input including an application distinguishing portion associated with an application switching portion, a control unit to determine an application to be run in the foreground among the applications running in a background, the application distinguishing portion corresponding to the application, and an output unit to output the application in a display as the foreground application. | 08-29-2013 |
20130239000 | Searchlight Navigation Using Headtracker To Reveal Hidden or Extra Document Data - In one embodiment, a method for displaying a user interface on a display of a head worn computer can include displaying a first layer of information in the user interface on a display of the head worn computer. The method can further include receiving a directional input from body movement, eye tracking, or hand gestures. The method can additionally include highlighting an area of the user interface on the display with a second layer of information. The area can be located in the user interface based on the received directional input. | 09-12-2013 |
20130246920 | METHOD OF ENABLING VOICE INPUT FOR A VISUALLY BASED INTERFACE - A method of enabling voice input for a graphical user interface (GUI) based application on an electronic device. The method includes: obtaining required properties of one or more user interface objects of the GUI-based application, wherein the one or more user interface objects include one or more input objects; receiving a voice input; extracting from the voice input one or more elements; associating the one or more elements with the one or more input objects; identifying, based on said associating, an input object having a required property which is not satisfied; and outputting, based on the required property, audio output for a prompt for a further voice input. | 09-19-2013 |
20130275875 | Automatically Adapting User Interfaces for Hands-Free Interaction - The method includes automatically, without user input and without regard to whether a digital assistant application has been separately invoked by a user, determining that the electronic device is in a vehicle. In some implementations, determining that the electronic device is in a vehicle comprises detecting that the electronic device is in communication with the vehicle (e.g., via a wired or wireless communication techniques and/or protocols). The method also includes, responsive to the determining, invoking a listening mode of a virtual assistant implemented by the electronic device. In some implementations, the method also includes limiting the ability of a user to view visual output presented by the electronic device, provide typed input to the electronic device, and the like. | 10-17-2013 |
20130283167 | Flip-Through Format to View Notification and Related Items - Embodiments relate to systems and methods providing a flip-through format for viewing notification of messages and related items on devices, for example personal mobile devices such as smart phones. According to an embodiment, an unread item most recently received is shown in full screen on the mobile device. While the user is viewing this item, the device will automatically retrieve and load into a cache memory the next most recently received item. When the user is done viewing the item most recently received, the user can swipe a finger across the touch screen to trigger a page-flipping animation and display of the next most recently received item. Embodiments avoid the user having to click back and forth between a list of notifications/links and corresponding notification items. | 10-24-2013 |
20130283168 | Conversation User Interface - A conversation user interface enables users to better understand their interactions with computing devices, particularly when speech input is involved. The conversation user interface conveys a visual representation of a conversation between the computing device, or virtual assistant thereon, and a user. The conversation user interface presents a series of dialog representations that show input from a user (verbal or otherwise) and responses from the device or virtual assistant. Associated with one or more of the dialog representations are one or more graphical elements to convey assumptions made to interpret the user input and derive an associated response. The conversation user interface enables the user to see the assumptions upon which the response was based, and to optionally change the assumption(s). Upon change of an assumption, the conversation GUI is refreshed to present a modified dialog representation of a new response derived from the altered set of assumptions. | 10-24-2013 |
20130283169 | VOICE-BASED VIRTUAL AREA NAVIGATION - Examples of systems and methods for voice-based navigation in one or more virtual areas that define respective persistent virtual communication contexts are described. These examples enable communicants to use voice commands to, for example, search for communication opportunities in the different virtual communication contexts, enter specific ones of the virtual communication contexts, and bring other communicants into specific ones of the virtual communication contexts. In this way, these examples allow communicants to exploit the communication opportunities that are available in virtual areas, even when hands-based or visual methods of interfacing with the virtual areas are not available. | 10-24-2013 |
20130326353 | SYSTEM AND METHOD FOR CONTEXT DRIVEN VOICE INTERFACE IN HANDHELD WIRELESS MOBILE DEVICES - A wireless communication device with a voice-input and display-touch interface has an interface processor that enables, in part (i) either a display-touch or a voice-input based interface, and in part (ii) only a voice-input based interface for efficiently searching information databases. A sequence of a context-based search verb and search term is selected via either touch or voice selection, and then the human-articulated voice query is expanded using a culture and a world intelligence dictionary for conducting more efficient searches through a voice-based input. | 12-05-2013 |
20130339858 | Apparatus and Methods for Managing Resources for a System Using Voice Recognition - The technology of the present application provides a method and apparatus to manage speech resources. The method includes detecting a change in a speech application that requires the use of different resources. On detection of the change, the method loads the different resources without the user needing to exit the currently executing speech application. The apparatus provides a switch (which could be a physical or virtual switch) that causes a speech recognition system to identify audio as either commands or text. | 12-19-2013 |
20130339859 | INTERACTIVE NETWORKED HEADPHONES - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for connecting an interactive wearable device with a network. In one aspect, a method includes loading content from a playlist; recognizing contextual information relating to the content; determining the location of the user; requesting supplemental content via a network based on the contextual information and the location; displaying supplemental information to a user; interacting with the supplemental information at least in part via an interactive headphone. | 12-19-2013 |
20130346867 | SYSTEMS AND METHODS FOR AUTOMATICALLY GENERATING A MEDIA ASSET SEGMENT BASED ON VERBAL INPUT - Systems and methods for automatically generating a media asset segment based on verbal input are provided. Verbal input is received from a user while a media asset is being presented to the user. The verbal input is processed to extract an instruction and comment information included in the verbal input. The instruction is cross-referenced with a command database to determine whether the instruction corresponds to a segment generation command. In response to determining the instruction corresponds to the segment generation command, a segment that includes a portion of the media asset that was presented to the user when the verbal input was received is generated. The comment information is associated with the generated segment. A message that includes the generated segment and the associated comment information is transmitted to a remote server. | 12-26-2013 |
20140033045 | GESTURES COUPLED WITH VOICE AS INPUT METHOD - A user interface is provided for one or more users to interact with a computer using gestures coupled with voice to navigate a network that is displayed on the computer screen by the computer application software. The combination of a gesture with a voice command is used to improve the reliability of the interpretation of the intent of the user. In addition, the active user who is allowed to control the software is identified through the combined input, and the movements of other users are discarded. | 01-30-2014 |
20140040745 | METHODS AND APPARATUS FOR VOICED-ENABLING A WEB APPLICATION - Methods and apparatus for voice-enabling a web application, wherein the web application includes one or more web pages rendered by a web browser on a computer. At least one information source external to the web application is queried to determine whether information describing a set of one or more supported voice interactions for the web application is available, and in response to determining that the information is available, the information is retrieved from the at least one information source. Voice input for the web application is then enabled based on the retrieved information. | 02-06-2014 |
20140040746 | METHODS AND APPARATUS FOR VOICED-ENABLING A WEB APPLICATION - Methods and apparatus for voice-enabling a web application, wherein the web application includes one or more web pages rendered by a web browser on a computer. At least one information source external to the web application is queried to determine whether information describing a set of one or more supported voice interactions for the web application is available, and in response to determining that the information is available, the information is retrieved from the at least one information source. Voice input for the web application is then enabled based on the retrieved information. | 02-06-2014 |
20140040747 | METHOD FOR DISPLAYING CONTENT ITEMS ON AN ELECTRONIC DEVICE - Content items can be viewed on an electronic device based upon a property defined for each of the content items, allowing the user to navigate through the content list and view or select content items. When navigating portions of the list where the selected property has no associated content items, the result may be the presentation of no data. In order to re-orient users, the selection is modified to display at least one content item. | 02-06-2014 |
20140040748 | Interface for a Virtual Digital Assistant - The digital assistant displays a digital assistant object in an object region of a display screen. The digital assistant then obtains at least one information item based on a speech input from a user. Upon determining that the at least one information item can be displayed in its entirety in the display region of the display screen, the digital assistant displays the at least one information item in the display region, where the display region and the object region are not visually distinguishable from one another. Upon determining that the at least one information item cannot be displayed in its entirety in the display region of the video display screen, the digital assistant displays a portion of the at least one information item in the display region, where the display region and the object region are visually distinguishable from one another. | 02-06-2014 |
20140040749 | SYSTEM AND METHOD OF CONTROLLING A GRAPHICAL USER INTERFACE AT A WIRELESS DEVICE - A method of controlling a graphical user interface (GUI) at a wireless device is disclosed and includes storing a set of audio GUI controls at an interactive voice response server and creating an audio GUI control string that is to be communicated to the wireless device within a voice stream. The audio GUI control string corresponds to a text string that is selectably presentable at the wireless device. Further, the method can include embedding the audio GUI control string within the voice stream. Additionally, the method can include transmitting the voice stream with the embedded audio GUI control string to the wireless device. | 02-06-2014 |
20140082501 | CONTEXT AWARE SERVICE PROVISION METHOD AND APPARATUS OF USER DEVICE - A context aware service provision method and apparatus for recognizing the user context and executing an action corresponding to the user context according to a rule defined by the user and feeding back the execution result to the user interactively are provided. The method for providing a context-aware service includes receiving a user input, the user input being at least one of a text input and a speech input, identifying a rule including a condition and an action corresponding to the condition based on the received user input, activating the rule to detect a context which corresponds to the condition of the rule, and executing, when the context is detected, the action corresponding to the condition. | 03-20-2014 |
20140096004 | BROWSER, AND VOICE CONTROL METHOD AND SYSTEM FOR BROWSER OPERATION - A voice control method and system for browser operations are described. The method comprises the steps of: receiving an inputted voice control command; finding, in a predetermined web page template, the template entry whose command field value matches the voice control command, wherein the predetermined web page template includes a plurality of template entries and each of the template entries contains an element field, a command field, and an operation field; and searching for an element in a current web page, wherein the element corresponds to the value of the element field in the template entry, such that the element executes the operation corresponding to the operation field. The present method performs voice control according to the web page content, thereby improving the user's voice-interaction experience. | 04-03-2014 |
20140101553 | MEDIA INSERTION INTERFACE - A computing device may output a graphical user interface for display at a presence-sensitive screen including an edit region and a graphical keyboard. The computing device may receive an indication of a gesture detected at a location of the presence-sensitive screen within the graphical keyboard. In response, the computing device may output for display at the presence-sensitive screen, a modified graphical user interface including a media insertion user interface with a plurality of media insertion options. The computing device may receive an indication of a selection of at least one media insertion option associated with a media item. The computing device may output for display at the presence-sensitive screen, an updated graphical user interface including the media item within the edit region. | 04-10-2014 |
20140108935 | Voice Commands for Online Social Networking Systems - In one embodiment, a method includes accessing a social graph that includes a plurality of nodes and edges, receiving from a first user a voice message comprising one or more commands, receiving location information associated with the first user, identifying edges and nodes in the social graph based on the location information, where each of the identified edges and nodes corresponds to at least one of the commands of the voice message, and generating new nodes or edges in the social graph based on the identified nodes or identified edges. | 04-17-2014 |
20140136981 | METHODS AND APPARATUSES FOR PROVIDING TANGIBLE CONTROL OF SOUND - Methods and apparatuses for providing tangible control of sound are provided and described as embodied in a system that includes a sound transducer array along with a touch-surface-enabled display table. The array may include a group of transducers (multiple speakers and/or microphones) configured to perform spatial processing of signals for the group of transducers so that sound rendering (in configurations where the array includes multiple speakers), or sound pick-up (in configurations where the array includes multiple microphones), has spatial patterns (or sound projection patterns) that are focused in certain directions while reducing disturbances from other directions. Users may directly adjust parameters related to sound projection patterns by exercising one or more commands on the touch surface while receiving visual feedback, and the commands may be adjusted according to the visual feedback received from changes of the display on the touch surface. | 05-15-2014 |
20140149870 | MODIFYING KEY FUNCTIONALITY BASED ON CONTEXT AND INPUT ASSOCIATED WITH A USER INTERFACE - Methods and devices are provided for modifying key functionality based on context of a user interface. A system configured to practice the method presents a graphical user interface, the graphical user interface providing a user interface element for data entry, and analyzes a user interaction via the user interface element and a context of the user interaction within the user interface element. Based on the user interaction and the context, the system activates a non-default mode for a key having a default mode. The key can be a physical key or a key in a virtual on-display keyboard. The default mode can be inserting a character such as a period or a comma, and the non-default mode can be launching voice input. The system receives, via the key, input directed to the user interface element, and performs an action associated with the non-default mode in response to the input. | 05-29-2014 |
20140157129 | METHODS AND SYSTEMS FOR GESTURE-BASED PETROTECHNICAL APPLICATION CONTROL - Gesture-based petrotechnical application control. At least some embodiments involve controlling the view of a petrotechnical application by capturing images of a user; creating a skeletal map based on the user in the images; recognizing a gesture based on the skeletal map; and implementing a command based on the recognized gesture. | 06-05-2014 |
20140164928 | MOBILE TERMINAL AND CONTROLLING METHOD THEREOF - A mobile terminal and controlling method thereof are disclosed, which facilitate control of the mobile terminal using the user's eyes and gestures and which minimize the user's touch inputs for manipulating the mobile terminal. The present invention includes a camera, a microphone, a display unit, and a controller determining a location on the display unit faced by the eyes of a user, the controller, upon determining an object displayed at the determined location, performing on the object a function corresponding to at least one of a voice recognized via the microphone and a gesture captured via the camera. | 06-12-2014 |
20140173440 | SYSTEMS AND METHODS FOR NATURAL INTERACTION WITH OPERATING SYSTEMS AND APPLICATION GRAPHICAL USER INTERFACES USING GESTURAL AND VOCAL INPUT - Systems and methods for natural interaction with graphical user interfaces using gestural and vocal input in accordance with embodiments of the invention are disclosed. In one embodiment, a method for interpreting a command sequence that includes a gesture and a voice cue to issue an application command includes receiving image data, receiving an audio signal, selecting an application command from a command dictionary based upon a gesture identified using the image data, a voice cue identified using the audio signal, and metadata describing combinations of a gesture and a voice cue that form a command sequence corresponding to an application command, retrieving a list of processes running on an operating system, selecting at least one process based upon the selected application command and the metadata, where the metadata also includes information identifying at least one process targeted by the application command, and issuing an application command to the selected process. | 06-19-2014 |
20140181672 | INFORMATION PROCESSING METHOD AND ELECTRONIC APPARATUS - The present disclosure discloses an information processing method and an electronic apparatus for solving the technical problem that, when a user cannot remember a fixed voice instruction and inputs a different voice instruction, the electronic device either fails to respond or responds incorrectly. The method includes determining a current application corresponding to a current application interface on the display unit as a first application, the current application interface being a first application interface to which the first application corresponds; obtaining M input objects on the first application interface, M being an integer equal to or larger than one; and processing the M input objects to obtain M pieces of prompt information of M character phrases corresponding to the M input objects, the M pieces of prompt information being capable of being displayed on the display unit. | 06-26-2014 |
20140189518 | MOBILE TERMINAL - A mobile terminal is disclosed. Upon sensing a user's command, the mobile terminal may determine whether an application is being executed and whether the user is viewing the screen on which the application is executed, and, based on a result of the determination, may vary the form in which the result of executing the corresponding function is displayed. | 07-03-2014 |
20140208209 | ELECTRONIC DEVICE AND METHOD OF CONTROLLING THE SAME - An electronic device including a touchscreen; a voice recognition module; and a controller configured to receive a voice input through the voice recognition module when the voice recognition module has been activated, convert the voice input into text, display an object indicator for editing a preset word included in the text, receive a selection of the object indicator, provide an editing option for changing the displayed object indicator into new text to be displayed on the touchscreen, and display the new text when the editing option is selected. | 07-24-2014 |
20140208210 | DISPLAYING SPEECH COMMAND INPUT STATE INFORMATION IN A MULTIMODAL BROWSER - Methods, systems, and products are disclosed for displaying speech command input state information in a multimodal browser including displaying an icon representing a speech command type and displaying an icon representing the input state of the speech command. In typical embodiments, the icon representing a speech command type and the icon representing the input state of the speech command also includes attributes of a single icon. Typical embodiments include accepting from a user a speech command of the speech command type, changing the input state of the speech command, and displaying another icon representing the changed input state of the speech command. Typical embodiments also include displaying the text of the speech command in association with the icon representing the speech command type. | 07-24-2014 |
20140237366 | CONTEXT-AWARE AUGMENTED REALITY OBJECT COMMANDS - Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes receiving a user input selecting an object in a field of view of the see-through display system, determining a first group of commands currently operable based on one or more of an identification of the selected object and a state of the object, and presenting the first group of commands to a user. The method may further include receiving a command from the first group of commands, changing the state of the selected object from a first state to a second state in response to the command, determining a second group of commands based on the second state, where the second group of commands is different than the first group of commands, and presenting the second group of commands to the user. | 08-21-2014 |
20140237367 | MOBILE TERMINAL AND CONTROL METHOD THEREOF - There is provided a mobile terminal and a method of controlling a mobile terminal. The mobile terminal according to one embodiment analyzes a voice signal received through the audio input unit upon entering a voice recognition mode, selects at least one application to be executed and at least one item of content to be used in the application according to the analyzed voice signal, wherein the at least one item of content is selected from items of content displayed on the touch screen, and executes the selected at least one application by using the selected at least one item of content according to the analyzed voice signal. | 08-21-2014 |
20140245154 | Zolog Intelligent Human Language Interface For Business Software Applications - A method of creating ZOLOG BAR software, which is layered over a business software application and used to provide instructions to that application, is disclosed. The method comprises programming signal processor hardware to host the ZOLOG Technology Software; enabling users to provide inputs through the ZOLOG BAR in one or more of the most intuitive verbal and written human ways for carrying out a particular task, by which instructions are sent to business software applications; processing the instructions; and presenting a summary for review to the user. | 08-28-2014 |
20140245155 | METHOD FOR PROVIDING A VOICE-SPEECH SERVICE AND MOBILE TERMINAL IMPLEMENTING THE SAME - A method of providing a voice-speech service in a mobile terminal is provided. The method includes receiving sensing information from a sensor unit, determining whether to set an operating mode of the voice-speech service as a driving mode according to the sensing information, and providing an audible feedback according to pre-stored driving mode setting information when an operating mode of the voice-speech service is set as the driving mode. | 08-28-2014 |
20140245156 | Audio-Visual Navigation and Communication - Communicating information through a user platform by representing, on a user platform visual display, spatial publishing objects as entities at static locations within a three-dimensional spatial publishing object space. Each spatial publishing object is associated with information, and each presents a subset of the associated information. A user presence is established at a location within the spatial publishing object space. The user presence, in conjunction with a user point-of-view, is navigable by the user in at least a two-dimensional sub-space of the spatial publishing object space. | 08-28-2014 |
20140282005 | Apparatus for message triage - Incoming messages, like incoming wounded on the battlefield, can be initially sorted into groups, e.g., a) those which can or should be treated immediately, b) those which can be treated later, and c) those which should not be treated. As in a triage unit on a battlefield, it is useful to reduce the amount of effort and increase the speed at which this sort takes place. The present invention allows the user's sorting effort to be reduced to a minimum, with a consequent increase in speed. | 09-18-2014 |
20140282006 | AURAL NAVIGATION OF INFORMATION RICH VISUAL INTERFACES - A method comprising generating, by a computer, a model of a website using user interaction primitives to represent hierarchical and hypertextual structures of the website; generating, by the computer, a linear aural flow of content of the website based upon the model and a set of user constraints; and audibly presenting, by the computer, the linear aural flow of the content such that the linear aural flow of content is controlled through the use of user-supplied primitives, wherein the linear aural flow can be turned into a dynamic aural flow based upon the user-supplied primitives. | 09-18-2014 |
20140282007 | VOICE CONTROL TO DIAGNOSE INADVERTENT ACTIVATION OF ACCESSIBILITY FEATURES - Methods and systems are provided for diagnosing inadvertent activation of user interface settings on an electronic device. The electronic device receives a user input indicating that the user is having difficulty operating the electronic device. The device then determines whether a setting was changed on the device within a predetermined time period prior to receiving the user input. When a first setting was changed within the predetermined time period prior to receiving the user input, the device restores the changed setting to a prior setting. | 09-18-2014 |
20140282008 | HOLOGRAPHIC USER INTERFACES FOR MEDICAL PROCEDURES - An interactive holographic display system includes a holographic generation module configured to display a holographically rendered anatomical image. A localization system is configured to define a monitored space on or around the holographically rendered anatomical image. One or more monitored objects have their position and orientation monitored by the localization system such that coincidence of spatial points between the monitored space and the one or more monitored objects triggers a response in the holographically rendered anatomical image. | 09-18-2014 |
20140289632 | PICTURE DRAWING SUPPORT APPARATUS AND METHOD - According to an embodiment, a picture drawing support apparatus includes following components. The feature extractor extracts a feature amount from a picture drawn by a user. The speech recognition unit performs speech recognition on speech input by the user. The keyword extractor extracts at least one keyword from a result of the speech recognition. The image search unit retrieves one or more images corresponding to the at least one keyword from a plurality of images prepared in advance. The image selector selects an image which matches the picture, from the one or more images based on the feature amount. The image deformation unit deforms the image based on the feature amount to generate an output image. The presentation unit presents the output image. | 09-25-2014 |
20140289633 | METHOD AND ELECTRONIC DEVICE FOR INFORMATION PROCESSING - The present disclosure provides a method and an electronic device for information processing. The electronic device comprises a sensing unit and a display unit having a display area. The display unit displays a graphical interface. The display area displays a first part of the graphical interface. The method comprises: detecting a first operation by the sensing unit when the display unit displays the first part of the graphical interface; displaying the second part of the graphical interface on the display unit in response to the first operation; detecting a second operation; determining whether a preset condition is satisfied during the detecting of the second operation to obtain first decision information; and displaying a speech control on the display unit when the first decision information indicates that the preset condition is satisfied during the detecting of the second operation. | 09-25-2014 |
20140298177 | Methods, devices and systems for interacting with a computing device - Example embodiments relate to processing user interactions with a computing device, comprising receiving a user-initiated action performed on a character button, the character button representing a character, and determining whether the user-initiated action is performed in a normal or an abnormal operating manner. When a normal operating manner is determined, displaying the character on a graphical display. When an abnormal operating manner is determined: identifying a previously entered character preceding the character; activating a microphone and receiving, by the microphone, a spoken word; searching a subset of a database for a textual form of the received spoken word, the subset based on one or more of the character and the previously entered character; and displaying a correct textual form of the spoken word on a graphical display by amending one or more of the character and the previously entered character when one or more of the character and the previously entered character is inconsistent with the textual form of the spoken word found in the searching. | 10-02-2014 |
20140304605 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM - A system that acquires captured voice data corresponding to a spoken command; sequentially analyzes the captured voice data; causes a display to display a visual indication corresponding to the sequentially analyzed captured voice data; and performs a predetermined operation corresponding to the spoken command when it is determined that the sequential analysis of the captured voice data is complete. | 10-09-2014 |
20140304606 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD AND COMPUTER PROGRAM - An information processing device includes circuitry configured to cause first display information to be displayed in a first format. The circuitry also changes the first display information to be displayed in a second format in response to a voice being recognized. The information processing may also be accomplished with a method and via a non-transient computer readable storage device. | 10-09-2014 |
20140325360 | DISPLAY APPARATUS AND CONTROL METHOD CAPABLE OF PERFORMING AN INITIAL SETTING - A display apparatus capable of performing an initial setting and a control method thereof are provided. The display apparatus includes an output unit configured to output a user interface (UI) which is controllable by a plurality of input modes, and a controller configured to set an input mode according to a user feedback type regarding the UI, and configured to output another UI which corresponds to the set input mode. | 10-30-2014 |
20140337740 | METHOD AND APPARATUS FOR SELECTING OBJECT - Provided herein is a method for selecting an object. The method for selecting an object according to an exemplary embodiment includes displaying a plurality of objects on a screen, recognizing a voice uttered by a user and tracking an eye of the user with respect to the screen, and selecting at least one object from among the plurality of objects on the screen based on the recognized user's voice and the tracked eye. | 11-13-2014 |
20140337741 | APPARATUS AND METHOD FOR AUDIO REACTIVE UI INFORMATION AND DISPLAY - A method includes determining, using signals captured from two or more microphones ( | 11-13-2014 |
20140344701 | METHOD AND SYSTEM FOR IMAGE REPORT INTERACTION FOR MEDICAL IMAGE SOFTWARE - A system and method for image based report correction for medical image software, which incorporates such report correction as part of the report generation process. Such a system and method features a report generator, a report correction functionality and also some type of medical image software, for providing medical image processing capabilities, which allows the doctor or other medical personnel to generate the report, and as part of the report generation process, to be checked by the report correction functionality. | 11-20-2014 |
20140365896 | FRAMEWORKS, DEVICES AND METHODS CONFIGURED FOR ENABLING A MULTI-MODAL USER INTERFACE CONFIGURED TO DISPLAY FACILITY INFORMATION - Described herein are frameworks, devices and methods configured for enabling display of facility information and content, in some cases via touch/gesture controlled interfaces. Embodiments of the invention have been particularly developed for allowing an operator to conveniently access a wide range of information relating to a facility via, for example, one or more wall mounted displays. While some embodiments will be described herein with particular reference to that application, it will be appreciated that the invention is not limited to such a field of use, and is applicable in broader contexts. | 12-11-2014 |
20140372892 | ON-DEMAND INTERFACE REGISTRATION WITH A VOICE CONTROL SYSTEM - Embodiments of the present invention automatically register user interfaces with a voice control system. Registering the interface allows interactive elements within the interface to be controlled by a user's voice. A voice control system analyzes audio including voice commands spoken by a user and manipulates the user interface in response. The automatic registration of a user interface with a voice control system allows a user interface to be voice controlled without the developer of the application associated with the interface having to do anything. Embodiments of the invention allow an application's interface to be voice controlled without the application needing to account for states of the voice control system. | 12-18-2014 |
20140380169 | LANGUAGE INPUT METHOD EDITOR TO DISAMBIGUATE AMBIGUOUS PHRASES VIA DIACRITICIZATION - Disclosed are methods for disambiguating an input phrase or group of words. An implementation may include receiving a phrase as an input to a processor. The received phrase may be presented on a display device. The received phrase may be determined to be ambiguous based on a threshold uncertainty in either a definition or a pronunciation related to the phrase. An indication may be provided that a word in the phrase is the cause of the ambiguity. A menu of words with each word incorporating at least one diacritic mark to a word in the received phrase to disambiguate the received phrase may be presented. A word from the menu of words may be selected and presented on the display device. | 12-25-2014 |
20140380170 | Location-Based Responses to Telephone Requests - A method for receiving processed information at a remote device is described. The method includes transmitting from the remote device a verbal request to a first information provider and receiving a digital message from the first information provider in response to the transmitted verbal request. The digital message includes a symbolic representation indicator associated with a symbolic representation of the verbal request and data used to control an application. The method also includes transmitting, using the application, the symbolic representation indicator to a second information provider for generating results to be displayed on the remote device. | 12-25-2014 |
20150012829 | METHOD AND APPARATUS FOR FACILITATING VOICE USER INTERFACE DESIGN - A computer implemented method and an apparatus for facilitating voice user interface (VUI) design are provided. The method comprises identifying a plurality of user intentions from user interaction data. The method further comprises associating each user intention with at least one feature from among a plurality of features. One or more features from among the plurality of features are extracted from natural language utterances associated with the user interaction data. Further, the method comprises computing a plurality of distance metrics corresponding to pairs of user intentions from among the plurality of user intentions. A distance metric is computed for each pair of user intentions from among the pairs of user intentions. Furthermore, the method comprises generating a plurality of clusters based on the plurality of distance metrics. Each cluster comprises a set of user intentions. The method further comprises provisioning a VUI design recommendation based on the plurality of clusters. | 01-08-2015 |
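The clustering step this abstract outlines can be sketched as follows. This is a toy illustration, not the patent's actual method: user intentions are represented as feature sets extracted from utterances, a pairwise distance metric (Jaccard distance here, chosen for simplicity) is computed, and intentions within a threshold distance are grouped into one cluster; all names and the threshold are made up.

```python
def jaccard_distance(a, b):
    """Distance between two feature sets (0 = identical, 1 = disjoint)."""
    union = a | b
    return 1.0 - len(a & b) / len(union) if union else 0.0

def cluster_intentions(intentions, threshold=0.5):
    """Greedy single-link clustering over pairwise feature-set distances."""
    clusters = []
    for name, feats in intentions.items():
        for cluster in clusters:
            if any(jaccard_distance(feats, intentions[m]) <= threshold
                   for m in cluster):
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return clusters

intentions = {
    "check_balance":   {"account", "balance", "amount"},
    "recent_activity": {"account", "balance", "recent"},
    "reset_password":  {"password", "reset", "login"},
}
print(cluster_intentions(intentions))
```

Each resulting cluster is a set of user intentions close enough to share one branch of the VUI design recommendation.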
20150019974 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - There is provided an information processing device including a processor configured to realize an address term definition function of defining an address term for at least a partial region of an image to be displayed on a display, a display control function of displaying the image on the display and temporarily displaying the address term on the display in association with the region, a voice input acquisition function of acquiring a voice input for the image, and a command issuing function of issuing a command relevant to the region when the address term is included in the voice input. | 01-15-2015 |
20150019975 | CONTACT SELECTOR THAT FACILITATES GRANULAR SHARING OF CONTACT DATA - Described herein are technologies pertaining to transmitting electronic contact data from a first application to a second application by way of an operating system without generating a centralized contact store or providing the second application with programmatic access to all electronic contact data retained by first application. | 01-15-2015 |
20150026579 | METHODS AND SYSTEMS FOR PROCESSING CROWDSOURCED TASKS - The disclosed embodiments illustrate methods and systems for processing one or more crowdsourced tasks. The method comprises converting an audio input received from a crowdworker to one or more phrases by one or more processors in at least one computing device. The audio input is at least a response to a crowdsourced task. A mode of the audio input is selected based on one or more parameters associated with the crowdworker. Thereafter, the one or more phrases are presented on a display of the at least one computing device by the one or more processors. Finally, one of the one or more phrases is selected by the crowdworker as a correct response to the crowdsourced task. | 01-22-2015 |
20150026580 | METHOD AND DEVICE FOR COMMUNICATION - A system of communicating between first and second electronic devices, comprises, in a first device, receiving from a second device, voice representative information acquired by the second device, and connection information indicating characteristics of communication to be used in establishing a communication link with the second device. The system compares the voice representative information with predetermined reference voice representative information and in response to the comparison, establishes a communication link with the second device by using the connection information received from the second device. | 01-22-2015 |
20150033128 | Multi-Dimensional Surgical Safety Countermeasure System - A multi-dimensional surgical safety countermeasure system and method for using automated checklists to provide information to surgical staff in a surgical procedure. The system and method involve using checklists and receiving commands through the prompts of the checklists to update the information displayed on the display to guide the performance of a medical procedure. | 01-29-2015 |
20150033129 | MOBILE TERMINAL AND METHOD OF CONTROLLING THE SAME - A mobile terminal including a camera; a display unit configured to display an image input through the camera; and a controller configured to display at least one user-defined icon corresponding to linked image-setting information, receive a touch signal indicating a touch is applied to a corresponding user-defined icon, and control the camera to capture the image based on image-setting information linked to the corresponding user-defined icon in response to the received touch signal. | 01-29-2015 |
20150033130 | AUDIO INPUT FROM USER - A computing device detects a user viewing the computing device and outputs a cue if the user is detected to view the computing device. The computing device receives an audio input from the user if the user continues to view the computing device for a predetermined amount of time. | 01-29-2015 |
20150040012 | VISUAL CONFIRMATION FOR A RECOGNIZED VOICE-INITIATED ACTION - Techniques described herein provide a computing device configured to provide an indication that the computing device has recognized a voice-initiated action. In one example, a method is provided for outputting, by a computing device and for display, a speech recognition graphical user interface (GUI) having at least one element in a first visual format. The method further includes receiving, by the computing device, audio data and determining, by the computing device, a voice-initiated action based on the audio data. The method also includes outputting, while receiving additional audio data and prior to executing a voice-initiated action based on the audio data, and for display, an updated speech recognition GUI in which the at least one element is displayed in a second visual format, different from the first visual format, to indicate that the voice-initiated action has been identified. | 02-05-2015 |
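The two-format indication described in this abstract can be sketched as a tiny state machine. All class names, commands, and format labels below are hypothetical: a GUI element starts in a first visual format while listening, and switches to a second format as soon as a voice-initiated action is recognized, before that action is executed.

```python
class SpeechRecognitionGUI:
    def __init__(self):
        self.element_format = "listening"   # first visual format
        self.pending_action = None

    def on_audio(self, transcript):
        """Recognize a voice-initiated action from (partial) audio data."""
        known_actions = {"take a photo": "camera.capture",
                         "send message": "sms.compose"}
        action = known_actions.get(transcript.strip().lower())
        if action and self.pending_action is None:
            self.pending_action = action
            self.element_format = "recognized"  # second visual format

    def execute_pending(self):
        """Execute the recognized action and return to the first format."""
        action, self.pending_action = self.pending_action, None
        self.element_format = "listening"
        return action

gui = SpeechRecognitionGUI()
gui.on_audio("Take a photo")
print(gui.element_format)     # the element now signals recognition
print(gui.execute_pending())
```

The format change happens while audio is still being received, which is the point: the user gets visual confirmation of recognition before anything irreversible runs.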
20150046825 | Method and Apparatus for Improving One-handed Operation of a Large Smartphone or a Small Tablet Computer - The present disclosure involves a method of improving one-handed operation of a mobile computing device. A first visual content is displayed on a screen of the mobile computing device. The first visual content occupies a substantial entirety of a viewable area of the screen. While the first visual content is being displayed, an action performed by a user to the mobile computing device is detected. The first visual content is scaled down in response to the detected action and displayed on the screen. The scaled-down first visual content occupies a fraction of the viewable area of the screen. A user interaction with the scaled-down first visual content is then detected. In response to the user interaction, a second visual content is displayed on the screen. The second visual content is different from the first visual content and occupies a substantial entirety of the viewable area of the screen. | 02-12-2015 |
20150067515 | ELECTRONIC DEVICE, CONTROLLING METHOD FOR SCREEN, AND PROGRAM STORAGE MEDIUM THEREOF - An electronic device, a controlling method of a screen, and a program storage medium thereof are provided. The screen includes a display panel and a touch-sensitive panel. The display panel shows a root window on which all display contents are shown. The controlling method comprises the following steps. A command signal is received. The coordinate system of the screen is transformed with a transformation according to the command signal. | 03-05-2015 |
20150067516 | DISPLAY DEVICE AND METHOD OF OPERATING THE SAME - A wearable electronic device including a wireless communication unit configured to be wirelessly connected to a projector for projecting a stored presentation onto a screen of an external device; a main body configured to be worn by a user; a microphone integrally connected to the main body; a display unit configured to be attached to the main body; and a controller configured to match voice information input through the microphone with corresponding contents of the stored presentation, and display at least a following portion of content that follows the corresponding contents on the display unit. | 03-05-2015 |
20150067517 | ELECTRONIC DEVICE SUPPORTING MUSIC PLAYING FUNCTION AND METHOD FOR CONTROLLING THE ELECTRONIC DEVICE - An electronic device and a method for controlling the electronic device are provided. The method includes receiving at least one input sound from a user, determining one of a plurality of reference sounds included in a guide track as a device playing sound corresponding to the at least one input sound, and playing the device playing sound. | 03-05-2015 |
20150082175 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, INFORMATION PROCESSING PROGRAM, AND TERMINAL DEVICE - An apparatus includes a receiver, a shared information unit, a transmitter, a voice recognition unit, and an application execution unit. The receiver is configured to receive a voice signal and information from a second apparatus. The shared information unit is configured to create shared information shared by both the apparatus and the second apparatus based on the information received from the second apparatus. The transmitter is configured to transmit the shared information to the second apparatus. The voice recognition unit is configured to analyze the voice signal. The application execution unit is configured to execute an application based on a result generated by the voice recognition unit. | 03-19-2015 |
20150089373 | CONTEXT AWARE VOICE INTERFACE FOR COMPUTING DEVICES - A system and method for facilitating user access to software functionality, such as enterprise-related software applications and associated data. An example method includes receiving language input responsive to one or more prompts; determining, based on the language input, a subject category associated with a computing object, such as a Customer Relationship Management (CRM) opportunity object; identifying an action category pertaining to a software action to be performed pertaining to the computing object; employing identification of the software action to obtain action context information pertaining to the action category; and implementing a software action in accordance with the action context information. Context information pertaining to a software flow and a particular computing object may guide efficient implementation of voice-guided software tasks corresponding to the software flows. | 03-26-2015 |
20150113409 | VISUAL AND VOICE CO-BROWSING FRAMEWORK - A computer system may include logic configured to enable voice-enabled web pages. The logic may be configured to receive a request for a web page that includes Hypertext Markup Language (HTML) content and voice browser content from an HTML browser running on a user device; generate a co-browsing session identifier based on the received request; provide a response to the HTML browser, wherein the response includes the HTML content, the generated co-browsing session identifier, and an instruction to establish a Web Real-Time Communication (WebRTC) connection with an interactive voice response (IVR) system associated with the voice browser content; receive an indication from the IVR system that the WebRTC connection has been established for the co-browsing session identifier; and provide the voice browser content to a voice browser in the IVR system, in response to receiving the indication that the WebRTC connection has been established for the co-browsing session identifier. | 04-23-2015 |
20150113410 | ASSOCIATING A GENERATED VOICE WITH AUDIO CONTENT - Audio files representing files intended primarily for viewing (e.g., by sighted users) are created and organized into hierarchies that mimic those of the original files as instantiated at original websites incorporating such files. Thus, visually impaired users are provided access to and navigation of the audio files in a way that mimics the original website. | 04-23-2015 |
20150121229 | Method for Processing information and Electronic Apparatus - A method for processing information applied in an electronic apparatus is provided. The method includes: acquiring a first operation to trigger a multi-window manager; displaying a multi-window management interface corresponding to the multi-window manager on the touch-control display unit based on the first operation; and displaying at least one object identifier corresponding to at least one application in the multi-window management interface, together with running status information corresponding to the at least one application. With this technical solution, the user can conveniently and quickly see which applications may be displayed in the form of a small window, and their current running status, through the multi-window management interface, thereby improving the user experience. | 04-30-2015 |
20150121230 | NETWORKED GAMING HEADSET WITH AUTOMATIC SOCIAL NETWORKING - In an audio setup comprising at least one audio headset configurable to process audio for a user (e.g., when participating in an online multiplayer game), input audio and/or output audio in the audio headset may be monitored, and when the audio matches triggering criteria, one or more update messages may be triggered via a social networking service. The triggering criteria may comprise (or be set based on) identity of the speaker, content of the audio, and/or conditions associated with the audio. Different triggering criteria may be associated with different applications (e.g., different video games). The update messages may be made available to one or more other users, who may be selected based on matching particular user selection criteria and/or based on successful user validation. The user selection criteria may comprise participation in the same online multiplayer game. | 04-30-2015 |
20150128049 | ADVANCED USER INTERFACE - An advanced user interface includes a display device and a processing unit. The processing unit causes the display device to display a dynamic user interface containing a plurality of input areas in an adaptive graphical arrangement, detect user inputs on the dynamic user interface, and record the user inputs in a memory unit in association with a context of information inputted by the user. The graphical arrangement of input areas includes at least one primary input area, each of which is respectively associated with different information. The processing unit detects a user input for one of the input areas, compares the detected user input with prior user inputs recorded in the memory unit, and predicts a first next user input based on the comparison and the context of information associated with the detected user input. Based on the predicted first next user input, the processing unit dynamically modifies the displayed arrangement of the input areas so that information associated with the predicted first next user input is characterized and displayed as the at least one primary input area on the dynamic user interface. | 05-07-2015 |
20150135080 | MOBILE TERMINAL AND CONTROL METHOD THEREOF - A mobile terminal including a wireless communication unit configured to provide wireless communication; a touch screen; and a controller configured to receive a plurality of taps applied to the touch screen, and display at least one function executable by the mobile terminal on the touch screen based on the received plurality of taps and based on at least one of an operating state and an ambient environmental state of the mobile terminal. | 05-14-2015 |
20150143241 | WEBSITE NAVIGATION VIA A VOICE USER INTERFACE - A system and method are disclosed for navigation on the World Wide Web using voice commands. The name of a website may be called out by users several different ways. A user may speak the entire URL, a portion of the URL, or a name of the website which may bear little resemblance to the URL. The present technology uses rules and heuristics embodied in various software engines to determine the best candidate website based on the received voice command, and then navigates to that website. | 05-21-2015 |
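The rules-and-heuristics idea this abstract describes can be sketched as a small scoring function. The normalization rules, site list, and scores below are all invented for illustration: a spoken command is normalized (e.g. "dot" becomes "."), then each known URL is scored by how well it matches the spoken form, and the best candidate wins.

```python
import re

# Hypothetical set of known sites standing in for the real candidate pool.
SITES = ["weather.com", "news.example.org", "mail.example.com"]

def normalize(utterance):
    """Turn spoken forms like 'go to weather dot com' into 'weather.com'."""
    text = utterance.lower().strip()
    text = re.sub(r"\s+dot\s+", ".", text)
    text = re.sub(r"^(go to|open)\s+", "", text)
    return text

def best_site(utterance, sites=SITES):
    """Pick the URL whose form best matches the spoken command:
    full URL > partial URL > overlap with name tokens."""
    spoken = normalize(utterance)
    def score(url):
        if spoken == url:
            return 3                        # entire URL spoken
        if spoken in url:
            return 2                        # portion of the URL spoken
        url_tokens = set(re.split(r"[.\-]", url))
        return len(set(re.split(r"[.\s]", spoken)) & url_tokens) / 10
    return max(sites, key=score)

print(best_site("go to weather dot com"))   # exact match after normalization
print(best_site("open the news"))           # name bears little resemblance to URL
```

The tiered scoring mirrors the abstract's observation that users may speak the whole URL, part of it, or a site name that looks nothing like the URL.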
20150143242 | MOBILE COMMUNICATION TERMINAL AND METHOD THEREOF - A method for providing a user interface of a communication apparatus comprises switching from a low power mode to a working mode upon receiving a stream of audio data; and upon switching from the low power mode to the working mode: extracting at least one audio feature from said stream of audio data, and modifying the appearance of at least one user interface component configured for invoking a function of the communication apparatus, in accordance with said extracted audio feature. | 05-21-2015 |
20150149907 | Portable Electronic Apparatus and Interface Display Method Thereof - A portable electronic apparatus and an interface display method thereof are disclosed. The method includes the following steps of: executing an application; capturing and analyzing an environmental sound around the portable electronic apparatus to obtain at least one sound character; | 05-28-2015 |
20150293746 | METHOD AND SYSTEM FOR DYNAMICALLY GENERATING DIFFERENT USER ENVIRONMENTS WITH SECONDARY DEVICES WITH DISPLAYS OF VARIOUS FORM FACTORS - Exemplary embodiments of methods and systems that dynamically generate different user environments from a handheld device for secondary devices with displays of various form factors are described. In one embodiment, a method includes generating a user environment for the handheld device; auto-detecting a configuration of the secondary device over an interface; generating at least a part of a different second user environment based on the configuration of the secondary device; transmitting the second user environment over the interface; and displaying at least a part of the second user environment on the second display. | 10-15-2015 |
20150301794 | IN-VEHICLE WEB PRESENTATION - One or more controllers may extract voice commands from retrieved web content, format the web content according to vehicle computing system (VCS) specific formatting information, provide the formatted web content for display by the VCS, and update the recognized voice commands of the VCS according to the extracted voice commands. A server may identify whether a received web request for web content is directed to a vehicle sub-domain for providing an in-vehicle-specific version of the content, identify whether the received web request is for presentation of web content via a VCS, and redirect the web request to the vehicle sub-domain when the request is not directed to the vehicle sub-domain and is for presentation via the VCS. | 10-22-2015 |
20150301796 | SPEAKER VERIFICATION - A device includes a memory, a receiver, a processor, and a display. The memory is configured to store a speaker model. The receiver is configured to receive an input audio signal. The processor is configured to determine a first confidence level associated with a first portion of the input audio signal based on the speaker model. The processor is also configured to determine a second confidence level associated with a second portion of the input audio signal based on the speaker model. The display is configured to present a graphical user interface associated with the first confidence level or associated with the second confidence level. | 10-22-2015 |
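A toy sketch of the per-portion scoring this abstract describes, not the patent's actual model: the input signal is split into portions, each portion's features are compared against a stored speaker model (here just a feature vector scored by cosine similarity), and the resulting confidence levels are what the graphical user interface would present. All features and the portion length are fabricated.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def portion_confidences(frames, speaker_model, portion_len=2):
    """Average each portion's frame features, score against the model."""
    confidences = []
    for i in range(0, len(frames), portion_len):
        portion = frames[i:i + portion_len]
        centroid = [sum(col) / len(portion) for col in zip(*portion)]
        confidences.append(cosine(centroid, speaker_model))
    return confidences

speaker_model = [1.0, 0.0]                 # enrolled speaker's features
frames = [[0.9, 0.1], [1.0, 0.0],          # portion 1: close to the model
          [0.0, 1.0], [0.1, 0.9]]          # portion 2: a different voice
conf = portion_confidences(frames, speaker_model)
print([round(c, 2) for c in conf])
```

Scoring portions separately rather than the whole signal is what lets the UI indicate, mid-utterance, that the speaker appears to have changed.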
20150301798 | BINARY-CACHING FOR XML DOCUMENTS WITH EMBEDDED EXECUTABLE CODE - A method, system and voice browser execute voice applications to perform a voice-based function. A document is retrieved and parsed to create a parse tree. Script code is created from the parse tree, thereby consuming part of the parse tree to create a reduced parse tree. The reduced parse tree is stored in a cache for subsequent execution to perform the voice-based function. | 10-22-2015 |
20150331665 | INFORMATION PROVISION METHOD USING VOICE RECOGNITION FUNCTION AND CONTROL METHOD FOR DEVICE - According to one embodiment, there is provided an information provision method in an information provision system connected to a display device having a display and a voice input apparatus capable of inputting a user's voice for providing information via the display device in response to the user's voice. The method includes transmitting display screen information for displaying a display screen including a plurality of selectable items on the display to the display device, receiving item selection information indicating selection of one of the plurality of items on the display screen, recognizing instruction substance if a voice instruction including first voice information representing the instruction substance is received from the voice input apparatus when the one item is selected, judging whether the voice instruction includes second voice information indicating a demonstrative term, and executing the instruction substance for the one item if a positive judgment is made. | 11-19-2015 |
20150339049 | INSTANTANEOUS SPEAKING OF CONTENT ON TOUCH DEVICES - Systems and processes are disclosed for initiating and controlling content speaking on touch-sensitive devices. A gesture can be detected on a touchscreen for causing text to be spoken. Displayed content can be analyzed, and a determination can be made based on size, position, and other attributes as to which portion of displayed text should be spoken. In response to detecting the gesture, the identified portion of text can be spoken using a text-to-speech process. A menu of controls can be displayed for controlling the speaking. The menu can automatically be hidden and a persistent virtual button can be displayed that can remain available on the touchscreen despite the user navigating to another view. Selecting the persistent virtual button can restore the full menu of controls, thereby allowing the user to continue to control the speaking even after navigating away from the content being spoken. | 11-26-2015 |
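The "which portion of displayed text should be spoken" determination in this abstract can be illustrated with a deliberately simple scoring rule, invented here: among the text blocks on screen, larger blocks score higher and blocks nearer the top break ties, and the winner is handed to text-to-speech. The field names and weights are assumptions, not the patent's method.

```python
def pick_block_to_speak(blocks):
    """blocks: list of dicts with 'text', 'width', 'height', 'y' (top = 0).
    Score by area, with a small penalty for being lower on screen."""
    def score(b):
        area = b["width"] * b["height"]
        return area - b["y"] * 0.1
    return max(blocks, key=score)["text"]

blocks = [
    {"text": "Headline",       "width": 300, "height": 40,  "y": 10},
    {"text": "Body paragraph", "width": 300, "height": 400, "y": 60},
    {"text": "Footer",         "width": 300, "height": 20,  "y": 480},
]
print(pick_block_to_speak(blocks))   # the dominant body text wins
```
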
20150339098 | DISPLAY APPARATUS, REMOTE CONTROL APPARATUS, SYSTEM AND CONTROLLING METHOD THEREOF - A display apparatus includes a display which displays a plurality of items, a communicator which receives a pointing signal from a remote control apparatus, a recognizer which recognizes at least one of a voice command and a gesture, and a processor which selects one item among the plurality of items based on at least one of the pointing signal and the gesture, and in response to receiving the voice command regarding the selected one item, performs a control operation based on a keyword extracted to execute the voice command. | 11-26-2015 |
20150347086 | DIRECTING AUDIO OUTPUT BASED ON GESTURES - A method includes determining whether there is an incoming call to a first device or an outbound call from the first device. The method also includes monitoring at least one of: user input to the first device; motion of a second device; or motion of the first device. The method further includes: identifying a user's gesture based on a result of the monitoring and in response to determining that there is an incoming call or an outbound call; and redirecting audio input and output, of the first device, at a first one of input/output (I/O) devices to a second one of the I/O devices based on the gesture. The I/O devices comprise the first device and the second device. | 12-03-2015 |
20150363165 | Method For Quickly Starting Application Service, and Terminal - A method for quickly starting an application service, and a terminal, are provided. The method includes: acquiring, by a terminal, event trigger information; starting, by the terminal, application service software after determining that the event trigger information meets a preset quick-startup condition; and acquiring, by the terminal, a voice instruction input by a user and running the application service software according to the voice instruction. Because the application service software is started by event trigger information, the background of the terminal begins recording only after the application service software is started, and background recording is stopped after the terminal provides the application service to the user. This prevents a recording device in the background of the terminal from always being in a recording state, thereby reducing power consumption of the terminal. | 12-17-2015 |
20150370454 | INDICATING AN OBJECT AT A REMOTE LOCATION - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for identifying an object. In one aspect, a method includes receiving an image of a first location. The image depicts a layout of objects located at the first location and a visual code for each object. A user interface is generated for the first location using the image and the codes. The user interface depicts the objects and a user interface element for each visual code. Each user interface element is selectable to identify the object associated with the visual code. The user interface is provided for display at a second location. Selection data is received that specifies a selection of a particular user interface element. Command data is sent to a computer located at the first location, which causes the computer to highlight the object associated with the visual code of the selected user interface element. | 12-24-2015 |
20150370534 | MANAGING DEVICE, MANAGEMENT METHOD, RECORDING MEDIUM, AND PROGRAM - A managing device ( | 12-24-2015 |
20160011729 | ENHANCING PRESENTATION CONTENT DELIVERY ASSOCIATED WITH A PRESENTATION EVENT | 01-14-2016 |
20160019553 | INFORMATION INTERACTION IN A SMART SERVICE PLATFORM - System and method for information service platform interaction are disclosed. The method may include obtaining, through a user interface of a mobile device, an input sequence from a user. The method may also include determining at least one business object based on the input sequence. The input sequence may at least partially match an identifier of the at least one business object. The method may also include obtaining user data based on the at least one business object or based on user identification information. The method may further include determining a menu of an information service platform provided by the at least one business object based on the user data. The method may further include displaying the menu according to a designated display mode on a display of the mobile device. | 01-21-2016 |
20160034253 | DEVICE AND METHOD FOR PERFORMING FUNCTIONS - Provided is a device including a display, an audio inputter, and a controller. The display displays at least one screen page of an application that is being executed. The audio inputter receives a voice command of a user. The controller performs an operation corresponding to the voice command by using screen page transition information for transition between application screen pages corresponding to the voice command, which is obtained from information about user interface (UI) elements included in the application screen pages of the application. Each of the UI elements performs a predetermined function when selected by the user. | 02-04-2016 |
20160034440 | APPARATUS FOR CONTROLLING MOBILE TERMINAL AND METHOD THEREFOR - An apparatus for controlling a mobile terminal allowing a user to easily and quickly select and transmit desired information and a method thereof are provided. The method for controlling a mobile terminal includes detecting information used in at least one application program, displaying a character input window on a display unit, and displaying information selected from the detected information in the character input window. | 02-04-2016 |
20160048372 | User Interaction With an Apparatus Using a Location Sensor and Microphone Signal(s) - An apparatus includes one or more location sensors configured to output one or more signals and one or more microphones configured to form corresponding microphone signals. The apparatus also includes one or more processors configured to cause the apparatus to perform at least the following: determination, using the one or more signals from the one or more location sensors, of a direction of at least one object relative to the apparatus; recognition, by the apparatus using a signal from a microphone in the apparatus, of one or more attributes of an acoustic signal made by the at least one object; and causation of an operation to be performed by the apparatus in response to the direction and the recognized one or more attributes being determined to correspond to the operation. Additional apparatus, methods, and program products are also disclosed. | 02-18-2016 |
20160050476 | WIRELESS COMMUNICATION BETWEEN ENDPOINT DEVICES - A user interface for a communication device having a wireless interface for connection to associated devices includes a graphical display screen integrated into the communication device, a user input device indicating selection and movement of graphical objects displayed on the graphical display screen, and a processor programmed to cause the graphical display screen to display a first arc representing the communication device itself, a first circle surrounding a visual representation of an audio output device associated with the communication device, and a connector between the first arc and the first circle. The connector includes two curved lines each beginning at the first arc and ending at the first circle, the lines curved towards each other between the first arc and the first circle. | 02-18-2016 |
20160070439 | ELECTRONIC COMMERCE USING AUGMENTED REALITY GLASSES AND A SMART WATCH - In an approach for electronic commerce using augmented reality glasses and a smart watch, a computer receives a configuration associating a user gesture to a command. The computer determines whether a user of the augmented reality glasses selects an object in a first electronic commerce environment and, responsive to determining the user selects an object, the computer determines whether the user performs a first gesture detectable by a smart watch. The computer, then, determines whether the first gesture matches the user gesture and, responsive to determining the first gesture matches the user gesture, the computer performs the associated command. | 03-10-2016 |
20160072915 | SYSTEM AND METHOD TO PROVIDE INTERACTIVE, USER-CUSTOMIZED CONTENT TO TOUCH-FREE TERMINALS - A method of displaying content to a user within a managed space comprising one or more touch-free interactive kiosks includes collecting user data about the user. In addition, the touch-free interactive kiosks are configured to uniquely identify users located at a kiosk. Based on the identified user and the collected user data associated with the user, content is selected to be displayed to the user. | 03-10-2016 |
20160077793 | GESTURE SHORTCUTS FOR INVOCATION OF VOICE INPUT - Systems, methods, and computer storage media are provided for initiating a system-wide voice-to-text dictation service in response to a preconfigured gesture. Data input fields, independent of the application from which they are presented to a user, are configured to at least detect one or more input events. A gesture listener process, controlled by the system, is configured to detect a preconfigured gesture corresponding to a data input field. Detection of the preconfigured gesture generates an input event configured to invoke a voice-to-text session for the corresponding data input field. The preconfigured gesture can be configured such that any visible on-screen affordances (e.g., microphone button on a virtual keyboard) are omitted to maintain aesthetic purity and further provide system-wide access to the dictation service. As such, dictation services are generally available for any data input field across the entire operating system without the requirement of an on-screen affordance to initiate the service. | 03-17-2016 |
20160085505 | PROVIDING INTERFACE CONTROLS BASED ON VOICE COMMANDS - Implementations provide user access to software functionality. In some implementations, a method includes selecting one or more portions of text. The method also includes employing the one or more portions to select software functionality. The method also includes presenting one or more user interface controls in combination with a representation of the text, where the one or more user interface controls includes a user selectable outline around one or more keywords in combination with a drop-down menu. | 03-24-2016 |
20160092104 | METHODS, SYSTEMS AND DEVICES FOR INTERACTING WITH A COMPUTING DEVICE - Example embodiments relate to processing user interactions with a computing device, comprising receiving a user-initiated action performed on a character button, the character button representing a character; determining whether the user-initiated action is performed in a normal or an abnormal operating manner. When a normal operating manner is determined, displaying the character on a graphical display. When an abnormal operating manner is determined: identifying a previously entered character preceding the character, activating a microphone and receiving, by the microphone, a spoken word, searching a subset of a database for a textual form of the received spoken word, the subset based on one or more of the character and the previously entered character, and displaying a correct textual form of the spoken word on a graphical display by amending one or more of the character and the previously entered character when one or more of the character and the previously entered character is inconsistent with the textual form of the spoken word found in the searching. | 03-31-2016 |
20160124606 | DISPLAY APPARATUS, SYSTEM, AND CONTROLLING METHOD THEREOF - A display apparatus, a system, and controlling methods are provided. The display apparatus includes: a display configured to display a user interface (UI) screen; a receiver configured to receive a control signal from a remote control device; and a controller configured to move a cursor on the displayed UI screen based on the received control signal and, in response to the cursor being positioned in a preset area determined based on information about the UI screen, operate in a minute control mode. | 05-05-2016 |
20160132291 | INTENT DRIVEN COMMAND PROCESSING - A computing device receives a voice command to perform an action within a document. An interpretation of the voice command is mapped to a set of commands. Disambiguation is automatically performed by conducting a user experience to receive additional information. | 05-12-2016 |
20160139876 | METHODS AND APPARATUS FOR VOICE-CONTROLLED ACCESS AND DISPLAY OF ELECTRONIC CHARTS ONBOARD AN AIRCRAFT - A method for accessing electronic charts stored on an aircraft is provided. The method receives, via an onboard avionics system, location data for the aircraft; receives a set of speech data via a user interface of the aircraft; identifies one or more applicable electronic charts, based on the received location data and the received set of speech data, wherein the electronic charts stored on the aircraft comprise at least the one or more applicable electronic charts; and presents, via an aircraft display, a first one of the one or more applicable electronic charts. | 05-19-2016 |
20160139878 | VOICE INTERFACE FOR VIRTUAL AREA INTERACTION - Examples of systems and methods for voice-based navigation in one or more virtual areas that define respective persistent virtual communication contexts are described. These examples enable communicants to use voice commands to, for example, search for communication opportunities in the different virtual communication contexts, enter specific ones of the virtual communication contexts, and bring other communicants into specific ones of the virtual communication contexts. In this way, these examples allow communicants to exploit the communication opportunities that are available in virtual areas, even when hands-based or visual methods of interfacing with the virtual areas are not available. | 05-19-2016 |
20160140108 | MOBILE TERMINAL AND CONTROL METHOD THEREOF - A display device is disclosed. The display device comprises a display unit, a sound sensing unit receiving a user's voice, a database storing text displayed on the display unit for a predetermined time period, and a controller extracting from the database at least one text corresponding to a user's voice received within a predetermined time period. | 05-19-2016 |
20160142451 | ONLINE MEETING COMPUTER WITH IMPROVED NOISE MANAGEMENT LOGIC - In an embodiment, a method for calculating a noise index value for a digital audio source in a server computer system that is coupled to a plurality of digital audio sources and configured to operate a teleconference among the plurality of digital audio sources comprises receiving a first digital audio signal from the digital audio source. Using the server computer system, the process identifies two or more types of sounds that are represented in the first digital audio signal. The types of sounds identified include at least two of: one or more human voices; a background noise; or an actionable sound that mandates further action. Using the server computer system, the process calculates the noise index value based upon the types of sounds identified from the first digital audio signal. The noise index value represents a summation of the relative magnitudes of particular types of sounds that have been identified in the first digital audio signal in relation to other types of sounds that have been identified in the first digital audio signal. The process then visually presents the noise index value to one or more client computers of the one or more digital audio sources in a user interface screen display of an audio conference manager that the one or more client computers execute. | 05-19-2016 |
20160147388 | ELECTRONIC DEVICE FOR EXECUTING A PLURALITY OF APPLICATIONS AND METHOD FOR CONTROLLING THE ELECTRONIC DEVICE - An electronic device for executing a plurality of applications and a method for controlling the electronic device are provided. The method includes determining a first application related to an acquired user input from among the plurality of applications, and executing a task corresponding to the user input in the first application. | 05-26-2016 |
20160162150 | CELLPHONE MANAGER - A mobile phone application graphically shrinks the display in response to a downward flick in a right or left diagonal direction, thereby enabling easier operation with a single hand. Soft keys in a menu bar are conveniently provided at the bottom of the display to facilitate single-handed use. | 06-09-2016 |
20160162259 | EXTERNAL VISUAL INTERACTIONS FOR SPEECH-BASED DEVICES - Examples are disclosed herein that are related to providing extended functionalities on-demand to an audio-based wearable device. One example provides a wearable computing device including an acoustic receiver configured to receive speech inputs, a speaker configured to present audio outputs, a communications subsystem configured to connect to an external device, a logic subsystem configured to execute instructions, and a storage subsystem having instructions executable by the logic subsystem to execute a program, connect to the external device via a wireless communications protocol, conduct an audio-based interaction of the program via the speech inputs received at the acoustic receiver and the audio outputs provided by the speaker, upon reaching a screen-based interaction of the program, notify a user via the speaker to interact with the external device, and provide image data to the external device for presentation via a screen of the external device. | 06-09-2016 |
20160162446 | ELECTRONIC DEVICE, METHOD AND STORAGE MEDIUM - According to one embodiment, an electronic device includes circuitry. The circuitry is configured to receive stroke data corresponding to strokes input in handwriting, receive voice data corresponding to voices of speakers, and display on a screen, in a first form, a first stroke group associated with a first period in the voice data in which a first speaker speaks, and display, in a second form, a second stroke group associated with a second period in the voice data in which a second speaker speaks, the second form different from the first form. | 06-09-2016 |
20160165038 | DIGITAL ASSISTANT ALARM SYSTEM - A digital assistant supported on devices such as smartphones, tablets, personal computers, game consoles, etc. exposes an updated and enhanced set of alarm functions to improve a device's user wake-up routines by applying automation rules to a variety of collected or sensed data and inputs in a context-aware manner in order to surface user experiences and content that are contextually meaningful and catered to the particular device user. The digital assistant can support an alarm system having network connectivity to other devices and external systems that enables the user to set an alarm and be awoken using a wide variety of stimuli such as sounds, voice, music, lights, and tactile sensations and then be given a summary of the upcoming day using verbal narration and graphical displays on the device. | 06-09-2016 |
20160179464 | SCALING DIGITAL PERSONAL ASSISTANT AGENTS ACROSS DEVICES | 06-23-2016 |
20160193502 | METHOD AND APPARATUS FOR PHYSICAL EXERCISE ASSISTANCE | 07-07-2016 |
20160196111 | INTERACTIVE VOICE RESPONSE INTERFACE FOR WEBPAGE NAVIGATION | 07-07-2016 |
20160202872 | IMAGE DISPLAY APPARATUS AND METHOD FOR OPERATING IMAGE DISPLAY APPARATUS | 07-14-2016 |
20160202951 | PORTABLE DIALOGUE ENGINE | 07-14-2016 |
20160253150 | Voice Controlled Marine Electronics Device | 09-01-2016 |
20160255188 | QUIET HOURS FOR NOTIFICATIONS | 09-01-2016 |
20160378194 | REMOTE CONTROL METHOD AND SYSTEM FOR VIRTUAL OPERATING INTERFACE - The present disclosure provides a remote control method for a virtual operating interface including the following steps. A voice command is obtained by the to-be-controlled terminal according to a voice signal input by a user, and the voice command is sent to a control terminal. A corresponding virtual operating interface is projected by the control terminal according to the voice command. A graphic image formed by the virtual operating interface is captured by the control terminal at each preset time interval. A remote command corresponding to the location within the virtual operating interface at which the user's fingertip is operating is determined by the control terminal according to the captured graphic image, and the remote command is sent to the to-be-controlled terminal, thereby controlling the to-be-controlled terminal to perform the corresponding remote operations. | 12-29-2016 |
20180024709 | SOCIAL MEDIA RADIO | 01-25-2018 |