Audio user interface

Subclass of:

715 - Data processing: presentation processing of document, operator interface processing, and screen saver display processing

715700000 - OPERATOR INTERFACE (E.G., GRAPHICAL USER INTERFACE)

Patent class list (only non-empty classes are listed)

Deeper subclasses:

Class / Patent application number | Description | Number of patent applications / Date published
715728000 | Audio input for on-screen manipulation (e.g., voice controlled GUI) | 163
715729000 | For a visually challenged user | 10
Entries
Document | Title | Date
20080229206AUDIBLY ANNOUNCING USER INTERFACE ELEMENTS - Systems, apparatus, methods and computer program products are described below for using surround sound to audibly describe the user interface elements of a graphical user interface. The position of each audible description is based on the position of the user interface element in the graphical user interface. A method is provided that includes identifying one or more user interface elements that have a position within a display space. Each identified user interface element is described in surround sound, where the sound of each description is positioned based on the position of each respective user interface element relative to the display space.09-18-2008
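
As a rough sketch of the idea in 20080229206 — positioning each spoken description in the sound field according to the element's on-screen location — the following Python maps a screen coordinate to constant-power stereo gains. The pan law, function name, and parameters are illustrative assumptions, not the patent's implementation; a full surround renderer would map the same normalized position to an azimuth rather than two channel gains.

```python
import math

def pan_gains_for_element(x, y, display_w, display_h):
    """Map an element's screen position to constant-power stereo gains.

    x, y: element centre in pixels; display_w, display_h: display size.
    Returns (left_gain, right_gain). Only the horizontal position is used
    here; a surround renderer could also map y to elevation or depth.
    """
    # Normalise horizontal position to [-1, 1] (left .. right of the display).
    nx = (2.0 * x / display_w) - 1.0
    nx = max(-1.0, min(1.0, nx))
    # Constant-power pan law: angle sweeps from 0 (hard left) to pi/2 (hard right).
    angle = (nx + 1.0) * math.pi / 4.0
    return math.cos(angle), math.sin(angle)

# Example: an element near the right edge of a 1920x1080 display.
left, right = pan_gains_for_element(1700, 300, 1920, 1080)
print(round(left, 3), round(right, 3))  # left gain small, right gain large
```
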
20080263451Method for Driving Multiple Applications by a Common Dialog Management System - The invention describes a method for driving multiple applications (A10-23-2008
20090013254Methods and Systems for Auditory Display of Menu Items - Various methods and systems are provided for auditory display of menu items. In one embodiment, a method includes detecting that a first item in an ordered listing of items is identified; and providing a first sound associated with the first item for auditory display, the first sound having a pitch corresponding to the location of the first item within the ordered listing of items.01-08-2009
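
The pitch-for-position idea in 20090013254 can be sketched as a simple mapping from list index to frequency; the geometric spacing and the frequency range below are assumptions for illustration only.

```python
def pitch_for_item(index, count, low_hz=220.0, high_hz=880.0):
    """Return a pitch (Hz) for the item at `index` in a list of `count` items.

    Earlier items get lower pitches, later items higher ones, spaced
    geometrically so equal index steps sound like equal musical intervals.
    """
    if count < 2:
        return low_hz
    ratio = (high_hz / low_hz) ** (index / (count - 1))
    return low_hz * ratio

menu = ["New", "Open", "Save", "Export", "Quit"]
for i, item in enumerate(menu):
    print(f"{item}: {pitch_for_item(i, len(menu)):.1f} Hz")
```
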
20090113305Method and system for creating audio tours for an exhibition space - The present invention describes a system and method for planning and authoring an audio and/or video guided tour of an exhibition space, such as a museum or gallery. A mapmaking tool is provided whereby the user can graphically map the exhibition space and the location of the exhibits within that space. An authoring tool is also provided whereby the user can, with the resulting map of an exhibition space resulting from the mapmaking tool, record audio tours for each exhibit. If the location of an exhibit is changed, the associated audio and/or video tour remains associated with such exhibit.04-30-2009
20090144626ENABLING AND EXERCISING CONTROL OVER SELECTED SOUNDS ASSOCIATED WITH INCOMING COMMUNICATIONS - An online identity may selectively control perceptibility of incoming sounds associated with electronic messages between online identities (FIG. 06-04-2009
20090150785Input device for inputting voice information including voice recognizer - An input device for inputting a command to an electronic system such as an on-board navigation system includes a microphone for inputting voice, a voice recognizer for analyzing the inputted voice and comparing it with data stored in a voice recognition dictionary, a touch panel displaying keys corresponding to the inputted voice and a controller for controlling operation of the input device. User's voice inputted from the microphone is fed into the voice recognizer to calculate a degree of coincidence with the data in the voice recognition dictionary. The keys corresponding to the inputted voice having a high degree of coincidence are displayed on the touch panel in an enlarged size. The enlarging rates may be determined according to the degree of coincidence. The user is able to finalize, easily and quickly, the keys constituting a command by touching the panel because candidate keys are enlarged.06-11-2009
20090158158SYSTEM FOR SCHEDULING AND TRANSMITTING MESSAGES - A system for scheduling and transmitting messages is disclosed. The system stores a plurality of audio files in an audio database, generates a schedule of queued messages via the plurality of audio files, transmits the queued messages based on the schedule, and reconfigures the schedule based on a user interaction delivering the queued messages in accordance with the reconfigured schedule. A scheduled plurality of messages can be transmitted in a clear and professional manner. Additionally, “ad hoc” messages can be incorporated into the schedule without significantly disrupting the other messages.06-18-2009
20090164905MOBILE TERMINAL AND EQUALIZER CONTROLLING METHOD THEREOF - A mobile terminal including an output unit configured to output sound, an equalizer configured to adjust parameters of the sound output by the output unit, a display unit including a touch screen and configured to display a Graphic User Interface (GUI) including a graphical guide that can be touched and moved to adjust the parameters of the sound output by the output unit, and a controller configured to control the equalizer to adjust the parameters of the sound output by the output unit in accordance with a shape of the graphical guide that is touched and moved.06-25-2009
20090217167Information processing apparatus and method and program - Disclosed herein is an information processing apparatus for executing control such that one of image data and audio data is made subject to reproduction and the other data made subject to accompanying reproduction to reproduce both the subject to reproduction and the subject to accompanying reproduction, including a comparing section configured to unify a form of a feature of the subject to reproduction and a form of a feature of the subject to accompanying reproduction and make a comparison between these features; and a selecting section configured to select, on the basis of a result of comparison made by the comparing section, the subject to accompanying reproduction for the subject to reproduction from candidates of at least one subject to accompanying reproduction.08-27-2009
20090222731MIXING INPUT CHANNEL SIGNALS TO GENERATE OUTPUT CHANNEL SIGNALS - Techniques for mixing multiple input channel signals into multiple output channel signals are provided. A graphical user interface (GUI), which includes multiple indicators, is displayed. The input channel signals are mixed to produce multiple output channel signals. The mixing is performed based on the distance between the indicators' positions in the GUI. According to one embodiment of the invention, the mixing is also performed based on the angle formed between the indicators. Thus, the extent to which an input channel signal is carried by an output channel signal is, in one embodiment of the invention, a function of both the distance between the indicators and an angle formed by the indicators in the GUI.09-03-2009
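
A minimal sketch of the distance-based mixing described in 20090222731, assuming an inverse-distance gain law (the patent also factors in the angle formed between indicators); the names and the rolloff constant are illustrative.

```python
import math

def mix_gains(input_positions, output_positions, rolloff=1.0):
    """Compute an input->output gain matrix from indicator positions in a GUI.

    Gain falls off with the distance between an input's indicator and each
    output's indicator, then is normalised so each input's gains sum to 1.
    """
    matrix = []
    for ix, iy in input_positions:
        raw = []
        for ox, oy in output_positions:
            d = math.hypot(ix - ox, iy - oy)
            raw.append(1.0 / (1.0 + rolloff * d))
        total = sum(raw)
        matrix.append([g / total for g in raw])
    return matrix

# Two input indicators mixed toward left/right output indicators.
gains = mix_gains([(0.2, 0.5), (0.9, 0.5)], [(0.0, 0.5), (1.0, 0.5)])
for row in gains:
    print([round(g, 2) for g in row])
```
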
20090241028COMPUTER SYSTEM FOR ADMINISTERING QUALITY OF LIFE QUESTIONNAIRES - The present invention includes in one embodiment, a computer survey system comprising a microprocessor, a display screen, a user input device and a database all in communication with the microprocessor. The microprocessor activates software that systematically administers a quality of life survey by displaying a plurality of quality of life questions on the display screen. The software requests responses from the participant using a user input device and stores the response in a database. The system further has a sound player that is configured to play a recording of the question displayed on the display screen upon activation of the sound player by the user input device.09-24-2009
20090249209CONTENT REPRODUCING APPARATUS AND CONTENT REPRODUCING METHOD - A content reproducing apparatus and a content reproducing method reproduce sound from image data or music data from the beginning, in a case where music data and image data is obtained from a content storage device and reproduced in succession. An input receiving unit receives a user's input, an external-output requesting unit requests a content storage device to externally output contents, and a reproducing unit reproduces the contents. The external-output requesting unit requests the content storage device to externally output the corresponding music data and the corresponding image data in analog format based on the input received by the input receiving unit, in order to cause the reproduction unit to reproduce music data and image data in succession. As a result, the content reproducing apparatus and method eliminate the necessity of switching reproduction of digital (or analog) information to the reproduction of analog (or digital) information, for reproducing the image (or music) data immediately after the music (or image) data.10-01-2009
20090254829USER INTERFACE WITH VISUAL PROGRESSION - In one embodiment, a graphics user interface is provided. The graphics user interface includes a plurality of graphical representations identifying separate audio data, respectively. Each of the plurality of graphical representations is configured in a list to be selected for playback of the respective audio data. A progression icon is displayed in each of the respective graphical representations. Each progression icon illustrates a temporal progression of the playback of the respective audio data.10-08-2009
20090259942VARYING AN AUDIO CHARACTERISTIC OF AN AUDIBLE NOTICE BASED UPON A PLACEMENT IN A WINDOW STACK OF THE APPLICATION INSTANCE ISSUING THE NOTICE - An application instance can be identified that is associated with an audible notice, which is to be presented. A placement of the application instance in a window stack can be determined. An audio characteristic (e.g., volume, pitch, speed, repetition, audio channel, etc.) of the audible notice can be adjusted based upon the determined placement in the window stack. Different placements in the windows stack can result in different adjustments. The adjusted audio notice can then be presented.10-15-2009
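
The window-stack idea in 20090259942 amounts to scaling an audio characteristic by the issuing window's depth in the stack; a minimal volume-only sketch, with an assumed linear attenuation curve, follows.

```python
def adjust_notice_volume(base_volume, stack_depth, max_depth=10, floor=0.2):
    """Attenuate an audible notice according to its window's depth in the stack.

    stack_depth 0 = frontmost window (full volume); deeper windows are
    progressively quieter, never dropping below `floor` of the base volume.
    Pitch, speed or repetition could be scaled by the same factor.
    """
    depth = min(stack_depth, max_depth)
    scale = 1.0 - (1.0 - floor) * (depth / max_depth)
    return base_volume * scale

for depth in (0, 2, 5, 10):
    print(depth, round(adjust_notice_volume(1.0, depth), 2))
```
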
20090292993Graphical User Interface Having Sound Effects For Operating Control Elements and Dragging Objects - Systems and methods for providing an enhanced auditory behavior to a graphical user interface are described. Control elements portrayed by the graphical user interface on a display are associated with at least two states. When transitioning between states, a sound effect specified for that transition can be provided to provide further user or designer customization of the interface appearance. Movement of objects can be accompanied by a repeated sound effect. Characteristics of both sound effects can be easily adjusted in volume, pitch and frequency.11-26-2009
20090307594Adaptive User Interface - A method comprising: obtaining music information that defines at least one characteristic of audible music; and controlling changes to an appearance of a graphical user interface using the music information.12-10-2009
20090313547MULTI-MEDIA TOOL FOR CREATING AND TRANSMITTING ARTISTIC WORKS - A system for collaboratively producing artwork includes a server and a plurality of user computers coupled to a network. The server transmits a request for submissions to collaborators, who prepare portions of the artwork using multimedia tools provided by the server computer. The collaborators transmit their completed portions of the artwork back to the server computer where they are compiled and transmitted to a recipient at a designated time.12-17-2009
20100017719Conferencing system with low noise - A conferencing system with low noise.01-21-2010
20100023864USER INTERFACE TO AUTOMATICALLY CORRECT TIMING IN PLAYBACK FOR AUDIO RECORDINGS - Exemplary embodiments of methods to automatically correct timing of recorded audio in a GUI are summarized here. One or more controls to adjust resolution of timing and degree of correction for the audio are displayed. The resolution of timing relates to beats on a grid and is affected by the degree of correction. The degree of correction is mapped to a time interval at each beat along the grid. Next, a user manipulation of one or more controls selecting a resolution and a degree of correction is received. Correction of timing is performed according to the selected resolution and degree of correction. Correcting of timing may include aligning a transient of the audio to the beat by compressing or stretching a portion of the audio. Compressing or stretching the portion of the audio depends on a length of the portion relative to a distance between adjacent beats.01-28-2010
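
The correction step in 20100023864 — pulling each detected transient toward the nearest beat by a user-chosen degree — can be sketched as below. In a real editor the shift would be realised by stretching or compressing the audio between beats; here only the timestamps move, and the one-beat grid resolution is an assumption.

```python
def correct_timing(transients, bpm, strength=1.0):
    """Nudge detected transient times (seconds) toward the nearest beat.

    `strength` is the degree of correction: 0.0 leaves timing untouched,
    1.0 snaps every transient exactly onto the grid. The resolution here
    is one beat; a finer grid would simply use a shorter interval.
    """
    beat = 60.0 / bpm
    corrected = []
    for t in transients:
        nearest = round(t / beat) * beat
        corrected.append(t + strength * (nearest - t))
    return corrected

hits = [0.02, 0.49, 1.07, 1.52]
print(correct_timing(hits, bpm=120, strength=0.5))  # move halfway to the grid
print(correct_timing(hits, bpm=120, strength=1.0))  # fully quantised
```
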
20100070863 METHOD FOR READING A SCREEN - The present disclosure is directed to a method for reading a computer screen having a set of information and a button for submitting the set of information. The method may comprise collecting the set of information; determining a set of representative information, wherein the set of representative information is a subset of the set of information; concatenating the set of representative information to form a summarized context; associating the summarized context with the button; and producing audible sound reciting the summarized context when the button receives focus from a computer mouse.03-18-2010
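
A minimal sketch of the screen-reading flow in 20100070863: pick a representative subset of the form's information, concatenate it into a summarized context, and speak it when the submit button gains focus. The field-selection rule and the `speak` callback are assumptions.

```python
def summarize_form(fields, max_items=3):
    """Build a short spoken summary ("summarized context") from form fields.

    `fields` is an ordered mapping of label -> value; only the first few
    filled-in fields are treated as representative information.
    """
    representative = [(k, v) for k, v in fields.items() if v][:max_items]
    return "; ".join(f"{k}: {v}" for k, v in representative)

def on_button_focus(fields, speak):
    """Called when the submit button receives focus (e.g. mouse hover)."""
    speak("Submit form with " + summarize_form(fields))

form = {"Name": "Ada Lovelace", "Email": "ada@example.com", "Comments": ""}
on_button_focus(form, speak=print)  # print stands in for a text-to-speech call
```
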
20100083116INFORMATION PROCESSING METHOD AND INFORMATION PROCESSING DEVICE IMPLEMENTING USER INTERFACE SUITABLE FOR USER OPERATION - A volume setting icon is provided with a slider for indicating a volume increasing from left toward right. A region in a direction of lower volume relative to a position corresponding to a current value, at which the slider is displayed, is identified as a volume low region, and a region in a direction of higher volume is identified as a volume high region. When the slider is selected, the slider can continuously be operated to move in any direction toward the volume low region and the volume high region. When the volume low region other than the slider is touched, the slider is instantaneously operated to move to a position corresponding to a touch position. When the volume high region other than the slider is touched, the slider is not instantaneously operated to move to a position corresponding to a touch position.04-01-2010
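
The asymmetric slider behaviour in 20100083116 reduces to a small touch-handling rule: jump instantly toward lower volume, but ignore taps in the higher-volume region unless the knob itself is grabbed. A sketch with assumed data structures:

```python
def on_touch(slider, touch_pos):
    """Asymmetric volume-slider behaviour described in this abstract.

    Touching below the current value jumps the slider straight there
    (volume can only go down); touching above it does nothing unless the
    knob itself is grabbed and dragged, preventing accidental loud jumps.
    """
    if abs(touch_pos - slider["value"]) <= slider["grab_radius"]:
        slider["dragging"] = True            # continuous movement allowed
    elif touch_pos < slider["value"]:
        slider["value"] = touch_pos          # instant move into the low region
    # touches in the high region (touch_pos > value) are ignored

slider = {"value": 0.7, "grab_radius": 0.05, "dragging": False}
on_touch(slider, 0.9)   # ignored - high region
on_touch(slider, 0.3)   # jumps down
print(slider["value"])  # 0.3
```
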
20100088603Medical patient device - A medical patient device having a medical measurement unit for detecting and processing analysis-specific signals, a computer unit, a user interface, a configuration data interface for receiving configuration files and having a memory unit, wherein the medical patient device is arranged and adapted so that by means of the user interface, a user can load configuration files into the patient device via the configuration data interface and store them in the memory unit, and can configure the user interface by accessing the downloaded configuration files stored in the memory unit.04-08-2010
20100088604INFORMATION STORAGE MEDIUM, COMPUTER TERMINAL, AND CHANGE METHOD - A computer terminal changes the non-evaluation property of one of the reference timings to the evaluation property based on a result of the evaluation conducted by comparing one of the reference timings having the evaluation property with the timing of the input performed by the operator.04-08-2010
20100095212METHODS AND APPARATUS FOR VISUALIZING A MEDIA LIBRARY - Visualizing and exploring a music library using metadata, such as genre, sub-genre, artist, and year, is provided. Geometric shapes, such as disks or rectangles, may be divided into sectors representing genre and each sector may be further divided into sub-sectors representing artists associated with each genre. The sector's relative size generally reflects the importance of the corresponding genre within the library. Likewise, the sub-sector's relative size generally reflects the importance of the corresponding artist within the genre which may be determined by the number of media items of the artist. Marks representing each media item may be arranged and displayed within the geometric shape to reflect the mark's corresponding genre, artist, and year. In addition, each mark may reflect an attribute, such as playcount, of the media item and each sector may reflect the mean value of an attribute of all media items within the sector.04-15-2010
20100100820USER SPECIFIC MUSIC IN VIRTUAL WORLDS - A computer-implemented method of providing user specific music for a virtual world environment can include, responsive to an input from a user, associating an event with a music source, wherein the event involves an avatar representing the user within a virtual world executing on a virtual world server and storing, within a client of the user, an association between the event and the music source. The client can monitor a virtual world session, within which the user is represented by the avatar, for the occurrence of the event, and responsive to the client detecting the event, outputting, from the client, audio played from the music source associated with the detected event, wherein the music source is played without involvement of the virtual world server.04-22-2010
20100115412IMAGE BROWSING APPARATUS AND IMAGE BROWSING METHOD - An image browsing apparatus has: a display unit for displaying image data; a reproducing unit for reproducing audio data; a detector for detecting a feature of the audio data reproduced by the reproducing unit; and a controller for, when predetermined audio data is reproduced by the reproducing unit, controlling an updating interval of the image data displayed to the display unit on the basis of the feature of the predetermined audio data detected by the detector.05-06-2010
20100122170SYSTEMS AND METHODS FOR INTERACTIVE READING - A method of interactive reading may include generating a user interface. The user interface may include audio data and video data received from a first user device and audio data and video received from a second user device. Additionally, a selection of a book from the first user device is received. A graphical representation of the book comprising text and illustrations from the book is displayed on the user interface, wherein the book is presented a single page at a time. An indication from the first user device or second user device to proceed to a second page of the book may be received, and the user interface may be updated to present the second page of the book. Also, the user interface may be transmitted to the first user device and the second user device. The method may also include a recording of the audio and video data sent by the user devices to be played back later.05-13-2010
20100131850METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR PROVIDING A CURSOR FOR INDICATING CONTEXT DATA IN A MAPPING APPLICATION - An apparatus, method and computer program product are provided for providing a cursor for indicating context data in a mapping application. An electronic device may display a map to a user via a mapping application having a cursor. A user may provide input selecting a type of context data to be represented by the cursor. The cursor may be positioned at a location on the map, and the electronic device may obtain context data based on the user input relating to the position and area proximate the position of the cursor on the map. The electronic device may then update a representation of the cursor using visual and other indicia to reflect the context data.05-27-2010
20100162119IMMERSIVE AUDIO COMMUNICATION - A method and system for using spatial audio in a virtual environment, which is capable of running on portable devices with limited processing power, and utilising low bandwidth communication connections. The system identifies a first avatar in the virtual environment; and determines if the avatar satisfies a reuse criterion, which will enable the system to reuse the audio information which has been generated for a second avatar in the virtual environment for the first avatar.06-24-2010
20100162120Digital Media Player User Interface - A user interface of a digital media player is disclosed. In one embodiment, a digital media player comprises a memory storing a plurality of channels of digital media files and a plurality of background animation files, wherein each channel is associated with a respective background animation file. The digital media player receives a user selection of a channel, displays a channel name of the selected channel, and plays the background animation file associated with the selected channel. In another embodiment, the digital media player receives a user selection of a channel, displays a channel name of the selected channel, and displays a visual representation of the plurality of channels, wherein a first indicia is used to represent the selected channel and a second indicia is used to represent the other channels.06-24-2010
20100162121DYNAMIC CUSTOMIZATION OF A VIRTUAL WORLD - A method and apparatus of dynamically customizing a virtual world. A first user and a second user engage in a conversation with respect to a location in the virtual world. A speech processor monitors the conversation and detects that a sound made matches a key sound. The virtual world is altered to include a virtual world customization based on the key sound. The virtual world customization may also be based on user information associated with the user in the conversation that made the sound.06-24-2010
20100162122Method and System for Playing a Sound Clip During a Teleconference - A system, method, and device for playing a sound clip during a teleconference. The method includes recording or otherwise obtaining a sound clip, storing the sound clip, associating the sound clip to a corresponding activation code, associating a description or identifier to each sound clip, selecting the sound clip to be replayed, and playing the sound clip during a teleconference, while the user is on mute, by entering a corresponding activation code into a user interface.06-24-2010
20100169781POSE TO DEVICE MAPPING - Embodiments may comprise logic such as hardware and/or code to map content of a device such as a mobile device, a laptop, a desktop, or a server, to a two dimensional field or table and map user poses or movements to the coordinates within the table to offer quick access to the content by a user. Many embodiments, for example, utilize three wireless peripherals such as a watch, ring, and headset connected to a mobile Internet device (MID) comprising an audible user interface and an auditory mapper to access the content. The audible user interface may communicatively couple with the peripherals to receive pose data that describes the motion or movements associated with one or more of the peripherals and to provide feedback such as audible items and, in some embodiments, other feedback.07-01-2010
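
The pose-to-content mapping in 20100169781 can be approximated by laying content items out in a two-dimensional grid and quantizing a normalized pose to a grid cell whose item is then announced audibly; the normalization and grid shape below are assumptions.

```python
def build_grid(items, columns):
    """Lay content items out in a 2-D table, row by row."""
    return {(i // columns, i % columns): item for i, item in enumerate(items)}

def item_for_pose(grid, yaw, pitch, rows, columns):
    """Map a head/hand pose to a grid cell.

    yaw and pitch are assumed normalised to [0, 1] across the user's
    comfortable range of motion; the cell's item would then be spoken.
    """
    row = min(int(pitch * rows), rows - 1)
    col = min(int(yaw * columns), columns - 1)
    return grid.get((row, col))

contacts = ["Alice", "Bob", "Carol", "Dave", "Erin", "Frank"]
grid = build_grid(contacts, columns=3)
print(item_for_pose(grid, yaw=0.9, pitch=0.1, rows=2, columns=3))  # "Carol"
```
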
20100192066METHOD AND SYSTEM FOR A GRAPHICAL USER INTERFACE - A method of providing capabilities for searching a plurality of audio files by a graphical user interface in a display. A window showing information concerning at least one of a plurality of audio files is configured to operate in an always-on-top mode within the user interface, while occupying an area that is substantially less than 10% of the display's overall area. The window provides a search entry element configured to receive at least one search criterion provided by a user, the search criterion established by the user to identify at least one audio file from among the plurality of audio files. Additional unobtrusive windows are provided in the same user interface to present the results of search actions, these additional windows being initialized as hidden, and visible when a user initiates a search action or actively engages the search results.07-29-2010
20100205532Customizable music visualizer - Audio/music visualizers have become standard features in most music/video software applications available for music/video players. The music visualizer presents the user with a beautiful presentation of music coupled with visuals that are synchronized to the music to create a compelling experience. The presented music visualizer provides a new ability to create a synchronized and personalized music visualization experience by a user without the need for programming. There are no preset effects, rather the user interacts with the visualizer system through a User Interface to create a visualization design through the use of video effects available through the UI. Once the design has been completed the system will synchronize the user's customized visualization design with an input musical selection. In this manner, the user has created their own customized music/video visualization which may also be stored for later playback or modification.08-12-2010
20100229094AUDIO PREVIEW OF MUSIC - Systems, methods, and machine-readable media are disclosed for providing an audio preview of songs and other audio elements. In some embodiments, the electronic device can provide a user-controllable pointer scrollable through various categories, such as different genres or artists. Responsive to each movement of the pointer, the electronic device can select a song from the pointed-to category and can play a portion of the selected song. In other embodiments, music groups can be defined, where each music group includes songs that go well together. The electronic device can play, in succession, a portion of one song from each of the music groups. Responsive to a user selection of a playing portion, the electronic device can create a playlist based on the music group of the selected playing portion.09-09-2010
20100235747METHOD FOR ADJUSTING PARAMETERS OF AUDIO DEVICE - A method for adjusting parameters of an audio device is provided and applied to an adjusting system operated by a user to adjust the audio device. The method includes steps of: executing an application program with the adjusting system; the application program providing a graphical user interface for receiving data wherein the graphical user interface at least includes a plurality of options to be selected; and adjusting a plurality of parameters of the audio device associated with a first option when the first option of the plurality options is selected and outputting a sound by the audio device.09-16-2010
20100241963SYSTEM, METHOD, AND APPARATUS FOR GENERATING, CUSTOMIZING, DISTRIBUTING, AND PRESENTING AN INTERACTIVE AUDIO PUBLICATION - Systems, methods, and apparatuses for generating, customizing, distributing, and presenting an interactive audio publication to a user are provided. A plurality of text-based and/or speech-based content items is converted into voice-navigable interactive audio content items that include segmented audio data, embedded visual content, and accompanying metadata. An audio publication is generated by associating one or more audio content items with one or more audio publication sections, and generating metadata that defines the audio publication structure. Assembled audio publications may be used to generate one or more new custom audio publications for a user by utilizing one or more user-defined custom audio publication templates. Audio publications are delivered to a user for presentation on an enabled presentation system. The user is enabled to navigate and interact with the audio publication, using voice commands and/or a button interface, in a manner similar to browsing visually-oriented content.09-23-2010
20100293468AUDIO CONTROL BASED ON WINDOW SETTINGS - A method includes displaying a window associated with an application, providing audio content associated with the application, receiving a user input, determining whether the user input corresponds to a window manipulation of the window, where the window manipulation is other than a closing of the window, determining an audio setting that corresponds to a window setting associated with the window manipulation, when it is determined that the user input corresponds to the window manipulation, and outputting the audio content in correspondence to the audio setting.11-18-2010
20100306657Audio-Enhanced User Interface for Browsing - Embodiments of the present invention pertain to, but are not limited to, browsing a displayed listing of stored audio content such as music in a music player, PC, or portable electronic device, including MP3 players and mobile phones. Various embodiments of the present invention recognize that to improve the user's browsing experience, the user's selections from the listing should be accompanied by audio effects specifically configured to facilitate the corresponding content navigation. For example, the effects could be that as the currently selected item in the listing changes, an excerpt of the music track (or of a member of the group of music tracks) in the new item (i.e., “new currently-selected” item) in the listing is played with 3D audio effects such that the position from which the music track appears to be playing is in symphony with the new item's movement on the user interface.12-02-2010
20100332988MOBILE MEDIA DEVICE USER INTERFACE - A mobile media device user interface is described. In one or more implementations, output of a plurality of audio content is monitored by a mobile media device. Each of the audio content was received via a respective one of a plurality of broadcast channels by the mobile media device. A user interface is displayed on a display device of the mobile media device, the user interface describing each of the plurality of audio content and the respective broadcast channel from which the audio content was received.12-30-2010
20110010626Device and Method for Adjusting a Playback Control with a Finger Gesture - In some embodiments, a method is performed at an electronic device with a touch-sensitive surface while the device is providing content. The device detects a finger contact at a first location on the surface. The first location and an edge of the surface define a first distance. The finger contact at the first location corresponds to a start of a control adjustment gesture for setting an adjustable parameter for providing content. In response to detecting the start of the control adjustment gesture, the device maps a range of positions associated with the adjustable parameter to correspond to at least a portion of the first distance; detects movement of the finger contact in the control adjustment gesture; and modifies the adjustable parameter for providing content in accordance with the movement of the finger contact in the control adjustment gesture and the mapping of the range of positions.01-13-2011
20110010627SPATIAL USER INTERFACE FOR AUDIO SYSTEM - A system, apparatus, and method for generating a spatial user interface for an application, system or device. The user interface includes a means of representing user interface functions or commands as audio signals, with the audio signals being perceived by the user in a spatially different location depending on the function or command. A user input device is provided to enable a user to select a function or command, or to navigate through the spatial representations of the audio signals.01-13-2011
20110029875VEHICLE ALARM CUSTOMIZATION SYSTEMS AND METHODS - Vehicle alarm customization systems and methods are disclosed. An exemplary method includes a vehicle alarm customization system providing a user interface configured to facilitate end-user customization of a vehicle alarm, receiving an end-user selection of an audio content instance via the user interface, accessing data representative of the audio content instance, and customizing the vehicle alarm to sound at least part of the audio content instance in response to a vehicle alarm trigger event.02-03-2011
20110029876CLICKLESS NAVIGATION TOOLBAR FOR CLICKLESS TEXT-TO-SPEECH ENABLED BROWSER - A clickless, text-to-speech enabled browser includes a navigation toolbar having a plurality of button graphics and a web page region which allows for the display of web pages. Each button graphic including a predefined active region, an associated text message related to the command function of the button graphic, and an event handler that invokes text-to-speech software code for automatically speaking the associated text message and then executing the command function associated with the button graphic.02-03-2011
20110041061Obfuscating identity of a source entity affiliated with a communique directed to a receiving user and in accordance with conditional directive provided by the receiving user - A computationally implemented method includes, but is not limited to: receiving one or more conditional directives from a receiving user, the one or more conditional directives delineating one or more conditions for obfuscating identity of a source entity affiliated with one or more communiqués directed to the receiving user; and presenting at least a second communiqué in response to at least a reception of a first communiqué affiliated with the source entity and in accordance with the one or more conditional directives, the second communiqué being presented in lieu of presenting the first communiqué. In addition to the foregoing, other method aspects are described in the claims, drawings, and text forming a part of the present disclosure.02-17-2011
20110055703Spatial Apportioning of Audio in a Large Scale Multi-User, Multi-Touch System - A large scale multi-user, multi-touch system with a specialized zone-based user interface including methods for space management and spatial apportioning of audio cues. The system comprises a multi-touch display component fabricated in dimensions sufficient for at least a plurality of users and for displaying projected images and for receiving multi-touch input. The apparatus includes a plurality of image projectors, a plurality of cameras for sensing multi-touch input and the apparatus includes interface software for managing user space. The interface software implements techniques for managing multiple users using the same user interface component by allocating physical spaces within the multi-touch display component and coordinating movement of displayed objects between the physical spaces. Embodiments include a plurality of audio transducers and methods for performing audio spatialization using the plurality of audio transducers corresponding to the physical spaces, apportioning of volume levels to the audio transducers based on movement of a displayed object.03-03-2011
20110072350SYSTEMS AND METHODS FOR RECORDING AND SHARING AUDIO FILES - Systems for recording and sharing audio files among a plurality of users. The systems include a server that is configured to receive, index, and store a plurality of audio files, which are received by the server from a plurality of sources, within at least one database in communication with the server. In addition, the server is configured to make one or more of the audio files accessible to one or more persons—other than the original sources of such audio files. Still further, the server is configured to receive and publish comments associated with the audio files within a graphical user interface of a website. The comments may be submitted to the server through the website by persons other than the original sources of such audio files.03-24-2011
20110087965METHOD FOR SETTING UP A LIST OF AUDIO FILES FOR A MOBILE DEVICE - A method for setting up a list of audio files for a mobile device, and a mobile device utilizing the method for setting up the list of audio files are described.04-14-2011
20110113337INDIVIDUALIZED TAB AUDIO CONTROLS - According to one general aspect, a method may include detecting an audio signal configured to be played from a local loudspeaker. The method may also include determining which of one or more applications is included with the audio signal, wherein the determined application includes one or more tabs. The method may include determining which tab of the one or more tabs of the determined application is included with the audio signal. The method may comprise providing a graphical user interface (GUI) included with a handle of the determined tab, wherein the graphical user interface is configured to facilitate the manipulation of the audio signal by a user. The method may include manipulating the audio signal, based upon a command generated by the graphical user interface in response to a user interaction.05-12-2011
20110119589Navigable User Interface for Electronic Handset - An electronic device having a user interface on which selectable operation indicators are navigated wherein each operational indicator is associated with a corresponding application or other selectable item. The operational indicators are sequentially identified, in a specified order and for a specified time interval, in response to a first input at the user interface. Selection occurs in response to a second input at the user interface during the corresponding time interval. Such selection may, for example, launch an application or cause the display of a submenu or perform some other function.05-19-2011
20110138284THREE-STATE TOUCH INPUT SYSTEM - A touch screen input device is provided which simulates a 3-state input device such as a mouse. One of these states is used to preview the effect of activating a graphical user interface element when the screen is touched. In this preview state touching a graphical user interface element on the screen with a finger or stylus does not cause the action associated with that element to be performed. Rather, when the screen is touched while in the preview state audio cues are provided to the user indicating what action would arise if the action associated with the touched element were to be performed.06-09-2011
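
A sketch of the three-state idea in 20110138284: touching an element only previews it with an audio cue, and a separate explicit action performs it. The event names and the double-tap convention are illustrative assumptions.

```python
class ThreeStateTouchInput:
    """Minimal state machine for the touch/preview/activate idea.

    States: not touching; previewing (finger on an element: speak a cue,
    perform nothing); activating (an explicit second action performs it).
    """
    def __init__(self, speak, perform):
        self.speak, self.perform = speak, perform
        self.current = None            # element currently under the finger

    def on_touch(self, element):
        self.current = element
        self.speak(f"{element}: double-tap to activate")   # preview only

    def on_lift(self):
        self.current = None

    def on_activate(self):
        if self.current is not None:
            self.perform(self.current)

ui = ThreeStateTouchInput(speak=print, perform=lambda e: print("activated", e))
ui.on_touch("Delete message")   # announced, nothing happens yet
ui.on_activate()                # now the action runs
```
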
20110138285PORTABLE VIRTUAL HUMAN-MACHINE INTERACTION DEVICE AND OPERATION METHOD THEREOF - A portable virtual input control device and a method thereof are provided, which can achieve a purpose of human-machine interaction and remote control. The portable virtual input control device comprises a main body, an operation interface display unit connected to the main body, an image capturing module adjacent to the main body, and a central processing module, built in the main body. The operation interface display unit is movably disposed in front of a head of the user for displaying an operation interface corresponding to a controlled device. The image capturing module captures a position image of a hand of the user outside the operation interface display unit. The central processing module transmits display data to the operation interface display unit to display the operation interface, and is connected to the image capturing module for receiving the position image of the hand of the user captured by the image capturing module, and determining a control command input by the user according to the display data and the position image of the hand of the user.06-09-2011
20110154204Web-Enabled Conferencing and Meeting Implementations with a Subscription-Based Model - Meeting and conferencing systems and methods are implemented in a variety of manners. Consistent with an embodiment of the present disclosure, a meeting system is implemented that includes a computer server arrangement with at least one processor. The computer server arrangement is configured to provide a web-based meeting-group subscription option to potential meeting participants. A meeting scheduling data is received over a web-accessible virtual meeting interface. The meeting scheduling data includes group identification information and meeting time information. In response to the group identification information, participant identification information is retrieved for participants that subscribe to a meeting group identified by the group identification information. In response to the meeting time information and the participant identifying information, audio connections are established for participants of the meeting. Merged audio from the established audio connections is provided to the participants over the established audio connections.06-23-2011
20110167350Assist Features For Content Display Device - Systems, techniques, and methods are present for allowing a user to interact with the text in a touch-sensitive display in order to learn more information about the content of the text. Some examples can include presenting augmented text from an electronic book in a user-interface, the user-interface displayed in a touch screen; receiving touch screen input by the touch screen, the touch screen input corresponding to a portion of the augmented text; determining a command associated with the touch screen input from amongst multiple commands associated with the portion of the augmented text, each of the multiple commands being configured to invoke a function to present information regarding the portion of the augmented text; and presenting, based on the command associated with the received touch screen input, information corresponding to the identified portion of the augmented text.07-07-2011
20110173539ADAPTIVE AUDIO FEEDBACK SYSTEM AND METHOD - Various techniques for adaptively varying audio feedback data on an electronic device are provided. In one embodiment, an audio user interface implementing certain aspects of the present disclosure may devolve or evolve the verbosity of audio feedback in response to user interface events based at least partially upon the verbosity level of audio feedback provided during previous occurrences of the user interface event. In another embodiment, an audio user interface may be configured to vary the verbosity of audio feedback associated with a navigable list of items based at least partially upon the speed at which a user navigates the list. In a further embodiment, an audio user interface may be configured to vary audio feedback verbosity based upon the contextual importance of a user interface event. Electronic devices implementing the present techniques provide an improved user experience with regard to audio user interfaces.07-14-2011
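
One of the behaviours described in 20110173539 — varying audio-feedback verbosity with list navigation speed — can be sketched as a threshold rule; the thresholds and feedback strings below are purely illustrative.

```python
def feedback_for_item(item, seconds_per_step):
    """Pick a verbosity level for spoken list feedback from navigation speed.

    Fast scrolling gets a terse cue (or just a tick); slow movement gets
    the full description. Thresholds are illustrative only.
    """
    if seconds_per_step < 0.15:
        return "tick"                           # non-speech cue while flicking
    if seconds_per_step < 0.6:
        return item["name"]                     # short form
    return f'{item["name"]}, {item["detail"]}'  # full, verbose form

song = {"name": "Track 7", "detail": "4 minutes 12 seconds, album Blue"}
for speed in (0.05, 0.3, 1.2):
    print(speed, "->", feedback_for_item(song, speed))
```
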
20110185278METHODS FOR PROVIDING A PLAYLIST BY ACQUIRING RADIO DATA SYSTEM INFORMATION FROM MULTIPLE RADIO STATIONS - Wireless electronic devices with two frequency modulation (FM) tuners are provided. An electronic device may use a first FM tuner to tune to a current radio station and may use a second FM tuner to scan other radio stations to obtain a list of radio data system (RDS) information. The electronic device may use the list of RDS information to display a master playlist that includes an alternate song list reflecting songs that are currently playing on the other radio stations. A user of the electronic device may select a song from the alternate song list, may switch to a new radio station to listen to the selected alternate song, may tag the song for later purchase, or may return to the master playlist. The user may purchase the tagged songs at a later time through a media management service that can be launched directly on the electronic device.07-28-2011
20110185279METHODS AND SYSTEMS FOR REQUESTING AND DELIVERING MELODY MESSAGES - A method for requesting and creating a personalized message by selecting a song clip, entering a recipient's phone number, and recording a personalized message, which is then sent to the identified recipient at a scheduled date and time by phone. The personalized message can be received by a recipient who answers the phone, or can be recorded by the recipient's voicemail system.07-28-2011
20110225498PERSONALIZED AVATARS IN A VIRTUAL SOCIAL VENUE - A method is provided for creating a personalized social setting for sharing streaming media content. A virtual social venue comprising a virtual three-dimensional setting for sharing streaming media content is created. Users from one or more social networks are invited to participate in the virtual social venue. Avatars are provided to each user who enters the virtual social venue. The users' profile images are extracted from the social network(s) from which they were invited, and mapped onto the avatars of the corresponding users. Alternatively, users designate video feeds that are mapped onto their corresponding avatars. Streaming media content is then presented to the users in the three dimensional setting of the virtual social venue. In this setting, users are able to see avatar representations of other users, along with their mapped profile images or video feeds, while viewing and/or listening to the streaming media content.09-15-2011
20110271192MANAGING CONFERENCE SESSIONS VIA A CONFERENCE USER INTERFACE - Various embodiments of systems, methods, and computer programs are disclosed for managing conference sessions via a graphical user interface. One such method comprises: a conferencing system establishing a conference session between a plurality of participants accessing the conferencing system via a communication network; presenting a conference interface via a graphical user interface to a client device operated by one of the participants; displaying in the conference interface a participant object identifying each of the plurality of participants in the conference session; selecting one of the participant objects via the graphical user interface; moving the selected participant object via the graphical user interface to a drop target associated with a breakout session; removing the participant from the conference session; and adding the participant to the breakout session.11-03-2011
20110271193PLAYBACK APPARATUS, PLAYBACK METHOD AND PROGRAM - A playback apparatus is provided that includes an operation plane, a detection unit to detect which of contact relationship including first contact relationship, second contact relationship with a higher degree of contact than the first contact relationship and third contact relationship with a higher degree of contact than the second contact relationship the operation plane and an operation object have, a creation unit to create a list screen of content data for selecting content data based on movement of the operation object with respect to the operation plane in the first contact relationship, a playback unit to play back content data, and a playback control unit to cause the playback unit to play back content data selected on the list screen when becoming the second contact relationship from the first contact relationship if contact relationship becomes the third contact relationship from the second contact relationship.11-03-2011
20110276882AUTOMATIC GROUPING FOR USERS EXPERIENCING A SPECIFIC BROADCAST MEDIA - According to one aspect, embodiments of the invention provide a method for grouping chat users, the method comprising acts of receiving, via a first interface of a server, audio signals from a user over a communication network, receiving, via a second interface of the server, audio signals from a plurality of broadcast channels over the communication network; comparing, by a processor, the audio signals received from the user and the audio signals received from the plurality of broadcast channels, determining, by the processor, based on the act of comparing, that the audio signals from the user correspond to a program currently being broadcast on one of the plurality of broadcast channels, and grouping, by the processor, the user into a chat group based on at least one grouping criteria, the at least one grouping criteria including the program currently being broadcast.11-10-2011
20110276883Online Multiplayer Virtual Game and Virtual Social Environment Interaction Using Integrated Mobile Services Technologies - A system, method, and software allowing user interaction with virtual games/virtual social environments by using a mobile device and SMS/MMS messaging over a cellular telephone network. A communication device server sends to and accepts from a mobile aggregator, asynchronous SMS/MMS/text/email messages related to a virtual game running on a game server. The virtual game server integrates with a virtual social network utility such as Facebook®, and is accessible to the user through a dedicated web server. Multiple users may register with the social network utility to access the virtual game over the computer network (using the social network utility web server or a dedicated web server) or over the cellular telephone network (using the aggregator/communication device server). This system and method allows the user to interact with the virtual game/virtual social network while the user is offline.11-10-2011
20110307787SYSTEM AND METHOD FOR ACCESSING ONLINE CONTENT - An example method of accessing a web page includes receiving audio output from speakers of electronic equipment; detecting a cue in the received audio output; determining a web address based on the detected cue; and connecting to a web page using the web address.12-15-2011
20110314381NATURAL USER INPUT FOR DRIVING INTERACTIVE STORIES - A system and method are disclosed for combining interactive gaming aspects into a linear story. A user may interact with the linear story via a NUI system to alter the story and the images that are presented to the user. In an example, a user may alter the story by performing a predefined exploration gesture. This gesture brings the user into the 3-D world of the displayed image. In particular, the image displayed on the screen changes to create the impression that a user is stepping into the 3-D virtual world to allow a user to examine virtual objects from different perspectives or to peer around virtual objects.12-22-2011
20110320949Gesture Recognition Apparatus, Gesture Recognition Method and Program - There is provided a gesture recognition apparatus including a recognition unit for recognizing a gesture based on a set of gesture information input in a given input period, a prediction unit for predicting the gesture from halfway input gesture information among the set of gesture information, and a notification unit for notifying a user of prediction information about the result of predicting the gesture. A user can confirm what kind of gesture is recognized by continuing to input gesture information through a notification of the prediction information.12-29-2011
20120023406AUDIO MIXING CONSOLE - Mixer includes first and second displays each capable of displaying a pop-up screen that simultaneously displays pieces of information of eight channels. Once a display instruction for displaying a pop-up screen is received, it is ascertained whether or not the pop-up screen currently instructed to be displayed can be displayed on both of the first and second displays. If the pop-up screen can be displayed on only one of the displays, the currently instructed pop-up screen and a one-screen channel selection switch are displayed together on the one display. If, on the other hand, the pop-up screen can be displayed on both of the displays, the instructed pop-up screen is displayed in a two-screen format on individual ones of the displays, and a two-screen channel selection switch is displayed on each one of the pop-up screens.01-26-2012
20120036437Method, Devices, and System for Delayed Usage of Identified Content02-09-2012
20120066600MULTIMODAL USER NOTIFICATION SYSTEM TO ASSIST IN DATA CAPTURE - A system for executing a multimodal software application includes a mobile computer device with a plurality of input interface components, the multimodal software application, and a dialog engine in operative communication with the multimodal software application. The multimodal software application is configured to receive first data from the plurality of input interface components. The dialog engine executes a workflow description from the multimodal software application by providing prompts to an output interface component. Each of these prompts includes notification indicating which of the input interface components are valid receivers for that respective prompt. Furthermore, the notification may indicate the current prompt and at least the next prompt in sequence.03-15-2012
20120131462HANDHELD DEVICE AND USER INTERFACE CREATING METHOD - A handheld device stores mapping relationships between a plurality of user sound types and a plurality of user situations. The handheld device detects a user sound signal from the surroundings of the handheld device, and analyzes the user sound signal to obtain a corresponding user sound type. The handheld device determines a corresponding user situation according to the corresponding user sound type and the mapping relationships between the plurality of user sound types and the plurality of user situations, and creates a user interface corresponding to the determined user situation.05-24-2012
20120151348Using Cinematographic Techniques for Conveying and Interacting with Plan Sagas - The subject disclosure is directed towards obtaining a linear narrative synthesized from a set of objects, such as objects corresponding to a plan, and using cinematographic and other effects to convey additional information with that linear narrative when presented to a user. A user interacts with data from which the linear narrative is synthesized, such as to add transition effects between objects, change the lighting, focus, size (zoom), pan and so forth to emphasize or de-emphasize an object, and/or to highlight a relationship between objects. A user instruction may correspond to a theme (e.g., style or mood), with the effects, possibly including audio, selected based upon that theme.06-14-2012
20120151349APPARATUS AND METHOD OF MAN-MACHINE INTERFACE FOR INVISIBLE USER - Man-machine interface apparatus and method for an invisible user are provided. The man-machine interface apparatus includes: a touch recognizing unit recognizing a touch by the invisible user; and a voice notifying unit notifying the invisible user of a name of a menu or application service corresponding to the touched position through a voice.06-14-2012
20120198339Audio-Based Application Architecture - An application architecture comprises one or more audio interfaces placed within the premises of users. A cloud-based application engine receives audio information from the interfaces and provides information to cloud-based applications based on the audio within the user premises. The other applications utilize the information to provide or enhance services to the users.08-02-2012
20120198340SINGLE ACTION AUDIO INTERFACE UTILISING BINARY STATE TIME DOMAIN MULTIPLE SELECTION PROTOCOL - An interface protocol for the functional manipulation of complex devices such as consumer electronic devices without the necessity of the visual feedback via textual or graphic data, wherein the sensor functions change with time rather than placement, so that a user action biases a binary state switch, which is correlated to a timed audible audio data stream, the correlation indicating the desired action selected by the user.08-02-2012
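
The time-domain selection protocol of 20120198340 can be sketched as a loop that announces options in turn and returns whichever option is current when the single binary switch goes active; the dwell time, function names, and the polling harness are assumptions.

```python
import itertools
import time

def run_selection(options, announce, switch_pressed, dwell=1.0):
    """Cycle through options audibly; one binary switch picks the current one.

    `announce(option)` would play or speak the option; `switch_pressed()`
    polls the one-bit input. Whichever option is being announced when the
    switch goes active is the selection - no display or pointing needed.
    """
    while True:
        for option in options:
            announce(option)
            deadline = time.monotonic() + dwell
            while time.monotonic() < deadline:
                if switch_pressed():
                    return option
                time.sleep(0.01)

# Toy harness: the switch goes active roughly 1.5 s in, so the option being
# announced at that moment (typically the second one) is returned.
presses = itertools.chain([False] * 150, itertools.repeat(True))
choice = run_selection(["Play", "Next track", "Volume up"], announce=print,
                       switch_pressed=lambda: next(presses), dwell=1.0)
print("selected:", choice)
```
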
20120204110SYSTEM AND METHOD FOR AN IN-SYSTEM EMAIL INTERFACE - The embodiments describe an in-vehicle text email method and system. The in-vehicle text email system provides email to a driver in a format that is suitable for output in a vehicle. The format combines audio as well as visual output mechanisms to deliver an email to the driver in a way that minimizes the level of attention needed to digest an email while operating the vehicle.08-09-2012
20120210233Smartphone-Based Methods and Systems - Methods and arrangements involving portable devices, such as smartphones and tablet computers, are disclosed. One arrangement enables a creator of content to select software with which that creator's content should be rendered—assuring continuity between artistic intention and delivery. Another arrangement utilizes the camera of a smartphone to identify nearby subjects, and take actions based thereon. Others rely on near field chip (RFID) identification of objects, or on identification of audio streams (e.g., music, voice). Some of the detailed technologies concern improvements to the user interfaces associated with such devices. Others involve use of these devices in connection with shopping, text entry, sign language interpretation, and vision-based discovery. Still other improvements are architectural in nature, e.g., relating to evidence-based state machines, and blackboard systems. Yet other technologies concern use of linked data in portable devices—some of which exploit GPU capabilities. Still other technologies concern computational photography. A great variety of other features and arrangements are also detailed.08-16-2012
20120260176GESTURE-ACTIVATED INPUT USING AUDIO RECOGNITION - In one example, a method includes, displaying, at a presence-sensitive screen of a computing device, an input field in a region of a graphical user interface (GUI). The method further includes receiving, at the presence-sensitive screen, user input including one or more gestures to select the input field, wherein the one or more gestures to select the input field include motion at a location of the presence-sensitive screen that corresponds to the region of the GUI displaying the input field. The method also includes, while the input field is selected, detecting, by the computing device, an audio signal and identifying, by the computing device, at least one input value based on the detected audio signal. The method also includes assigning, by the computing device, the at least one input value to the input field in the GUI.10-11-2012
20120260177GESTURE-ACTIVATED INPUT USING AUDIO RECOGNITION - In one example, a method includes, displaying, at a presence-sensitive screen of a computing device, an input field in a region of a graphical user interface (GUI). The method further includes receiving, at the presence-sensitive screen, user input including one or more gestures to select the input field, wherein the one or more gestures to select the input field include motion at a location of the presence-sensitive screen that corresponds to the region of the GUI displaying the input field. The method also includes, while the input field is selected, detecting, by the computing device, an audio signal and identifying, by the computing device, at least one input value based on the detected audio signal. The method also includes assigning, by the computing device, the at least one input value to the input field in the GUI.10-11-2012
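The two gesture-activated entries above describe the same basic flow: a gesture on a presence-sensitive screen selects an input field, an audio signal is captured while the field remains selected, and a value recognized from that audio is assigned to the field. Below is a minimal Python sketch of that flow; the InputField and GestureSpeechForm names and the stubbed recognize() function are illustrative assumptions, not the patents' actual implementation.

```python
# Minimal sketch of the gesture-then-speech flow: select a field by touch,
# then fill it from recognized speech. The recognizer is a stub.

class InputField:
    def __init__(self, name, region):
        self.name = name
        self.region = region  # (x, y, width, height) on the screen
        self.value = None

    def contains(self, x, y):
        rx, ry, rw, rh = self.region
        return rx <= x < rx + rw and ry <= y < ry + rh


def recognize(audio_samples):
    """Stand-in for a speech recognizer that turns audio into text."""
    return "jane.doe@example.com"


class GestureSpeechForm:
    def __init__(self, fields):
        self.fields = fields
        self.selected = None

    def on_gesture(self, x, y):
        # A gesture at (x, y) selects the field whose region contains it.
        self.selected = next((f for f in self.fields if f.contains(x, y)), None)
        return self.selected

    def on_audio(self, audio_samples):
        # While a field is selected, recognized speech becomes its value.
        if self.selected is not None:
            self.selected.value = recognize(audio_samples)
            return self.selected.value
        return None


form = GestureSpeechForm([InputField("email", (0, 0, 200, 40))])
form.on_gesture(50, 20)            # tap inside the email field
print(form.on_audio(b"\x00\x01"))  # spoken input fills the selected field
```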
20120266071AUDIO CONTROL OF MULTIMEDIA OBJECTS - In some examples, aspects of the present disclosure may include techniques for audio control of one or more multimedia objects. In one example, a method includes receiving an electronic document that includes a group of one or more multimedia objects capable of generating audio data. The method also includes registering a multimedia object of the group of one or more multimedia objects, wherein registering the multimedia object comprises storing a multimedia object identifier that identifies the multimedia object. The method further includes receiving audio data; and determining, by a computing device, a volume level of the audio data generated by the registered multimedia object based on one or more configuration parameters, wherein the one or more configuration parameters define one or more volume levels associated with the multimedia object identifier. The method also includes outputting, to an output device, the audio data at the determined volume level.10-18-2012
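The registration-plus-configuration scheme in 20120266071 reduces to a lookup from multimedia object identifiers to volume levels. Here is a minimal sketch under that reading, assuming a dictionary-based configuration and plain sample lists; the class and method names are hypothetical.

```python
# Per-object volume control: objects are registered by identifier, and
# configuration parameters map identifiers to volume levels.

class AudioController:
    def __init__(self, config):
        # config: {object_id: volume_level in [0.0, 1.0]}
        self.config = config
        self.registered = set()

    def register(self, object_id):
        self.registered.add(object_id)

    def output(self, object_id, samples):
        if object_id not in self.registered:
            raise KeyError(f"{object_id} was never registered")
        volume = self.config.get(object_id, 1.0)  # default: full volume
        # Scale the raw samples by the configured per-object volume.
        return [s * volume for s in samples]


ctrl = AudioController({"ad-banner": 0.0, "video-player": 0.8})
ctrl.register("ad-banner")
ctrl.register("video-player")
print(ctrl.output("ad-banner", [0.5, -0.5]))     # muted
print(ctrl.output("video-player", [0.5, -0.5]))  # attenuated
```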
20120324355SYNCHRONIZED READING IN A WEB-BASED READING SYSTEM - A system and method are presented for creating a synchronized reading session in a web-based reading environment. Data relating to books, chapters, and pages is maintained in a database. Through a speaker interface, one user operating a speaker computer controls the pages that appear on the interfaces of all participants in the synchronized reading session. The speaker interface also accepts an audio input that is shared with the interfaces of all participants in the session. All participants view the same pages and hear the same audio in synchronization. Temporary audio input abilities may be granted to a particular participant computer to allow a participant to ask a question. Temporary page control can also be granted to a particular participant computer. The speaker computer retains the ability to revoke temporary control granted to participant computers.12-20-2012
20120331387METHOD AND SYSTEM FOR PROVIDING GATHERING EXPERIENCE - The present disclosure relates to the use of gestures and feedback to facilitate gathering experiences and/or applause events with natural, social ambience. For example, audio feedback responsive to participant action may swell and diminish in response to intensity and social aspects of participant participation. Each participant can have unique sounds or other feedback assigned to represent their actions to create a social ambience.12-27-2012
20130047087RELATED INFORMATION SUCCESSIVELY OUTPUTTING METHOD, RELATED INFORMATION SUCCESSIVELY PROVIDING METHOD, RELATED INFORMATION SUCCESSIVELY OUTPUTTING APPARATUS, RELATED INFORMATION SUCCESSIVELY PROVIDING APPARATUS, RELATED INFORMATION SUCCESSIVELY OUTPUTTING PROGRAM AND RELATED INFORMATION SUCCESSIVELY PROVIDING PROGRAM - Related information successively outputting and providing methods and apparatus are disclosed wherein the processing load on a related information successively providing apparatus upon provision of content related information can be reduced significantly. The outputting apparatus selects, as noticed content information, content identification information set as a successive output object in a list included in page information acquired from the providing apparatus, and acquires content related information coordinated with the selected content identification information from the providing apparatus. The outputting apparatus outputs the acquired content related information and detects an end of the outputting. When an end of the outputting is detected, the selection of noticed content information, the acquisition and outputting of related information, and the detection of an end of outputting are successively executed again. As a result, content related information acquired from the providing apparatus is output automatically and successively.02-21-2013
20130097510AUDIO ADJUSTMENT SYSTEM - An audio adjustment system is provided that can output a user interface customized by the provider of the audio system instead of the electronic device manufacturer. Such an arrangement can save both field engineers and manufacturers a significant amount of time. Advantageously, in certain embodiments, such an audio adjustment system can be provided without knowledge of the electronic device's firmware. Instead, the audio adjustment system can communicate with the electronic device through an existing audio interface in the electronic device to enable a user to control audio enhancement parameters in the electronic device. For instance, the audio adjustment system can control the electronic device via an audio input jack on the electronic device.04-18-2013
20130097511Positioning a Virtual Sound Capturing Device in a Three Dimensional Interface - A method, system, and computer-readable product for positioning a virtual sound capturing device in a graphical user interface (GUI) are disclosed. The method includes displaying a virtual sound capturing device in relation to a virtual sound producing device in a three dimensional interface and in a two dimensional graphical map. Additionally, the method includes adjusting the display of the virtual sound capturing device in relation to the virtual sound producing device in both the three dimensional interface and the two dimensional graphical map in response to commands received from an input device.04-18-2013
20130111348Prioritizing Selection Criteria by Automated Assistant05-02-2013
20130139061DESKTOP SOUND SOURCE DISCOVERY - Multiple applications may execute on a computing device, and the computing system may monitor the multiple applications, identify a set of applications generating sound, and determine whether at least one sound-related criterion is satisfied. If at least one sound-related criterion is satisfied, the computing system displays sound indicators for the set of applications generating sound.05-30-2013
20130139062Audio Indicator of Position Within a User Interface - A mobile communication device and method for controlling a user interface of the mobile communication device are disclosed. The method includes generating a multi-page graphical user interface that enables a user of the mobile communication device to control operations of the mobile communication device, displaying a current page of the multi-page user interface, changing the current page of the multi-page user interface that is displayed in response to a user action, and projecting an audible sound for each page of the multi-page user interface that is displayed.05-30-2013
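Entry 20130139062 projects a distinct audible sound for each page of a multi-page interface. A minimal sketch of one way to do that follows, assuming pages are simply mapped onto a rising tone range; the frequency mapping is an illustrative choice, and the tone is synthesized with the standard-library wave module rather than played on a device.

```python
# Map each page of a multi-page UI to a distinct tone so the user can hear
# which page is currently displayed.

import math
import struct
import wave

SAMPLE_RATE = 44100

def page_tone_frequency(page_index, total_pages, low=440.0, high=880.0):
    """Map page 0..total_pages-1 onto a rising frequency range."""
    if total_pages <= 1:
        return low
    return low + (high - low) * page_index / (total_pages - 1)

def write_tone(path, frequency, duration=0.15):
    n = int(SAMPLE_RATE * duration)
    with wave.open(path, "w") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        for i in range(n):
            sample = int(32767 * 0.4 * math.sin(2 * math.pi * frequency * i / SAMPLE_RATE))
            w.writeframes(struct.pack("<h", sample))

# Changing to page 2 of 5 produces a tone partway up the range.
write_tone("page_indicator.wav", page_tone_frequency(2, 5))
```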
20130145271CALENDAR INTERFACE FOR DIGITAL COMMUNICATIONS - Information from communications is displayed in a calendar format. Text from the communications is used to determine whether a scheduling entry should be created. If so, text from the communication is used to create a proposed calendar or to-do list entry, which can be saved, modified or canceled by the user. Information from a call log can be filtered and displayed in a calendar format.06-06-2013
20130159861Adaptive Audio Feedback System and Method - Various techniques for adaptively varying audio feedback data on an electronic device are provided. In one embodiment, an audio user interface implementing certain aspects of the present disclosure may devolve or evolve the verbosity of audio feedback in response to user interface events based at least partially upon the verbosity level of audio feedback provided during previous occurrences of the user interface event. In another embodiment, an audio user interface may be configured to vary the verbosity of audio feedback associated with a navigable list of items based at least partially upon the speed at which a user navigates the list. In a further embodiment, an audio user interface may be configured to vary audio feedback verbosity based upon the contextual importance of a user interface event. Electronic devices implementing the present techniques provide an improved user experience with regard to audio user interfaces.06-20-2013
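The adaptive-verbosity idea in 20130159861 can be pictured as two rules: feedback for a user interface event gets terser the more often that event has already been announced, and list feedback gets terser as navigation speed rises. The sketch below illustrates both rules; the thresholds, phrasing, and class name are assumptions for illustration only.

```python
# Adaptive audio feedback verbosity: devolve repeated announcements and
# shorten list feedback as the user navigates faster.

class AdaptiveFeedback:
    def __init__(self):
        self.occurrences = {}  # event name -> times already announced

    def event_feedback(self, event, full_text, short_text):
        seen = self.occurrences.get(event, 0)
        self.occurrences[event] = seen + 1
        # Devolve from the full description to the short one after a few repeats.
        return full_text if seen < 3 else short_text

    def list_item_feedback(self, item, items_per_second):
        if items_per_second > 5:
            return None                 # too fast: stay silent or play a tick
        if items_per_second > 2:
            return item["title"]        # medium speed: title only
        return f'{item["title"]}, {item["artist"]}'  # slow: verbose

fb = AdaptiveFeedback()
for _ in range(4):
    print(fb.event_feedback("low_battery", "Battery is at ten percent", "Ten percent"))
print(fb.list_item_feedback({"title": "Track 1", "artist": "Band"}, 1.0))
```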
20130185639TERMINAL HAVING PLURAL AUDIO SIGNAL OUTPUT PORTS AND AUDIO SIGNAL OUTPUT METHOD THEREOF - A terminal having plural audio signal output ports and an audio signal output method thereof are provided. The audio signal output method of a terminal having at least two audio signal output ports includes setting allocation information on which audio signal source is allocated to each of the audio signal output ports according to a user input, extracting the set allocation information, and outputting an audio signal of an audio signal source through a corresponding audio signal output port according to the extracted allocation information. An audio signal output apparatus capable of variously using an audio signal output port according to the need of a user, and a method thereof may be provided.07-18-2013
20130191753Balancing Loudspeakers for Multiple Display Users - A method consistent with the present invention involves displaying a window on a computer monitor; at one or more programmed processors, determining a position of the window on the computer monitor; at the one or more programmed processors, deducing a user position for a user of the window based on the position of the window on the computer monitor; and steering audio signals from an application running in the window to a loudspeaker in an array of loudspeakers that is closer to the deduced user position than another loudspeaker in the array. This abstract is not to be considered limiting, since other embodiments may deviate from the features described in this abstract.07-25-2013
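The steering step in 20130191753 amounts to picking the loudspeaker nearest the position deduced from the window's location. A minimal sketch, assuming a one-dimensional speaker layout along the monitor and hypothetical coordinate values:

```python
# Route a window's audio to the loudspeaker closest to the user position
# deduced from that window's position on the monitor.

def window_center(window):
    x, y, width, height = window
    return (x + width / 2.0, y + height / 2.0)

def nearest_speaker(position, speakers):
    """Pick the speaker whose x coordinate is closest to the deduced position."""
    px, _ = position
    return min(speakers, key=lambda s: abs(s["x"] - px))

speakers = [{"name": "left", "x": 0}, {"name": "center", "x": 960}, {"name": "right", "x": 1920}]
window = (1400, 100, 400, 300)          # an app window on a 1920-wide monitor
user_position = window_center(window)   # deduce the user sits in front of it
print(nearest_speaker(user_position, speakers)["name"])  # -> "right"
```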
20130198635Managing Multiple Participants at the Same Location in an Online Conference - Various embodiments of systems, methods, and computer programs are disclosed for managing multiple participants at the same location in an online conference. One embodiment is a method for providing an online conference comprising: a conferencing system establishing an audio conference with a plurality of client devices via a communication network, each client device associated with a first participant; the conferencing system determining at least one second participant co-located with one of the first participants and the corresponding client device; and the conferencing system presenting, to each of the client devices, the audio conference and a conference user interface, the conference user interface displaying a participant object identifying each of the first and second participants.08-01-2013
20130212478AUDIO NAVIGATION OF AN ELECTRONIC INTERFACE - Embodiments of methods, apparatuses, devices and/or systems for navigating electronic interfaces via an audio signal comprising commands are disclosed.08-15-2013
20130227417SYSTEMS AND METHODS FOR PROMPTING USER SPEECH IN MULTIMODAL DEVICES - A method for prompting user input for a multimodal interface including the steps of providing a multimodal interface to a user, where the interface includes a visual interface having a plurality of input regions, each having at least one input field; selecting an input region and processing a multi-token speech input provided by the user, where the processed speech input includes at least one value for at least one input field of the selected input region; and storing at least one value in at least one input field.08-29-2013
20130238999SYSTEM AND METHOD FOR MUSIC COLLABORATION - Techniques are provided for enabling a collaborative music session between multiple participants. In certain embodiments, a user may create a jam session using his/her electronic device. One or more other participants may then join the jam session using their electronic devices. The jam session participants may then jam together using their electronic devices as virtual music instruments. A musical memento of the jam session can then be stored for subsequent playback.09-12-2013
20130263004APPARATUS AND METHOD OF GENERATING A SOUND EFFECT IN A PORTABLE TERMINAL - An apparatus and method of a portable terminal outputting a sound effect are provided. An operation method of the portable terminal includes sensing an input, identifying a handwriting tool used for the input and a handwriting face displayed, and outputting a sound that mimics an actual handwriting operation of the portable terminal.10-03-2013
20130263005Adaptation of Gaming Applications to Participants - Methods, systems, and products adapt gaming applications to participants. Should one of the participants be a minor, for example, the gaming application may adapt to scenarios that are appropriate for minors. Similarly, the gaming application may adapt to customs associated with a country of a participant.10-03-2013
20130283164System for Controlling Association of Microphone and Speakers - A method includes the steps of determining individual ones of the speakers and microphones connected to a first computerized appliance by execution of a software routine, determining individual ones of the computer applications executable on the first or a second computerized appliance and capable of audio input and output, and associating the speakers and microphones with the computer applications such that audio output from individual applications is provided only to associated speakers and audio input from individual microphones is provided only to associated applications.10-24-2013
20130283165METHOD AND USER EQUIPMENT FOR UNLOCKING SCREEN SAVER - Embodiments of the present invention provide a method and user equipment for unlocking a screen saver, which can implement personalized operations of screen saver unlocking. The method includes: detecting a position of a first input on a screen; detecting a duration of the first input when the position of the first input falls into a user-preset track; and unlocking the screen saver when the duration exceeds a time threshold. The corresponding user equipment includes a position detecting module, a time detecting module, and a screen. The above technical solutions may implement personalized operations of screen saver unlocking and increase fun by detecting whether the position of a user input falls into a user-preset track and detecting the duration of the user input.10-24-2013
20130283166VOICE-BASED VIRTUAL AREA NAVIGATION - Examples of systems and methods for voice-based navigation in one or more virtual areas that define respective persistent virtual communication contexts are described. These examples enable communicants to use voice commands to, for example, search for communication opportunities in the different virtual communication contexts, enter specific ones of the virtual communication contexts, and bring other communicants into specific ones of the virtual communication contexts. In this way, these examples allow communicants to exploit the communication opportunities that are available in virtual areas, even when hands-based or visual methods of interfacing with the virtual areas are not available.10-24-2013
20130298027VOICE OUTPUT DEVICE, INFORMATION INPUT DEVICE, FILE SELECTION DEVICE, TELEPHONE SET, AND PROGRAM AND RECORDING MEDIUM OF THE SAME - A device, computer program and method for outputting linguistic information. The voice output device, for example, includes an output information acquisition unit acquiring linguistic information and attribute information. Attribute information includes an attribute added to each linguistic element included in the linguistic information. A tactile pattern storage unit stores a predetermined tactile pattern corresponding to each linguistic element. A tactile pattern acquisition unit acquires the tactile pattern from the tactile pattern storage unit. A voice output unit reads aloud the linguistic elements and a tactile pattern output unit outputs, in parallel with reading aloud each linguistic element, the tactile pattern corresponding to the attribute added to the linguistic element, thereby allowing a user to sense the tactile pattern by the sense of touch.11-07-2013
20130298028MULTIFUNCTIONAL INPUT DEVICE - A system and method for interacting with a graphical user interface (GUI) on a display means of a computing device. The system includes a multifunctional input device configured to interact with a simplified graphical user interface (GUI) of the computing device. The multifunctional input device is configured to provide improved accessibility to the computing device for users having limited computing knowledge and/or skills, as well as limited physical and/or cognitive abilities.11-07-2013
20130332837METHODS AND APPARATUS FOR SOUND MANAGEMENT - Systems and techniques for location based sound management. Information associated with a display is defined or selected based on user inputs and sounds associated with responsive locations of the display. Defining of the information associated with the display may be based at least in part on user touches to a touch screen display. Parameters for playback are also selected, such as a number of audio channels, based on user inputs that may similarly comprise user touches to the touch screen display. Playback parameters may be changed during playback based on user inputs occurring during playback.12-12-2013
20140006953METHOD FOR OPERATING A PORTABLE TERMINAL01-02-2014
20140013231DISPLAYING IMAGES FOR PEOPLE ASSOCIATED WITH A MESSAGE ITEM - Technologies are described herein for displaying a list of people associated with a message item along with images and other personal context information in a PIM application. The people associated with the message item are identified and a list is generated containing a name, an image, and other personal context information for each. The list of people associated with the message item is displayed in a window of the PIM along with the information regarding the message item.01-09-2014
20140026055Accessible Reading Mode Techniques For Electronic Devices - Techniques are disclosed for providing accessible reading modes in electronic computing devices. The user can transition between a manual reading mode and an automatic reading mode using a transition gesture. The manual reading mode may allow the user to navigate through content, share content with others, aurally sample and select content, adjust the reading rate, font, or volume, or configure other reading and/or device settings. The automatic reading mode facilitates an electronic device reading automatically and continuously from a predetermined point with a selected voice font, volume, and rate, and only responds to a limited number of command gestures that may include scrolling to the next or previous sentence, paragraph, page, chapter, section or other content boundary. For each reading mode, earcons may guide the selection and/or navigation techniques, indicate content boundaries, confirm user actions or selections, or otherwise provide an intuitive and accessible user experience.01-23-2014
20140033044PERSONALIZED 3D AVATARS IN A VIRTUAL SOCIAL VENUE - A social media platform is provided for interacting in a three-dimensional platform. The platform leverages a social graph from a user's social network to enable the user to invite social networking friends to interact within a three-dimensional virtual social venue. The platform also provides avatars to the user and friends who enter the virtual social venue. The platform also leverages the social graph to import user profile pictures from the social network, superimpose them onto the avatars, and display the avatars with the superimposed profile pictures within the context of the virtual three-dimensional space. Alternatively, the platform imports streaming images of the users from cameras, superimposes them onto the avatars, and displays the avatars with their superimposed streaming images within the context of the virtual three-dimensional space.01-30-2014
20140059437Interactivity With A Mixed Reality - Methods of interacting with a mixed reality are presented. A mobile device captures an image of a real-world object where the image has content information that can be used to control a mixed reality object through an offered command set. The mixed reality object can be real, virtual, or a mixture of both real and virtual.02-27-2014
20140068440POP OUT MUSIC CONTROL PANE IN BROWSER - Methods, systems and computer programs are presented for generating media tabs for playing media files of various websites or applications. One method includes detecting a selected website through a browser and scanning the selected website to identify media files. For identified media files in the selected website, the method creates a media tab for association with the browser. The method generates a unified set of media controls for the media tab, where the unified set of media controls is mapped to native controls of the selected website having the media files. The method provides tab rendering data for the media tab. The tab rendering data is configured for associating the media tab with the browser. The tab rendering data, when associated with the browser, enables input at the media tab to be communicated to selected ones of the native controls, without accessing the native controls at the selected website.03-06-2014
20140068441TYPETELL TOUCH SCREEN KEYSTROKE ANNOUNCER - A method for validating a touch made on a touch-sensitive screen such as, for example, making a keystroke on a screen keyboard on a computing device with a touch-sensitive screen (such as a smart phone or a tablet computer). The user intends to make an intended single touch on a desired zone on the screen, and then touches an actually-touched zone on the screen. The actual touch may or may not be on the zone desired by the user. The computing device initiates, in real time, an audible response corresponding to the actual touch (such as a real time announcement of the keystroke actually made). The user can thus validate that as she types, each keystroke she makes is accurate, one by one, “on the fly”. Software embodying the method may be available in any form, such as embedded in an operating system or downloadable as a separate app.03-06-2014
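The keystroke announcer in 20140068441 maps the actually-touched zone to a key and announces it in real time so the user can validate each keystroke. A minimal sketch follows; the keyboard geometry is a simplifying assumption, and speak() stands in for a platform text-to-speech call rather than any real API.

```python
# Announce each keystroke as it is made so the user can verify it "on the fly".

KEY_WIDTH, KEY_HEIGHT = 40, 60
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def key_at(x, y):
    row = ROWS[min(int(y // KEY_HEIGHT), len(ROWS) - 1)]
    col = min(int(x // KEY_WIDTH), len(row) - 1)
    return row[col]

def speak(text):
    print(f"[TTS] {text}")  # placeholder for a real text-to-speech engine

def on_touch(x, y, typed):
    key = key_at(x, y)       # the actually-touched zone, not the intended one
    typed.append(key)
    speak(key)               # real-time audible confirmation of the keystroke
    return key

typed = []
on_touch(45, 10, typed)   # announces "w"
on_touch(10, 70, typed)   # announces "a"
print("".join(typed))
```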
20140082500Natural Language and User Interface Controls - Natural language and user interface control techniques are described. In one or more implementations, a natural language input is received that is indicative of an operation to be performed by one or more modules of a computing device. Responsive to determining that the operation is associated with a degree to which the operation is performable, a user interface control is output that is manipulable by a user to control the degree to which the operation is to be performed.03-20-2014
20140089805MOBILE TERMINAL AND CONTROLLING METHOD THEREOF - A mobile terminal and a controlling method thereof are disclosed, which allow a terminal to be used with greater convenience for the user. The present invention includes saving a first memo sheet including at least one memo object and at least one audio memo object for the first memo sheet, displaying the first memo sheet on a touchscreen, and, when a prescribed memo object is selected from the at least one memo object displayed on the first memo sheet, controlling an audio memo object corresponding to the selected memo object to be output via an audio output unit. Accordingly, voice memo content and other memo contents can be efficiently recorded and read.03-27-2014
20140096003Vehicle Audio System Interface - A vehicle audio system interface is provided, as well as a method of using same, in which a visual representation of the vehicle's passenger cabin is displayed on the vehicle's touch-screen. Also displayed on the touch-screen is a touch sensitive balance slide controller and a touch sensitive fade slide controller. As the user makes left-right balance selections on the balance controller, and front-rear fader selections on the fade controller, an acoustic sweet spot designator is presented on the displayed representation of the passenger cabin. The acoustic sweet spot designator, which corresponds to the pre-determined acoustic sweet spot, is based on the combination of the current left-right balance and front-rear fader settings.04-03-2014
20140108934IMAGE DISPLAY APPARATUS AND METHOD FOR OPERATING THE SAME - An image display apparatus and a method for operating the same are disclosed. The image display apparatus operating method includes receiving a touch input or a gesture input in a first direction, outputting a first sound corresponding to the first direction, receiving a touch input or a gesture input in a second direction, and outputting a second sound corresponding to the second direction. Therefore, it may be possible to improve user convenience.04-17-2014
20140115479Audio Management Method and Apparatus - An audio management method and apparatus relate to the field of communications technologies. The method includes: when it is detected that there is a first web page audio to be automatically played, determining whether there is an audio being played and whether the priority of the first web page audio is lower than the priority of the audio being played; if so, intercepting automatic play of the first web page audio; otherwise, playing the first web page audio. The priority of a first web page audio to be automatically played and the priority of a web page audio being played are compared to decide whether to play the first web page audio, so that conflicts between web page audios are resolved and the user is not required to perform operations one by one.04-24-2014
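The decision rule in 20140115479 is a straightforward priority comparison: a lower-priority audio that wants to auto-play is intercepted, otherwise it plays. A minimal sketch of that comparison, with priority values chosen purely for illustration:

```python
# Decide whether a web page audio that wants to auto-play should be blocked
# in favor of the audio already playing.

def handle_autoplay(new_audio, currently_playing):
    """Return the audio that should be playing after the decision."""
    if currently_playing is None:
        return new_audio                      # nothing playing: just play it
    if new_audio["priority"] < currently_playing["priority"]:
        return currently_playing              # intercept the lower-priority audio
    return new_audio                          # equal or higher: let it play

music = {"name": "background music", "priority": 1}
alert = {"name": "notification chime", "priority": 5}
print(handle_autoplay(alert, music)["name"])  # chime plays over the music
print(handle_autoplay(music, alert)["name"])  # music auto-play is intercepted
```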
20140149868METHOD AND SYSTEM FOR PROVIDING AUDIO ASSISTANCE FOR NON-VISUAL NAVIGATION OF DATA - A method and system for providing audio assistance for non-visual navigation of data. The method includes receiving a scroll input; determining a page length associated with the data; associating the page length with a plurality of steps; associating an audio pattern to each step; determining a step, of the plurality of steps, that corresponds to the scroll input; and playing, to the user, the audio pattern corresponding to the step. The system includes an electronic device, a communication interface, a memory and a processor to receive a scroll input, to determine a page length associated with the data, to associate the page length with a plurality of steps, to associate an audio pattern to each step, to determine a step, of the plurality of steps, that corresponds to the scroll input and to play, to the user, the audio pattern corresponding to the step.05-29-2014
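Entry 20140149868 divides the page length into steps and plays the audio pattern for the step a scroll input falls in. A minimal sketch of that mapping, where the "patterns" are just names standing in for sounds a real device would play:

```python
# Split a page into steps and return the audio pattern for the step that a
# given scroll offset falls in.

def build_steps(page_length, patterns):
    step_size = page_length / len(patterns)
    return [(i * step_size, (i + 1) * step_size, patterns[i])
            for i in range(len(patterns))]

def pattern_for_scroll(scroll_offset, steps):
    for start, end, pattern in steps:
        if start <= scroll_offset < end:
            return pattern
    return steps[-1][2]   # clamp to the last step at the end of the page

steps = build_steps(page_length=3000,
                    patterns=["one beep", "two beeps", "three beeps", "four beeps"])
print(pattern_for_scroll(200, steps))    # near the top -> "one beep"
print(pattern_for_scroll(2900, steps))   # near the bottom -> "four beeps"
```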
20140149869AUTOMATIC RATING SYSTEM USING BACKGROUND AUDIO CUES - Methods and systems for capturing, transmitting and processing data for generating ratings relating to multimedia programming based on passively obtained user cues are disclosed herein.05-29-2014
20140157128SYSTEMS AND METHODS FOR PROCESSING SIMULTANEOUSLY RECEIVED USER INPUTS - In a multiuser touch sensitive display device including a touch sensitive display screen capable of displaying a plurality of windows for interacting with a plurality of users simultaneously, each window providing a user interface for a running instance of an application program run by one of the plurality of users for receiving touch sensitive inputs and displaying content output of the application program instance, and a plurality of output ports that can be coupled to a plurality of peripheral devices including audio output devices for generating audio outputs, wherein the multiuser touch sensitive display device runs an operating system module that can simultaneously interact with a plurality of instances of one or more application programs, a method is provided for processing simultaneously received user inputs through the plurality of windows displayed on the touch sensitive display screen.06-05-2014
20140164927Talk Tags - Systems, methods, and computer readable storage mediums are provided to create talk tags in accordance with various embodiments. A digital image is obtained. A user selection of a point of interest within the digital image is received. An expandable data container associated with the point of interest is created. An audio annotation, such as a voice description, of an image is received with respect to the selected point of interest. A pinpoint audio annotation associated with the point of interest is then created and stored. The pinpoint audio annotation can be shared with other users. The other users can respond with additional annotations of the digital image. The additional annotations may be provided within the pinpoint audio annotation or may be associated with other points of interest within the digital image.06-12-2014
20140173439USER INTERFACE FOR OBJECT TRACKING - A method for tracking a location of an object using a mobile communication device having a processor and a display includes generating, by the processor, a graphical user interface for display on the display. The graphical user interface includes a first button representing the object, a second button displaying a signal strength of the object, and a third button displaying an alarm status of the object. The method also includes displaying, in response to actuation of the first button, a settings menu screen associated with the object; displaying, in response to actuation of the second button, an alarm sensitivity menu associated with the object; and toggling on and off, in response to actuation of the third button, an alarm associated with the object.06-19-2014
20140195918EYE TRACKING USER INTERFACE - A method for providing a graphic interface is disclosed. The method includes the steps of displaying a set of interface tiles on a display device, detecting a location of a user's gaze, identifying that a user is looking at one tile of the set of interface tiles for a set period of time, and displaying an expansion tile along with the set of interface tiles, where the expansion tile comprises additional content associated with the identified tile that the user is looking at.07-10-2014
20140201639AUDIO USER INTERFACE APPARATUS AND METHOD - A method comprises converting an audio frequency domain signal into one or more voltage signals. Then the characteristics of the one or more voltage signals are determined. Afterwards, the characteristics of the one or more voltage signals are compared with one or more characteristics of an audio trigger command. An audio user interface is then activated on the basis of the comparison.07-17-2014
20140215339CONTENT NAVIGATION AND SELECTION IN AN EYES-FREE MODE - Techniques are disclosed for facilitating the use of an electronic device having a user interface that is sensitive to a user's gestures. An “eyes-free” mode is provided in which the user can control the device without looking at the device display. Once the eyes-free mode is engaged, the user can control the device by performing gestures that are detected by the device, wherein a gesture is interpreted by the device without regard to a specific location where the gesture is made. The eyes-free mode can be used, for example, to look up a dictionary definition of a word in an e-book or to navigate through and select options from a hierarchical menu of settings on a tablet. The eyes-free mode advantageously allows a user to interact with the user interface in situations where the user has little or no ability to establish concentrated visual contact with the device display.07-31-2014
20140215340CONTEXT BASED GESTURE DELINEATION FOR USER INTERACTION IN EYES-FREE MODE - Techniques are disclosed for facilitating the use of an electronic device having a user interface that is sensitive to a user's gestures. An “eyes-free” mode is provided in which the user can control the device without looking at the device display. Once the eyes-free mode is engaged, the user can control the device by performing gestures that are detected by the device, wherein a gesture is interpreted by the device without regard to a specific location where the gesture is made. The eyes-free mode can be used, for example, to look up a dictionary definition of a word in an e-book or to navigate through and select options from a hierarchical menu of settings on a tablet. The eyes-free mode advantageously allows a user to interact with the user interface in situations where the user has little or no ability to establish concentrated visual contact with the device display.07-31-2014
20140223310Correction Menu Enrichment with Alternate Choices and Generation of Choice Lists in Multi-Pass Recognition Systems - A method is described for user correction of speech recognition results. A speech recognition result for a given unknown speech input is displayed to a user. A user selection is received of a portion of the recognition result needing to be corrected. For each of multiple different recognition data sources, a ranked list of alternate recognition choices is determined which correspond to the selected portion. The alternate recognition choices are concatenated or interleaved together and duplicate choices removed to form a single ranked output list of alternate recognition choices, which is displayed to the user. The method may be adaptive over time to derive preferences that can then be leveraged in the ordering of one choice list or across choice lists.08-07-2014
20140258868VIDEO PHONE SYSTEM - A system allocates channel bandwidth based on the data received from a plurality of remote sources. A de-multiplexer/priority circuit separates two or more different data streams into their components parts. A stream modification driver modifies one or more characteristics of the data received from the de-multiplexer/priority circuit based on a priority assigned to the data by the de-multiplexer/priority circuit. The de-multiplexer/priority circuit determines the data transfer rates for each of the different data streams based on the assigned priority.09-11-2014
20140282002Method and Apparatus for Facilitating Use of Touchscreen Devices - Exemplary embodiments are described wherein an auxiliary sensor attachable to a touchscreen computing device provides an additional form of user input. When used in conjunction with an accessibility process in the touchscreen computing device, wherein the accessibility process generates audible descriptions of user interface features shown on a display of the device, actuation of the auxiliary sensor by a user affects the manner in which concurrent touchscreen input is processed and audible descriptions are presented.09-18-2014
20140282003CONTEXT-SENSITIVE HANDLING OF INTERRUPTIONS - A list of notification items is received, the list including a plurality of notification items, wherein each respective one of the plurality of notification items is associated with a respective urgency value. An information item is detected. In some implementations, the information item is a communication (e.g., an email). In some implementations, the information item is a change in context of a user. Upon determining that the information item is relevant to the urgency value of the first notification item, the urgency value of the first notification item is adjusted. Upon determining that the adjusted urgency value satisfies the predetermined threshold, a first audio prompt is provided to a user.09-18-2014
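The context-sensitive interruption handling in 20140282003 adjusts a notification's urgency when a detected information item is relevant to it, and issues an audio prompt once the adjusted urgency crosses a threshold. Below is a minimal sketch of that loop; the relevance test, the urgency increments, and the threshold value are illustrative assumptions.

```python
# Adjust notification urgency when a relevant information item arrives and
# prompt the user once the adjusted urgency crosses a threshold.

PROMPT_THRESHOLD = 0.8

def is_relevant(information_item, notification):
    return information_item["topic"] == notification["topic"]

def process_information_item(information_item, notifications):
    prompts = []
    for n in notifications:
        if is_relevant(information_item, n):
            n["urgency"] = min(1.0, n["urgency"] + 0.3)   # adjust the urgency value
            if n["urgency"] >= PROMPT_THRESHOLD:
                prompts.append(f"Audio prompt: {n['text']}")
    return prompts

notifications = [{"topic": "flight", "text": "Your flight boards soon", "urgency": 0.6},
                 {"topic": "email", "text": "New newsletter", "urgency": 0.2}]
gate_change = {"topic": "flight", "kind": "context_change"}
print(process_information_item(gate_change, notifications))
```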
20140282004System and Methods for Recording and Managing Audio Recordings - A system and method for recording and managing a plurality of sound takes for an at least one sound part associated with a sound project are disclosed. In at least one embodiment, a primary user is capable of selectively adding at least one sound part to the sound project, and at least one sound take for a given sound part. The primary user is also able to select one of the previously recorded sound takes for each of the at least one sound parts for simultaneous playback during the recording of any new sound takes.09-18-2014
20140289630Systems and Methods for Semi-Automatic Audio Problem Detection and Correction - One exemplary embodiment involves receiving identifications of audio problems in a segment of audio and identifications of corrections for applying to attempt to correct the audio problems, wherein the audio problems were identified by a device applying one or more audio problem detection algorithms to the segment of audio. The exemplary embodiment further involves displaying a user interface comprising representations of the audio problems and representations of the corrections and, in response to receiving a command through the user interface to initiate application of a correction of the corrections, initiating application of the correction.09-25-2014
20140289631INPUT APPARATUS, INPUT METHOD, AND INPUT PROGRAM - An operation input unit detects an operation position corresponding to an operation input; a display processing unit changes a display item to be displayed on a display unit of a plurality of items, depending on a change of the operation position; a processing control unit continues a process of changing the display item to be displayed on the display unit until a predetermined input is received, while a vehicle is moving, and when an operation speed of moving the operation position is higher than a preset threshold of the operation speed, or an operation acceleration at which the operation speed changes is higher than a preset threshold of the operation acceleration; and an item selecting unit selects any of the plurality of items based on the predetermined input.09-25-2014
20140298176SCROLLING TECHNIQUES FOR USER INTERFACES - Systems and methods for improving the scrolling of user interfaces of electronic devices are provided.10-02-2014
20140304604INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - Provided is an information processing device including a display control unit configured to initiate a selective displaying of content screens, and a sound output control unit configured to generate localization information of a notification sound associated with a first content screen that is not currently being displayed and initiate an outputting of the notification sound to a user in accordance with the localization information while a second content screen is being displayed.10-09-2014
20140337739Method and Mobile Device for Activating Voice Intercom Function of Instant Messaging Application Software - A method and a mobile device are described for activating the voice intercom function of instant messaging application software. In the method, a touch screen detects a touch action. The mobile device determines whether the touch point of the touch action is located in a preset location area and whether the touch action is a preset touch action. The mobile device activates the voice intercom function of the instant messaging application software if both determination results are affirmative. The method and mobile device thus allow the voice intercom function of instant messaging application software to be activated conveniently.11-13-2014
20140359449AUTOMATED GENERATION OF AUDIBLE FORM - A method for generating an audible information request form, comprising, receiving web page data for a website, analyzing the web page data to determine attributes related to a user interactive web element of the website, generating an audible form comprising one or more audible information inquiries based on the user interactive web element and establishing a telephone connection with a phone device associated with a second user. In certain aspects, the method further comprises steps for prompting the second user by providing the one or more audible information inquiries to the phone device, receiving audible response information from the phone device, wherein the audible response information comprises one or more responses related to the one or more audible information inquiries; and generating a request form based on the one or more responses. Systems and computer-readable media are also provided.12-04-2014
20140359450ACTIVATING A SELECTION AND A CONFIRMATION METHOD - An apparatus, method, and computer program product for: receiving an indication of a pre-defined gesture detected by a motion tracking device, in response to receiving the indication of the detected pre-defined gesture, activating a selection method, wherein the selection method is dependent on motion detected by the motion tracking device, and activating a confirmation method for confirming a selection, wherein the confirmation method is independent of motion detected by the motion tracking device.12-04-2014
20140365895DEVICE AND METHOD FOR GENERATING USER INTERFACES FROM A TEMPLATE - An electronic device is configured to receive a first request to display a user interface of a first third-party application on a respective display that is in communication with the device. The device is further configured to, in response to receiving the request, obtain a first user-interface template configured to be used by a plurality of third-party applications, and request, from the first application, one or more values for populating the first template. The device is also configured to receive, from the first application, while the first application is running on the device, a first set of values for populating the first template; populate the first template with the first set of values; generate a first user interface for the first application using the first template populated with the first set of values; and send information to the respective display that enables the first user interface to be displayed.12-11-2014
20150019973MEMORIZATION SYSTEM AND METHOD - A system for memorization of content includes a memory storing the content. The memory includes a tangible computer readable medium with an instruction set, and at least one database. The database has information corresponding to the content. A processor communicating with the memory is configured to: execute the instruction set to present information corresponding to an original discrete portion of the content, and conduct an accuracy analysis of at least one version of the discrete portion audibly repeated by a user. An audio input device communicating with the processor receives the version from the user for use in the accuracy analysis by the processor. An input device inputs a user predetermined quantity of repetitions of the version presented to the user prior to the user progressing to a next discrete portion of the original discrete portion when a user predetermined accuracy threshold of the accuracy analysis is met.01-15-2015
20150046824SYNCHRONIZED DISPLAY AND PERFORMANCE MAPPING OF MUSICAL PERFORMANCES SUBMITTED FROM REMOTE LOCATIONS - Systems and methods are provided for assembling and displaying a visual ensemble of musical performances that were created and uploaded from one or more locations that are remote from a host of the network, a director or other administrator reviewing submissions for selection and assembly, or perhaps merely remote from one or more other submissions received over a computer network. The assembled performances include a plurality of submissions, the submissions including performances created and uploaded at one or more locations remote from the location of the director for the assembly and display over the computer network. Systems and methods are also included for mapping one performance against another performance qualitatively, quantitatively, in real-time, or some combination thereof, enabling a musician, or a reviewer of performances, in the assessment of one performance relative to another performance.02-12-2015
20150121227Systems and Methods for Communicating Notifications and Textual Data Associated with Applications - Embodiments are provided for communicating notifications and other textual data associated with applications installed on an electronic device. According to certain aspects, a user can interface with an input device to send (04-30-2015
20150121228PHOTOGRAPHING IMAGE CHANGES - Provided herein is a control method of an electronic device. A gesture is detected, and a plurality of images of a user are photographed if the gesture substantially corresponds to a predetermined gesture. A predetermined function is performed if the change detected in the images is less than a predetermined threshold during a predetermined time period.04-30-2015
20150135078USER INTERFACE CONTROL IN PORTABLE SYSTEM - This document discloses a portable system comprising a physical activity monitoring device comprising: a wireless proximity detection module configured to detect a proximity of an input control entity with respect to the physical activity monitoring device and output a control signal as a response to the detection, wherein the proximity is a non-zero distance between the input control entity and the training computer; and a user interface controller configured to generate, as a response to the control signal from the wireless proximity detection module, at least one of an audio control function and a display control function.05-14-2015
20150135079BLIND CONTROL SYSTEM FOR A VEHICLE - A blind control system for a vehicle includes a touch screen configured to detect an input operation of a user and a position of the input, output a corresponding signal, and display an operated menu item, an accelerator pedal sensor configured to detect an operation of an accelerator pedal and output a corresponding signal, a storage configured to store a non-driving mode including a plurality of set menu items, and a driving mode including a plurality of set menu items; and a controller configured to selectively execute the non-driving mode or the driving mode stored in the storage according to an input signal of the accelerator pedal sensor, execute a corresponding menu item according to an input operation of the touch screen, and a position of the input when a currently executed mode is the non-driving mode or the driving mode.05-14-2015
20150293655METHOD FOR OUTPUTTING A MODIFIED AUDIO SIGNAL AND GRAPHICAL USER INTERFACES PRODUCED BY AN APPLICATION PROGRAM - According to various embodiments, a method for outputting a modified audio signal may be provided. The method may include: receiving from a user an input indicating an angle; determining a parameter for a head-related transfer function based on the received input indicating the angle; modifying an audio signal in accordance with the head-related transfer function based on the determined parameter; and outputting the modified audio signal.10-15-2015
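Entry 20150293655 takes an angle from the user, derives a parameter for a head-related transfer function from it, and modifies the audio accordingly. A very rough sketch follows; a real implementation would apply measured HRTFs, whereas here a simple constant-power pan derived from the angle stands in for the spatial filtering, purely for illustration.

```python
# Derive a spatialization parameter from a user-supplied angle and modify a
# mono signal accordingly (interaural level difference as an HRTF stand-in).

import math

def parameter_from_angle(angle_degrees):
    """Map an angle in degrees (0 = front, 90 = right) to a pan value in [-1, 1]."""
    return math.sin(math.radians(angle_degrees))

def apply_spatialization(mono_samples, pan):
    # Constant-power panning: pan = -1 is full left, +1 is full right.
    left_gain = math.cos((pan + 1) * math.pi / 4)
    right_gain = math.sin((pan + 1) * math.pi / 4)
    return [(s * left_gain, s * right_gain) for s in mono_samples]

pan = parameter_from_angle(30)                 # user indicates 30 degrees to the right
stereo = apply_spatialization([0.5, 0.25], pan)
print([(round(l, 3), round(r, 3)) for l, r in stereo])
```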
20150317123Techniques to Handle Multimedia Questions from Attendees in an Online Meeting - An attendee device in an online meeting displays content from a presenter device in a shared area of an attendee device display. The attendee device detects that the shared area is pressed continuously at a press point therein for a predetermined time and, in response, records a location of the press point in the shared area, records an image snapshot of the shared area, and records audio sensed by a local microphone. The attendee device also detects when the press point is released and, in response, ends the audio recording. The attendee device displays a dialog box that presents user selectable options to store locally, upload to the meeting server, and not retain any of the recorded snapshot and the recorded audio.11-05-2015
20150331657METHODS AND APPARATUS FOR AUDIO OUTPUT COMPOSITION AND GENERATION - According to the invention there is provided a method of generating an audio output comprising the steps of: (a) providing one or more indicia representative of an audio sequence on a user interface; (b) detecting one or more user interactions with the user interface in a physical space associated with the one or more indicia; (c) determining whether a timing of the one or more user interactions corresponds with a timing of the audio sequence represented by the one or more indicia; and (d) dependent on the determination, outputting the audio sequence as an audio output.11-19-2015
20150346912MESSAGE USER INTERFACES FOR CAPTURE AND TRANSMITTAL OF MEDIA AND LOCATION CONTENT - A device provides user interfaces for capturing and sending media, such as audio, video, or images, from within a message application. The device detects a movement of the device and in response, plays or records an audio message. The device detects a movement of the device and in response, sends a recorded audio message. The device removes messages from a conversation based on expiration criteria. The device shares a location with one or more message participants in a conversation.12-03-2015
20150370458RESPONDING TO USER INPUT INCLUDING PROVIDING USER FEEDBACK - An apparatus and a method for responding to user input by way of a user interface detect user input associated with a display during a static screen condition in which a static image provided by a source image provider is displayed on the display. In response to detecting the user input, the method and apparatus provide user feedback by incorporating a first type of change into the static image displayed on the display while the source image provider is in a reduced power mode in which standby power is available to the source image provider, and communicate control information to the source image provider. The method and apparatus receive from the source image provider updated image content based on the communicated control information.12-24-2015
20160004405SINGLE-CHANNEL OR MULTI-CHANNEL AUDIO CONTROL INTERFACE - A method of processing audio may include receiving, by a computing device, a plurality of real-time audio signals outputted by a plurality of microphones communicatively coupled to the computing device. The computing device may output to a display a graphical user interface (GUI) that presents audio information associated with the received audio signals. The one or more received audio signals may be processed based on a user input associated with the audio information presented via the GUI to generate one or more processed audio signals. The one or more processed audio signals may be output to, for example, one or more output devices such as speakers, headsets, and the like.01-07-2016
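The processing step in 20160004405 adjusts the received microphone signals according to user input made against the GUI and produces processed output. A minimal sketch, assuming the user input reduces to a gain and a mute flag per channel and that the processed output is a simple mix; all names and settings are illustrative.

```python
# Mix several real-time microphone channels according to per-channel settings
# chosen by the user in a GUI (gain and mute per channel).

def process_channels(frames_by_channel, channel_settings):
    """frames_by_channel: {name: [samples]}, channel_settings: {name: {...}}."""
    length = max(len(f) for f in frames_by_channel.values())
    mix = [0.0] * length
    for name, frames in frames_by_channel.items():
        settings = channel_settings.get(name, {"gain": 1.0, "muted": False})
        if settings["muted"]:
            continue
        for i, sample in enumerate(frames):
            mix[i] += sample * settings["gain"]
    return mix

frames = {"podium": [0.2, 0.4, 0.1], "audience": [0.05, 0.05, 0.6]}
settings = {"podium": {"gain": 1.0, "muted": False},
            "audience": {"gain": 0.3, "muted": False}}
print([round(s, 3) for s in process_channels(frames, settings)])
```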
20160019025METHOD AND APPARATUS FOR AN INTERACTIVE USER INTERFACE - A method, apparatus and computer program product are provided to facilitate user interaction with, such as modification of, respective audio objects. An example method may include causing a multimedia file to be presented that includes at least two images. The images are configured to provide animation associated with respective audio objects and representative of a direction of the respective audio objects. The method may also include receiving user input in relation to an animation associated with an audio object or the direction of the audio object represented by an animation. The method may further include causing replay of the audio object for which the user input was received to be modified.01-21-2016
20160026432Method to broadcast internet voice news - A voice internet system comprises voice websites and internet broadcasting devices. When an internet broadcasting device logs into a voice website through the Internet, it broadcasts voice news headlines. While a news headline is being broadcast, pushing a play button on a control panel of the internet broadcasting device or giving a voice command causes the corresponding news content under that headline to be broadcast by voice. After the news content has been broadcast, the remaining headlines are broadcast.01-28-2016
20160070342DISTRACTED BROWSING MODES - Approaches to enable a computing device, such as a phone or tablet computer, to determine when a user viewing the content is being distracted or is generally viewing the content with a sufficient level of irregularity, and present an audible representation of the content during the times when the user is deemed distracted. The determination of when the user is distracted or is otherwise viewing the content with irregularity can be performed using sensor data captured by one or more sensors of the computing device. For example, the computing device may analyze the image data captured by one or more cameras, such as by tracking the movement/location of eye pupils of the user and/or tracking the head movement of the user to detect when the user is distracted.03-10-2016
20160077795DISPLAY APPARATUS AND METHOD OF CONTROLLING THEREOF - A display apparatus and a method of controlling the display apparatus are provided. The method includes analyzing a display item displayed on a display screen, outputting audio corresponding to the analyzed display item, and in response to the audio being output, displaying a user interface (UI), which includes at least one icon for controlling a display item corresponding to the output audio, in an area of the display screen.03-17-2016
20160117142MULTIPLE-USER COLLABORATION WITH A SMART PEN SYSTEM - A central device concurrently receives handwriting gestures from a plurality of smart pen devices. Each set of handwriting gestures includes a sequence of spatial positions of the corresponding smart pen device with respect to a writing surface. Representations of the handwriting gestures are displayed on a display screen, and the representations show relative timing between the different sets of handwriting gestures. In one embodiment, a portion of the received handwriting gestures is outputted for display.04-28-2016
20160117147USER INTERFACE FOR RECEIVING USER INPUT - The present disclosure relates to user interfaces for receiving user input. In some examples, a device determines which user input technique a user has accessed most recently, and displays the corresponding user interface. In some examples, a device scrolls through a set of information on the display. When a threshold criterion is satisfied, the device displays an index object fully or partially overlaying the set of information. In some examples, a device displays an emoji graphical object, which is visually manipulated based on user input. The emoji graphical object is transmitted to a recipient. In some examples, a device displays paging affordances that enlarge and allow a user to select a particular page of a user interface. In some examples, the device displays user interfaces for various input methods, including multiple emoji graphical objects. In some examples, a keyboard is displayed for receiving user input.04-28-2016
20160132292Method for Controlling Voice Emoticon in Portable Terminal - Disclosed is a method for controlling voice emoticons in a portable terminal for providing a recipient portable terminal with various voice files according to the emotions and feelings of the user in place of text-based emoticons, thereby enabling the various voice files to be played and to express rich emotions compared to the existing monotonous and dry TTS-based voice files. The present invention comprises the steps of: displaying a voice emoticon call menu for calling a voice emoticon menu on one area of a touch screen; displaying the voice emoticon menu provided with a voice emoticon list after the voice emoticon call menu is user-selected; and transmitting a voice emoticon user-selected from the voice emoticon list to a recipient portable terminal in place of the voice of the user.05-12-2016
20160139877VOICE-CONTROLLED DISPLAY DEVICE AND METHOD OF VOICE CONTROL OF DISPLAY DEVICE - The present invention provides a voice-controlled display device, and a method of voice control of the display device, configured such that the user's inputted speech is compared with identification voice data assigned to each of the execution unit areas on a screen displayed through a display unit and, if identification voice data corresponding to the user's speech exists, an execution signal is generated for the execution unit area to which that identification voice data is assigned. This removes the inconvenience of the user having to learn the voice commands stored in the database and brings the convenience and intuitive simplicity of the user experience (UX) of conventional touchscreen control to voice control.05-19-2016
20160149767INTERNET OF THINGS DEVICE FOR REGISTERING USER SELECTIONS - A platform, apparatus, and method for Internet of Things implementations. For example, one embodiment of an apparatus comprises: a memory for storing program code and a microcontroller for executing the program code; a communication interface for coupling the microcontroller to a network; a plurality of input elements communicatively coupled to the microcontroller to detect user input; a slot for receiving a selection card, the selection card comprising a plurality of user-selectable items displayed thereon, wherein each of the input elements is associated with at least one of the user-selectable items displayed on the card when the selection card is inserted in the slot; and wherein, upon selection of a particular input element corresponding to a particular item, the microcontroller transmits an identification code for the item to a service over the network, the service identifying the item using the identification code and performing one or more operations responsive to selection of the item by the user.05-26-2016
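A minimal sketch of the selection-card mapping might look like the following Python; the CARD_LAYOUT table, button identifiers, item codes, and JSON payload shape are all assumptions made for illustration.

    import json

    # Mapping of physical input elements to the items printed on the inserted
    # selection card (layout, item names, and codes are illustrative).
    CARD_LAYOUT = {
        "button_1": {"item": "sparkling water", "code": "SKU-1001"},
        "button_2": {"item": "orange juice",    "code": "SKU-1002"},
    }

    def on_button_press(button_id: str) -> str:
        """Build the message the microcontroller would send to the network service."""
        entry = CARD_LAYOUT[button_id]
        payload = {"item_identification_code": entry["code"]}
        # A real device would send this over its communication interface.
        return json.dumps(payload)

    print(on_button_press("button_2"))   # -> {"item_identification_code": "SKU-1002"}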
20160149959Controlling a PBX Phone Call Via a Client Application - In one or more embodiments, a hit test thread that is separate from the main thread (e.g., the user interface thread) is utilized for hit testing on web content. Using a separate thread for hit testing can allow targets to be quickly ascertained. In cases where the appropriate response is handled by a separate thread, such as a manipulation thread used for touch manipulations such as panning and pinch zooming, the manipulation can occur without blocking on the main thread. This results in a response time that is consistently quick even on low-end hardware across a variety of scenarios.05-26-2016
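The separate-hit-test-thread idea can be approximated in Python with a worker thread that consumes hit-test requests from a queue while the main thread stays free; the target rectangles and queue-based protocol below are illustrative assumptions, not the patented implementation.

    import queue
    import threading

    # Simplified layout: named rectangles that can be hit-tested off the UI thread.
    targets = {"link": (0, 0, 100, 30), "button": (0, 40, 100, 80)}
    requests = queue.Queue()   # hit-test requests posted by the UI/manipulation side

    def hit_test_worker() -> None:
        # Runs on its own thread so hit testing never blocks the main (UI) thread.
        while True:
            x, y = requests.get()
            for name, (left, top, right, bottom) in targets.items():
                if left <= x <= right and top <= y <= bottom:
                    print(f"hit: {name}")
                    break
            requests.task_done()

    threading.Thread(target=hit_test_worker, daemon=True).start()
    requests.put((10, 50))     # e.g. a touch point during a pan or pinch zoom
    requests.join()            # the demo waits here; a UI thread would not block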
20160162260AUDIO LOCALIZATION TECHNIQUES FOR VISUAL EFFECTS - Techniques for improved audio localization for visual effects are described. In one embodiment, for example, an apparatus may comprise a processor circuit and an audio management module, and the audio management module may be operable by the processor circuit to determine a position of a user interface element in a presentation area, determine an audio effect corresponding to the user interface element, determine audio location information for the audio effect based on the position of the user interface element, the audio location information defining an apparent position for the audio effect, and generate audio playback information for the audio effect based on the audio location information. Other embodiments are described and claimed.06-09-2016
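One common way to realize an "apparent position" for a UI element's sound is constant-power stereo panning keyed to the element's horizontal position. The Python sketch below shows that idea under the assumption of a simple two-channel output; the channel_gains helper is an illustrative name, not the apparatus's actual audio management module.

    import math

    def channel_gains(x: float, screen_width: float) -> tuple:
        """Map a UI element's horizontal position to left/right channel gains
        (constant-power pan) so its sound appears to come from where it sits."""
        position = max(0.0, min(1.0, x / screen_width))   # 0 = far left, 1 = far right
        angle = position * math.pi / 2
        return math.cos(angle), math.sin(angle)           # (left gain, right gain)

    left, right = channel_gains(x=1600, screen_width=1920)
    print(f"left gain {left:.2f}, right gain {right:.2f}")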
20160170626USER FRIENDLY INTERFACE06-16-2016
20160170709DEVICE AND METHOD FOR CONTROLLING SOUND OUTPUT06-16-2016
20160179211SYSTEMS AND METHODS FOR TRIGGERING ACTIONS BASED ON TOUCH-FREE GESTURE DETECTION06-23-2016
20160179456Spontaneous Collaboration Apparatus, System and Methods thereof06-23-2016
20160196110Multimodal State Circulation07-07-2016
20160203797MULTIPLE PRIMARY USER INTERFACES07-14-2016
20160253050SYSTEM AND METHOD FOR AUDIO AND TACTILE BASED BROWSING09-01-2016
20160378431Portable media device with audio prompt menu - Once an audio prompt has been stored on the portable media device, the audio prompt menu is played. An input from a user of the portable media device is then received in response to the audio prompt menu. A command is subsequently transmitted to a remote computer. The command requests the remote computer to perform an action based on the user's input. The portable media device includes a portable media device housing containing a processor, a power source, a user interface device, communications circuitry, at least one input/output (i/o) port, and a memory. The memory includes an operating system, a media database, communication procedures for communicating with a remote computer, and instructions for performing the above-described method.12-29-2016
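The prompt-then-command flow could be sketched as follows in Python; the audio_prompt_menu contents, key mapping, and command strings are illustrative assumptions rather than the device's actual menu.

    # Assumed shape of the stored audio prompt menu on the portable device.
    audio_prompt_menu = [
        ("1", "prompt_play_podcast.mp3", "PLAY_PODCAST"),
        ("2", "prompt_sync_library.mp3", "SYNC_LIBRARY"),
    ]

    def play_menu() -> None:
        for key, clip, _command in audio_prompt_menu:
            print(f"[audio] playing {clip} (press {key})")

    def command_for(user_key: str) -> str:
        # Translate the user's response to the prompt menu into a remote command.
        for key, _clip, command in audio_prompt_menu:
            if key == user_key:
                return command
        raise ValueError("no prompt matches that input")

    play_menu()
    print("transmit to remote computer:", command_for("2"))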
20160379395METHODS AND DEVICES FOR PRESENTING DYNAMIC INFORMATION GRAPHICS - The present disclosure relates to systems, methods, electronic devices, and applications for presenting a user interface including a dynamic information graphic. In one embodiment, a method includes detecting an operational mode of the device for conducting a communication session, and presenting a user interface for the communication session including a dynamic information graphic, wherein the dynamic information graphic includes one or more graphical elements based on the operational mode. The method may also include detecting one or more parameters for the communication session and updating presentation of the user interface and display of the dynamic information graphic based on the one or more parameters, wherein presentation and configuration of the dynamic information graphic provides a visual representation based on device actions during the operational mode. Another embodiment is directed to a device configured to present a dynamic information graphic.12-29-2016
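A hedged Python sketch of choosing graphical elements from the operational mode and session parameters appears below; the mode names, parameter keys, and element labels are assumptions for illustration only.

    def graphic_elements(mode: str, params: dict) -> list:
        """Choose elements of the dynamic information graphic from the device's
        operational mode and the current communication-session parameters."""
        elements = []
        if mode == "speakerphone":
            elements.append("room-level waveform")
        elif mode == "handset":
            elements.append("compact level meter")
        if params.get("muted"):
            elements.append("mute badge")
        elements.append(f"signal bars: {params.get('signal', 0)}")
        return elements

    print(graphic_elements("speakerphone", {"muted": True, "signal": 3}))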
20190146752DISPLAY APPARATUS AND CONTROL METHOD THEREOF05-16-2019
20220139396METHODS AND USER INTERFACES FOR VOICE-BASED CONTROL OF ELECTRONIC DEVICES - The present disclosure generally relates to voice control for electronic devices. In some embodiments, the method includes, in response to detecting a plurality of utterances, associating a corresponding plurality of operations with a first stored operation set; detecting a second set of one or more inputs corresponding to a request to perform the operations associated with the first stored operation set; and performing the plurality of operations associated with the first stored operation set, in the respective order.05-05-2022
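The stored-operation-set behavior resembles recording and replaying a named macro of ordered operations. The Python sketch below illustrates that reading, with stored_sets, record_operation_set, and run_operation_set as assumed names and a trivial utterance-to-operation mapping.

    # A stored operation set behaves like a named macro of ordered operations.
    stored_sets = {}

    def record_operation_set(name: str, utterances: list) -> None:
        # Associate the operations derived from the detected utterances with one
        # stored set, preserving their order (utterance -> operation is assumed).
        stored_sets[name] = [f"perform: {u}" for u in utterances]

    def run_operation_set(name: str) -> None:
        # A later input replays the stored operations in their respective order.
        for operation in stored_sets[name]:
            print(operation)

    record_operation_set("movie night", ["dim lights", "close blinds", "turn on tv"])
    run_operation_set("movie night")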
