Class / Patent application number | Description | Number of patent applications / Date published |
434116000 | Converting information to sound | 8 |
20100112530 | Real-time interpreting systems & methods - Systems, methods and computer program products for the provision and enabling of multi-language and sign language services in real-time are disclosed. System components include a call processing computer/server that receives requests and is in communication with other system components, a video server/computer for processing and relaying video images, user interface devices for making requests and receiving data transmitted between system components, and service provider devices for responding to or satisfying requests received. | 05-06-2010 |
20110207094 | METHOD FOR TRAINING SPEECH PERCEPTION AND TRAINING DEVICE - The speech perception of hearing-aid wearers and wearers of other hearing devices is intended to be improved. To this end, a method for training the speech perception of a person, who is wearing a hearing device, is provided, in which a first speech component is presented acoustically and the latter is identified by the person wearing the hearing device. Subsequently, there is automated modification of the acoustic presentation of the presented speech component and the aforementioned steps are repeated with the modified presentation until, if the identification is incorrect, a prescribed maximum number of repetitions has been reached. Otherwise, if the first speech component is identified correctly or if the number of incorrect identifications of the first speech component is one more than the maximum repetition number, a second speech component is presented acoustically. This allows a plurality of speech components to be trained in respectively a number of steps. | 08-25-2011 |
20110300516 | Tactile Tile Vocalization - Braille symbols are automatically read aloud, to aid in learning or using Braille. More generally, a tile which bears a tactile symbol and a corresponding visual symbol is placed in a sensing area, automatically distinguished from other tiles, and vocalized. The tile is sensed and distinguished from other tiles based on various signal mechanisms, or by computer vision analysis of the tile's visual symbol. Metadata is associated with the tile. Additional placed tiles are similarly sensed, identified, and vocalized. When multiple tiles are placed in the sensing area, they are vocalized individually, and an audible phrase spelled by their arrangement of tactile symbols is also produced. A lattice is provided with locations for receiving tiles. Metadata are associated with lattice locations. Tile placement is used to control an application program which responds to tile identifications. | 12-08-2011 |
20130209970 | Method for Training Speech Recognition, and Training Device - Speech recognition is improved for wearers of hearing aids and other hearing devices by training the speech recognition. A first speech element is acoustically presented, and the element is identified by the person wearing the hearing device. Subsequently, the acoustic presentation of the presented speech element is automatically changed and the aforementioned steps are repeated. | 08-15-2013 |
20140272815 | APPARATUS AND METHOD FOR PERFORMING ACTIONS BASED ON CAPTURED IMAGE DATA - An apparatus and method are provided for performing one or more actions based on triggers detected within captured image data. In one implementation, a method is provided for audibly reading text retrieved from a captured image. According to the method, real-time image data is captured from an environment of a user, and an existence of a trigger is determined within the captured image data. In one aspect, the trigger may be associated with a desire of the user to hear text read aloud, and the trigger identifies an intermediate portion of the text a distance from a level break in the text. The method includes performing a layout analysis on the text to identify the level break associated with the trigger, and reading aloud text beginning from the level break associated with the trigger. | 09-18-2014 |
20160171907 | IMAGING GLOVES INCLUDING WRIST CAMERAS AND FINGER CAMERAS | 06-16-2016 |
20160379054 | MULTICOMPONENT OPTICAL DEVICE FOR VISUAL AND AUDIBLE TRANSLATION AND RECOGNITION - The present disclosure relates generally to multicomponent optical devices having a space within the device. In various embodiments, an optical device comprises a first posterior component having an anterior surface, a posterior support component, and an anterior component having a posterior surface. An optical device can also comprise an anterior skirt. The first posterior component and the anterior skirt can comprise gas-permeable optical materials. An optical device also comprises a primary space between the posterior surface and the anterior surface, with the primary space configured to permit diffusion of a gas from a perimeter of the primary space through the space and across the anterior surface of the first posterior component. A method of forming a multicomponent optical device having a space is also provided. Multicomponent optical devices comprise contact lenses and/or spectacles, alone or in combination, that provide for the ability to translate languages by visual and/or audio means and/or may be able to recognize locations, objects, shapes and the like and provide an audio or visual description of the object to a user of the multicomponent optical device. | 12-29-2016 |
20220139258 | SPATIAL WEATHER MAP FOR THE VISUALLY IMPAIRED - A method, computer system, and computer program product for providing spatial weather map data to the visually impaired are provided. The embodiments may include receiving a weather map image that contains weather information and a subject of interest, wherein the subject of interest comprises a user current location, a business location, or an asset location. The embodiments may also include receiving a request from a user for current-time, historical, or forecast data depicted in the weather map and a request for a geo-location of the subject of interest. The embodiments may further include generating sounds corresponding with the requested weather map data, wherein the generated sounds appear to the user to originate in a particular direction and at a particular distance from the subject of interest, and wherein the apparent location of the sounds corresponds with a weather map feature including the location of approaching or surrounding weather. | 05-05-2022 |
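Several of the listed applications map non-audio information onto sound. As one illustration of the idea behind application 20220139258 (sounds that appear to originate in a particular direction and at a particular distance from the subject of interest), the sketch below maps a weather feature's bearing and distance to stereo gains. The function name, the constant-power panning law, and the inverse-distance attenuation are all illustrative assumptions, not details taken from the patent:

```python
import math

def spatialize(bearing_deg, distance_km, ref_km=1.0):
    """Hypothetical sketch: map a weather feature's bearing and distance
    (relative to the subject of interest) to left/right stereo gains so the
    generated sound seems to come from that direction and range.

    Assumes constant-power panning for direction and simple
    inverse-distance attenuation for range.
    """
    # Fold the compass bearing into [-90, 90]: 0 = ahead, -90 = left, +90 = right.
    pan = max(-90.0, min(90.0, ((bearing_deg + 180.0) % 360.0) - 180.0))
    # Constant-power pan: theta sweeps 0..pi/2 as pan sweeps left..right.
    theta = math.radians((pan + 90.0) / 2.0)
    left, right = math.cos(theta), math.sin(theta)
    # Nearer features sound louder; clamp so gain never exceeds 1.
    gain = ref_km / max(distance_km, ref_km)
    return left * gain, right * gain

# A storm cell due east (bearing 90 degrees) at 2 km pans hard right
# at roughly half volume.
left_gain, right_gain = spatialize(90.0, 2.0)
```

In a real renderer these gains would feed a stereo or binaural synthesis stage; the sketch only shows the geometric mapping step the abstract describes.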