Patent application number | Description | Published |
20120171668 | ENHANCED DEPOSITION OF CHROMOGENS UTILIZING PYRIMIDINE ANALOGS - This disclosure relates to compositions that enhance the deposition of detectable moieties on tissue samples, methods utilizing these compositions, and kits including these compositions. The compositions include a deposition enhancer having a formula | 07-05-2012 |
20120219948 | APPLICATION OF QUANTUM DOTS FOR NUCLEAR STAINING - Embodiments of a system, method, and kit for visualizing a nucleus are disclosed. A tissue sample is pretreated with a protease to permeabilize the nucleus, and then incubated with a nanoparticle/DNA-binding moiety conjugate. The DNA-binding moiety includes at least one DNA-binding molecule. The conjugate binds to DNA within the nucleus, and the nanoparticle is visualized, thereby visualizing the nucleus. Computer and image analysis techniques are used to evaluate nuclear features such as chromosomal distribution, ploidy, shape, size, texture features, and/or contextual features. The method may be used in combination with other multiplexed tests on the tissue sample, including fluorescence in situ hybridization. Kits for performing the method include a protease enzyme composition, a nanoparticle/DNA-binding moiety conjugate, and a reaction buffer. | 08-30-2012 |
20130034854 | Antibody-Nanoparticle Conjugates and Methods for Making and Using Such Conjugates - Disclosed herein are antibody-nanoparticle conjugates that include two or more nanoparticles (such as gold, palladium, platinum, silver, copper, nickel, cobalt, iridium, or an alloy of two or more thereof) directly linked to an antibody or fragment thereof through a metal-thiol bond. Methods of making the antibody-nanoparticle conjugates disclosed herein include reacting an arylphosphine-nanoparticle composite with a reduced antibody to produce an antibody-nanoparticle conjugate. Also disclosed herein are methods for detecting a target molecule in a sample that include using an antibody-nanoparticle conjugate (such as the antibody-nanoparticle conjugates described herein) and kits for detecting target molecules utilizing the methods disclosed herein. | 02-07-2013 |
20130109019 | HAPTEN CONJUGATES FOR TARGET DETECTION | 05-02-2013 |
20150024405 | ENHANCED DEPOSITION OF CHROMOGENS UTILIZING PYRIMIDINE ANALOGS - This disclosure relates to compositions that enhance the deposition of detectable moieties on tissue samples, methods utilizing these compositions, and kits including these compositions. The compositions include a deposition enhancer having a formula | 01-22-2015 |
20150267262 | APPLICATION OF QUANTUM DOTS FOR NUCLEAR STAINING - Embodiments of a system, method, and kit for visualizing a nucleus are disclosed. A tissue sample is pretreated with a protease to permeabilize the nucleus, and then incubated with a nanoparticle/DNA-binding moiety conjugate. The DNA-binding moiety includes at least one DNA-binding molecule. The conjugate binds to DNA within the nucleus, and the nanoparticle is visualized, thereby visualizing the nucleus. Computer and image analysis techniques are used to evaluate nuclear features such as chromosomal distribution, ploidy, shape, size, texture features, and/or contextual features. The method may be used in combination with other multiplexed tests on the tissue sample, including fluorescence in situ hybridization. Kits for performing the method include a protease enzyme composition, a nanoparticle/DNA-binding moiety conjugate, and a reaction buffer. | 09-24-2015 |
Patent application number | Description | Published |
20120283750 | MENISCUS REPAIR - Methods for repairing a meniscus, and particularly a torn meniscus. A method of repairing a meniscus may include using a suture passer to pass a suturing element from the region between the superior surface of the meniscus and the femoral condyle, through the meniscus tissue, into the region between the inferior surface of the meniscus and the tibial plateau, across the inferior surface of the meniscus, and back to the superior surface of the meniscus, without deeply penetrating the posterior capsular region of the knee. Equivalently, the suture element may be passed from the inferior surface of the meniscus to the superior surface and back to the inferior surface. | 11-08-2012 |
20120283753 | SUTURE PASSER DEVICES AND METHODS - Devices, systems and methods for passing a suture. In general, described herein are suturing devices, such as suture passers, as well as methods of suturing tissue. These suture passing devices are dual deployment suture passers in which a first distal jaw member is moveable at an angle with respect to the longitudinal axis of the elongate body of the device and the second distal jaw member is retractable proximally to the distal end region of the elongate body and/or the first jaw member. Methods of suturing tissue using a dual deployment suture passer are also described. | 11-08-2012 |
20120283754 | SUTURE PASSER DEVICES AND METHODS - Devices, systems and methods for passing a suture. In general, described herein are suturing devices, such as suture passers, as well as methods of suturing tissue. These suture passing devices may include dual deployment suture passers in which a first distal jaw member is movable at an angle with respect to the longitudinal axis of the elongate body of the device and the second distal jaw member is retractable proximally to the distal end region of the elongate body and/or the first jaw member. Also described herein are suture passers in which the tissue penetrator passing the suture travels in an approximately sigmoidal pathway, with the distal end of the tissue penetrator extending distally from one jaw of the device. | 11-08-2012 |
20130331865 | SUTURE PASSER DEVICES AND METHODS - Devices, systems and methods for passing a suture. In general, described herein are suturing devices, such as suture passers, as well as methods of suturing tissue. These suture passing devices may include dual deployment suture passers in which a first distal jaw member is moveable at an angle with respect to the longitudinal axis of the elongate body of the device and the second distal jaw member is retractable proximally to the distal end region of the elongate body and/or the first jaw member. Also described herein are suture passers in which the tissue penetrator passing the suture travels in an approximately sigmoidal pathway, with the distal end of the tissue penetrator extending distally from one jaw of the device. | 12-12-2013 |
20140074157 | PRE-TIED SURGICAL KNOTS FOR USE WITH SUTURE PASSERS - Sutures with pre-tied knots for use in percutaneous surgical procedures. Described herein are pre-tied sutures and methods of using them that may be used with a suture passer for percutaneously suturing tissue, including percutaneously passing and securing a loop of suture around a tear in a meniscus tissue of the knee. A suture with a pre-tied knot may include a length of suture and a knot body on the length of suture, and a leader snare tied to the length of suture by the knot body. The leader snare typically has an opening loop (bight or snare) through which an end of the suture may be passed. The tail of the leader snare may be pulled to remove the leader snare from the knot body and draw the end of the suture through the knot body to close the knot, which can then be tightened to secure the tissue. | 03-13-2014 |
20140236192 | SUTURE PASSER WITH RADIUSED UPPER JAW - Described herein are suture passers that may be used for repair of the meniscus of the knee. These suture passers typically include an elongate body having a pair of arms. One or more of the arms may be radiused at the distal end region relative to the long axis of the device, to better fit between a target tissue and a body non-target tissue (e.g., the curvature of the femoral condyle). The arms may form a distal-facing opening that is configured to fit the target tissue. One arm may be movable in the axial direction (e.g., the direction of the long axis of the device), while the other arm may be bendable. A tissue penetrator may be housed within one of the arms to extend across the distal opening between the arms. Thus, a suture may be passed from a first side of the tissue to a second side. | 08-21-2014 |
20140276981 | SUTURE PASSERS AND METHODS OF PASSING SUTURE - Suture passer devices, including suture passers configured with an axially slideable jaw that includes a tissue-penetrating distal end region. Also described are suture passers including jaws housing tissue penetrating needles to pass suture that are substantially thin. Methods of using such devices to pass a suture through tissue are provided. | 09-18-2014 |
20150088163 | ARTHROSCOPIC KNOT PUSHER AND SUTURE CUTTER - Knot pushers and suture cutter apparatuses to be used arthroscopically, for example, in an arthroscopic knee surgery may be operated with a single control to both lock the suture within the distal end of the apparatus and cut the suture once the knot has been pushed to the appropriate location. The apparatus may include a safety lock preventing deployment of the cutter until the safety lock (e.g., cutter release) has been released. | 03-26-2015 |
20150142022 | ARTHROSCOPIC KNOT PUSHER AND SUTURE CUTTER - Knot pushers and suture cutter apparatuses to be used arthroscopically, for example, in an arthroscopic knee surgery may be operated with a single control to both lock the suture within the distal end of the apparatus and cut the suture once the knot has been pushed to the appropriate location. The apparatus may include a safety lock preventing deployment of the cutter until the safety lock (e.g., cutter release) has been released. | 05-21-2015 |
20150196294 | AUTOMATICALLY RELOADING SUTURE PASSER DEVICES AND METHODS - Suture passers and methods of use. Described herein are suture passers preloaded with suture, including cartridges that couple to a suture passer to form a loaded suture passer; the suture passer may be operated to pass one or more lengths of suture without having to be manually reloaded. In particular, described herein are preloaded and automatically reloading apparatuses. | 07-16-2015 |
20150209029 | SUTURE PASSER WITH RADIUSED UPPER JAW - Described herein are suture passers that may be used for repair of the meniscus of the knee. These suture passers typically include an elongate body having a pair of arms. One or more of the arms may be radiused at the distal end region relative to the long axis of the device, to better fit between a target tissue and a body non-target tissue (e.g., the curvature of the femoral condyle). The arms may form a distal-facing opening that is configured to fit the target tissue. One arm may be movable in the axial direction (e.g., the direction of the long axis of the device), while the other arm may be bendable. A tissue penetrator may be housed within one of the arms to extend across the distal opening between the arms. Thus, a suture may be passed from a first side of the tissue to a second side. | 07-30-2015 |
20150297215 | SUTURE PASSERS ADAPTED FOR USE IN CONSTRAINED REGIONS - Described herein are suture passer apparatuses (devices and systems) that may be used to suture tissue within a narrow, confined space. In particular, described herein are suture passers having an elongate body and a lower jaw member that houses a tissue penetrator that is adapted to extend laterally from the lower jaw member and be deflected by a second jaw member. The second jaw member typically deflects the tissue penetrator either distally or proximally. | 10-22-2015 |
20150313589 | SUTURE PASSERS ADAPTED FOR USE IN CONSTRAINED REGIONS - Described herein are suture passer apparatuses (devices and systems) that may be used to suture tissue within a narrow, confined space. In particular, described herein are suture passers having an elongate body and a bent lower jaw member that houses a tissue penetrator that is adapted to extend laterally through the bend region, and then be laterally deflected from the lower jaw member to a second (e.g., bendable or pivoting) upper jaw member. | 11-05-2015 |
Patent application number | Description | Published |
20140132499 | DYNAMIC ADJUSTMENT OF USER INTERFACE - Embodiments related to dynamically adjusting a user interface based upon depth information are disclosed. For example, one disclosed embodiment provides a method including receiving depth information of a physical space from a depth camera, locating a user within the physical space from the depth information, determining a distance between the user and a display device from the depth information, and adjusting one or more features of a user interface displayed on the display device based on the distance. | 05-15-2014 |
20140173524 | TARGET AND PRESS NATURAL USER INPUT - A cursor is moved in a user interface based on a position of a joint of a virtual skeleton modeling a human subject. If a cursor position engages an object in the user interface, and all immediately-previous cursor positions within a mode-testing period are located within a timing boundary centered around the cursor position, operation in a pressing mode commences. If a cursor position remains within a constraining shape and exceeds a threshold z-distance while in the pressing mode, the object is activated. | 06-19-2014 |
20140282223 | NATURAL USER INTERFACE SCROLLING AND TARGETING - A user interface is output to a display device. If an element of a human subject is in a first conformation, the user interface scrolls responsive to movement of the element. If the element is in a second conformation, different than the first conformation, objects of the user interface are targeted responsive to movement of the element without scrolling the user interface. | 09-18-2014 |
20150035750 | ERGONOMIC PHYSICAL INTERACTION ZONE CURSOR MAPPING - Users move their hands in a three dimensional (“3D”) physical interaction zone (“PHIZ”) to control a cursor in a user interface (“UI”) shown on a computer-coupled 2D display such as a television or monitor. The PHIZ is shaped, sized, and positioned relative to the user to ergonomically match the user's natural range of motions so that cursor control is intuitive and comfortable over the entire region on the UI that supports cursor interaction. A motion capture system tracks the user's hand so that the user's 3D motions within the PHIZ can be mapped to the 2D UI. Accordingly, when the user moves his or her hands in the PHIZ, the cursor correspondingly moves on the display. Movement in the z direction (i.e., back and forth) in the PHIZ allows for additional interactions to be performed such as pressing, zooming, 3D manipulations, or other forms of input to the UI. | 02-05-2015 |
20150193107 | GESTURE LIBRARY FOR NATURAL USER INPUT - A method to decode natural user input from a human subject. The method includes detection of a gesture and concurrent grip state of the subject. If the grip state is closed during the gesture, then a user-interface (UI) canvas of the computer system is transformed based on the gesture. If the grip state is open during the gesture, then a UI object arranged on the UI canvas is activated based on the gesture. | 07-09-2015 |
20150193124 | VISUAL FEEDBACK FOR LEVEL OF GESTURE COMPLETION - Embodiments are disclosed that relate to providing feedback for a level of completion of a user gesture via a cursor displayed on a user interface. One disclosed embodiment provides a method comprising displaying a cursor having a visual property and moving a screen-space position of the cursor responsive to the user gesture. The method further comprises changing the visual property of the cursor in proportion to a level of completion of the user gesture. In this way, the level of completion of the user gesture may be presented to the user in a location to which the attention of the user is directed during performance of the gesture. | 07-09-2015 |
20150199017 | COORDINATED SPEECH AND GESTURE INPUT - A method to be enacted in a computer system operatively coupled to a vision system and to a listening system. The method applies natural user input to control the computer system. It includes the acts of detecting verbal and non-verbal touchless input from a user of the computer system, selecting one of a plurality of user-interface objects based on coordinates derived from the non-verbal, touchless input, decoding the verbal input to identify a selected action from among a plurality of actions supported by the selected object, and executing the selected action on the selected object. | 07-16-2015 |
20150370349 | ERGONOMIC PHYSICAL INTERACTION ZONE CURSOR MAPPING - Users move their hands in a three dimensional (“3D”) physical interaction zone (“PHIZ”) to control a cursor in a user interface (“UI”) shown on a computer-coupled 2D display such as a television or monitor. The PHIZ is shaped, sized, and positioned relative to the user to ergonomically match the user's natural range of motions so that cursor control is intuitive and comfortable over the entire region on the UI that supports cursor interaction. A motion capture system tracks the user's hand so that the user's 3D motions within the PHIZ can be mapped to the 2D UI. Accordingly, when the user moves his or her hands in the PHIZ, the cursor correspondingly moves on the display. Movement in the z direction (i.e., back and forth) in the PHIZ allows for additional interactions to be performed such as pressing, zooming, 3D manipulations, or other forms of input to the UI. | 12-24-2015 |
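The ergonomic PHIZ entries above (20150035750 and 20150370349) describe normalizing a tracked 3D hand position within a user-relative interaction zone to 2D cursor coordinates, with the z direction reserved for press-style interactions. A minimal sketch of that kind of mapping follows; the class name, zone dimensions, and thresholds are illustrative assumptions, not details from the patent applications.

```python
from dataclasses import dataclass

@dataclass
class Phiz:
    """An axis-aligned 3D physical interaction zone, in meters,
    positioned relative to the user (dimensions are illustrative)."""
    x_min: float; x_max: float
    y_min: float; y_max: float
    z_min: float; z_max: float  # z: hand travel toward the display

    def map_to_cursor(self, hand, screen_w, screen_h):
        """Map a 3D hand position (x, y, z) to 2D screen coordinates.
        Returns (x_px, y_px, press_depth) with press_depth in [0, 1]."""
        def norm(v, lo, hi):
            # clamp to the zone, then normalize to [0, 1]
            return min(1.0, max(0.0, (v - lo) / (hi - lo)))
        x_px = norm(hand[0], self.x_min, self.x_max) * screen_w
        # screen y grows downward, so invert the vertical axis
        y_px = (1.0 - norm(hand[1], self.y_min, self.y_max)) * screen_h
        press = norm(hand[2], self.z_min, self.z_max)
        return x_px, y_px, press

phiz = Phiz(-0.3, 0.3, -0.2, 0.2, 0.0, 0.25)
x, y, press = phiz.map_to_cursor((0.0, 0.0, 0.125), 1920, 1080)
# hand at the zone's center maps to the screen center, half-pressed
```

Clamping before normalizing keeps the cursor on-screen even when the hand drifts outside the zone, which matches the described goal of covering the entire cursor-interactive region of the UI comfortably.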
Patent application number | Description | Published |
20090055739 | CONTEXT-AWARE ADAPTIVE USER INTERFACE - Technologies, systems, and methods for context-aware adaptation of user interface where monitored context includes ambient environmental and temporal conditions, user state, and the like. For example, when a user has been using an application for a long time, ambient lighting conditions are becoming darker, and the user is inferred to be experiencing increased eye strain and fatigue, the user interface may be adapted by increasing the contrast. Such adaptation may be based on rules, pre-defined or otherwise. The processing of sensor data typically results in context codes and detection of context patterns that may be used to adapt user interface for an optimized user experience. | 02-26-2009 |
20100318293 | RETRACING STEPS - Techniques for creating breadcrumbs for a trail of activity are described. The trail of activity may be created by recording movement information based on inferred actions of walking, not walking, or changing floor levels. The movement information may be recorded with an accelerometer and a pressure sensor. A representation of a list of breadcrumbs may be visually displayed on a user interface of a mobile device, in a reverse order to retrace steps. In some implementations, a compass may additionally or alternatively be used to collect directional information relative to the earth's magnetic poles. | 12-16-2010 |
20110083089 | MONITORING POINTER TRAJECTORY AND MODIFYING DISPLAY INTERFACE - Apparatus and methods for improving touch-screen interface usability and accuracy by determining the trajectory of a pointer as it approaches the touch-screen and modifying the touch-screen display accordingly. The system may predict an object on the display the user is likely to select next. The system may designate this object as a Designated Target Object, or DTO. The system may modify the appearance of the DTO by, for example, changing the size of the DTO, or by changing its shape, style, coloring, perspective, positioning, etc. | 04-07-2011 |
20110173204 | ASSIGNING GESTURE DICTIONARIES - Techniques for assigning a gesture dictionary in a gesture-based system to a user comprise capturing data representative of a user in a physical space. In a gesture-based system, gestures may control aspects of a computing environment or application, where the gestures may be derived from a user's position or movement in a physical space. In an example embodiment, the system may monitor a user's gestures and select a particular gesture dictionary in response to the manner in which the user performs the gestures. The gesture dictionary may be assigned in real time with respect to the capture of the data representative of a user's gesture. The system may generate calibration tests for assigning a gesture dictionary. The system may track the user during a set of short gesture calibration tests and assign the gesture dictionary based on a compilation of the data captured that represents the user's gestures. | 07-14-2011 |
20120105257 | Multimodal Input System - The subject disclosure relates to user input into a computer system, and a technology by which one or more users interact with a computer system via a combination of input modalities. When the input data of two or more input modalities are related, they are combined to interpret an intended meaning of the input. For example, speech when combined with one input gesture has one intended meaning, e.g., convert the speech to verbatim text for consumption by a program, while the exact speech when combined with a different input gesture has a different meaning, e.g., convert the speech to a command that controls the operation of that same program. | 05-03-2012 |
20120109868 | Real-Time Adaptive Output - The subject disclosure relates to a technology by which output data in the form of audio, visual, haptic, and/or other output is automatically selected and tailored by a system, including adapting in real time, to address one or more users' specific needs, context and implicit/explicit intent. State data and preference data are input into a real time adaptive output system that uses the data to select among output modalities, e.g., to change output mechanisms, add/remove output mechanisms, and/or change rendering characteristics. The output may be rendered on one or more output mechanisms to a single user or multiple users, including via a remote output mechanism. | 05-03-2012 |
20120124126 | CONTEXTUAL AND TASK FOCUSED COMPUTING - Concepts and technologies are described herein for contextual and task-focused computing. In accordance with the concepts and technologies disclosed herein, a discovery engine analyzes application data describing applications, recognizes tasks associated with the applications, and stores task data identifying and describing the tasks in a data storage location. The task data is searchable by search engines, indexing and search services, and task engines configured to provide tasks to one or more client devices operating alone or in a synchronized manner, the tasks being provided on demand or based upon activity associated with the one or more client devices. A task engine receives or obtains contextual data describing context associated with the client devices and/or social networking data associated with one or more users of the client devices. Based upon the contextual data and/or the social networking data, the task engine identifies one or more relevant tasks and provides to the client devices information for accessing the relevant tasks, or packaged data corresponding to the relevant tasks. | 05-17-2012 |
20120143681 | ROOM-BASED COMPUTING ENVIRONMENTS - Concepts and technologies for creating and accessing room-based computing environments are disclosed. Resources are categorized and/or bundled into categories or bundles of resources. Resources are associated with the room-based computing environment and various data relating to the resources is stored, including data relating to permissions for accessing the resources. Upon detecting access of the room-based computing environment, a room engine can authenticate an entity associated with the access and determine what contents of the room-based computing environment are to be presented based upon the permissions information and/or other considerations. The environment is generated and presented to the entity via one or more user interfaces. | 06-07-2012 |
20120150784 | Immersive Planning of Events Including Vacations - The subject disclosure is directed towards a web service or the like that assists users in generating a plan, such as a vacation plan. In one aspect, a user chooses a model that generates a plan, including by selecting content objects (e.g., found by searching) corresponding to plan objects. Selection is based upon user input, along with rules, constraints and/or equations associated with the model. A presentation mechanism produces a presentation (e.g., an audiovisual experience) from the content/plan objects, such as a linear narrative, a timeline, a schedule, a calendar, a gallery, a list, and/or a map. The plan may be annotated with annotation data. The plan may be interacted with to re-plan it, and may be saved and/or provided to another user for viewing and/or re-planning. Plan versions may be compared to see the changes made. | 06-14-2012 |
20120150787 | Addition of Plan-Generation Models and Expertise by Crowd Contributors - The subject disclosure is directed towards a web service that maintains a set of models used to generate plans, such as vacation plans, in which the set of models includes models that are authored by crowd contributors via the service. The models include rules, constraints and/or equations, and may be text-based and declarative such that any author can edit an existing model or combination of existing models into a new model. Users can access the models to generate a plan according to user parameters, view a presentation of that plan, and interact to provide new parameters to the model and/or with objects in the plan to modify the plan and view a presentation of the modified plan. | 06-14-2012 |
20120151348 | Using Cinematographic Techniques for Conveying and Interacting with Plan Sagas - The subject disclosure is directed towards obtaining a linear narrative synthesized from a set of objects, such as objects corresponding to a plan, and using cinematographic and other effects to convey additional information with that linear narrative when presented to a user. A user interacts with data from which the linear narrative is synthesized, such as to add transition effects between objects, change the lighting, focus, size (zoom), pan and so forth to emphasize or de-emphasize an object, and/or to highlight a relationship between objects. A user instruction may correspond to a theme (e.g., style or mood), with the effects, possibly including audio, selected based upon that theme. | 06-14-2012 |
20120151350 | Synthesis of a Linear Narrative from Search Content - The subject disclosure is directed towards automatically synthesizing content found via one or more searches into a linear narrative such as a slideshow and/or other audiovisual presentation, for playback to a user. A model in conjunction with user input parameters may assist in obtaining the search content, comprising content objects. The model applies rules, constraints and/or equations to generate a plan comprising plan objects, and a content synthesizer processes the plan objects into the linear narrative. The user may interact to change the input parameters and/or the set of plan objects, resulting in a modified narrative being re-synthesized for playback. | 06-14-2012 |
20120159326 | RICH INTERACTIVE SAGA CREATION - One or more techniques and/or systems are disclosed for creating a saga from signal-rich digital memories. User-related content, such as media elements and/or other signals, is captured and used to generate a digital memory graph, comprising the captured user-related content and associated metadata. An interactive saga of digital media elements is created using the digital memory graph by combining at least a portion of a plurality of digital media elements, from the captured user-related content, based on one or more user interactions. | 06-21-2012 |
20120159341 | INTERACTIONS WITH CONTEXTUAL AND TASK-BASED COMPUTING ENVIRONMENTS - Concepts and technologies are described herein for interacting with contextual and task-focused computing environments. Tasks associated with applications are described by task data. Tasks and/or batches of tasks relevant to activities occurring at a client are identified, and a UI for presenting the tasks is generated. The UIs can include tasks and workflows corresponding to batches of tasks. Workflows can be executed, interrupted, and resumed on demand. Interrupted workflows are stored with data indicating progress, contextual information, UI information, and other information. The workflow is stored and/or shared. When execution of the workflow is resumed, the same or a different UI can be provided, based upon the device used to resume execution of the workflow. Thus, multiple devices and users can access workflows in parallel to provide collaborative task execution. | 06-21-2012 |
20120166411 | DISCOVERY OF REMOTELY EXECUTED APPLICATIONS - A search engine discovers and indexes applications in a search index and receives queries from devices. The search engine is configured to obtain contextual data describing context associated with the devices and/or social networking data associated with one or more users of the devices. Based upon the contextual data and/or the social networking data, the search engine modifies the query and executes the query to identify applications. The search engine generates search results corresponding to the identified applications. The search engine also is configured to generate advertising relevant to the modified query, and to rank the search results in accordance with the query, the contextual data, and/or the social networking data. The ranked search results and the advertising are presented to the client as search results and/or in a web store format. Activity of the client and the search engine can be tracked and reported to authorized entities. | 06-28-2012 |
20120166522 | SUPPORTING INTELLIGENT USER INTERFACE INTERACTIONS - Concepts and technologies are described herein for supporting intelligent user interface interactions. Commands accepted by applications can be published or determined. Before or during access of the application, the commands can be presented at clients to indicate commands available for interfacing with the application. The commands can be presented with information indicating how the user interface and/or input device of the client may be used to execute the available commands. Input received from the client can be compared to the available commands to determine if the input matches an available command. Contextual data relating to the client, preferences, and/or other data also can be retrieved and analyzed to determine the intent of the client. The intent can be used to identify an intended command and to modify the input to match the intended command. The modified input can be transmitted to the application. | 06-28-2012 |
20120293439 | MONITORING POINTER TRAJECTORY AND MODIFYING DISPLAY INTERFACE - Apparatus and methods for improving touch-screen interface usability and accuracy by determining the trajectory of a pointer as it approaches the touch-screen and modifying the touch-screen display accordingly. The system may predict an object on the display the user is likely to select next. The system may designate this object as a Designated Target Object, or DTO. The system may modify the appearance of the DTO by, for example, changing the size of the DTO, or by changing its shape, style, coloring, perspective, positioning, etc. | 11-22-2012 |
20140109023 | ASSIGNING GESTURE DICTIONARIES - Techniques for assigning a gesture dictionary in a gesture-based system to a user comprise capturing data representative of a user in a physical space. In a gesture-based system, gestures may control aspects of a computing environment or application, where the gestures may be derived from a user's position or movement in a physical space. In an example embodiment, the system may monitor a user's gestures and select a particular gesture dictionary in response to the manner in which the user performs the gestures. The gesture dictionary may be assigned in real time with respect to the capture of the data representative of a user's gesture. The system may generate calibration tests for assigning a gesture dictionary. The system may track the user during a set of short gesture calibration tests and assign the gesture dictionary based on a compilation of the data captured that represents the user's gestures. | 04-17-2014 |
20160034249 | SPEECHLESS INTERACTION WITH A SPEECH RECOGNITION DEVICE - Embodiments for interacting with speech input systems are provided. One example provides an electronic device including an earpiece, a speech input system, and a speechless input system. The electronic device further includes instructions executable to present requests to a user via audio outputs, and receive user inputs in response to the requests via a first input mode in which user inputs are made via the speech input system, and also receive user inputs in response to the requests via a second input mode in which responses to the requests are made via the speechless input system. | 02-04-2016 |
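Entries 20110083089 and 20120293439 above describe predicting which on-screen object a pointer is approaching from its trajectory, then designating it as the target (DTO) for display modification. A rough sketch of one way such trajectory-based designation could work is shown below; the function name, alignment metric, and sample format are assumptions for illustration, not the patented method itself.

```python
import math

def predict_target(samples, targets):
    """Given recent pointer samples [(x, y), ...] and candidate target
    centers, extrapolate the pointer's direction of travel and return
    the target center best aligned with that direction (an illustrative
    heuristic for trajectory-based target designation)."""
    (x0, y0), (x1, y1) = samples[-2], samples[-1]
    dx, dy = x1 - x0, y1 - y0
    speed = math.hypot(dx, dy)
    if speed == 0:
        return None  # pointer is stationary; nothing to predict
    best, best_cos = None, -1.0
    for t in targets:
        tx, ty = t[0] - x1, t[1] - y1
        dist = math.hypot(tx, ty)
        if dist == 0:
            return t  # pointer is already on a target
        # cosine of the angle between pointer motion and target direction
        cos = (dx * tx + dy * ty) / (speed * dist)
        if cos > best_cos:
            best, best_cos = t, cos
    return best

# pointer moving right: predicts (100, 0) over (0, 100)
print(predict_target([(0, 0), (10, 0)], [(100, 0), (0, 100)]))  # → (100, 0)
```

A production system would presumably smooth over more than two samples and weigh distance as well as direction; once a target is designated, its appearance (size, shape, coloring) can be modified as the abstracts describe.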