Patent application number | Description | Published |
20090240371 | Remote presence system mounted to operating room hardware - A robot system that includes a remote station and a robot face. The robot face includes a camera that is coupled to a monitor of the remote station and a monitor that is coupled to a camera of the remote station. The robot face and remote station also have speakers and microphones that are coupled together. The robot face may be coupled to a boom. The boom can extend from the ceiling of a medical facility. Alternatively, the robot face may be attached to a medical table with an attachment mechanism. The robot face and remote station allow medical personnel to provide medical consultation through the system. | 09-24-2009 |
20100100240 | TELEPRESENCE ROBOT WITH A CAMERA BOOM - A remote controlled robot with a head that supports a monitor and is coupled to a mobile platform. The mobile robot also includes an auxiliary camera coupled to the mobile platform by a boom. The mobile robot is controlled by a remote control station. By way of example, the robot can be remotely moved about an operating room. The auxiliary camera extends from the boom so that it provides a relatively close view of a patient or other item in the room. An assistant in the operating room may move the boom and the camera. The boom may be connected to a robot head that can be remotely moved by the remote control station. | 04-22-2010 |
20100268383 | TELE-PRESENCE ROBOT SYSTEM WITH SOFTWARE MODULARITY, PROJECTOR AND LASER POINTER - A remote control station that accesses one of at least two different robots that each have at least one unique robot feature. The remote control station receives information that identifies the robot feature of the accessed robot. The remote station displays a display user interface that includes at least one field that corresponds to the robot feature of the accessed robot. The robot may have a laser pointer and/or a projector. | 10-21-2010 |
20110011747 | METHOD FOR MAKING A BASE PLATE FOR SUSPENSION ASSEMBLY IN HARD DISK DRIVE - A swage mount that includes a flange, having a first side and a second side, and a cylindrically shaped hub. The hub is primarily comprised of a metal (such as stainless steel), and extends from the second side of the flange, and has an inner surface and an outer surface. The surface of the swage mount is plated with one or more layers of metal, or a combination of metals, which provide a) increased retention torque, and b) increased part cleanliness. This invention may be used in conjunction with surface hardened swage mounts that contain surface protrusions. In this case the metal plating prevents separation of the protrusions from the swage mount, thereby preventing contamination. | 01-20-2011 |
20110050841 | PORTABLE REMOTE PRESENCE ROBOT - A tele-presence system that includes a portable robot face coupled to a remote station. The robot face includes a robot monitor, a robot camera, a robot speaker and a robot microphone. The remote station includes a station monitor, a station camera, a station speaker and a station microphone. The portable robot face can be attached to a platform mounted to the ceiling of an ambulance. The portable robot face can be used by a physician at the remote station to provide remote medical consultation. When the patient is moved from the ambulance, the portable robot face can be detached from the platform and moved with the patient. | 03-03-2011 |
20110187875 | ROBOT FACE USED IN A STERILE ENVIRONMENT - A robot system that includes a robot face with a monitor, a camera, a speaker and a microphone. The system may include a removable handle attached to the robot face. The robot face may be controlled through a remote controller. The handle can be removed and replaced with another handle. The remote controller can be covered with a sterile drape or sterilized after each use of the system. The handle and remote controller allow the robot to be utilized in a clean environment such as an operating room without requiring the robot face to be sterilized after a medical procedure. The robot face can be attached to a boom with active joints. The robot face may include a user interface that allows a user to individually move the active joints of the boom. | 08-04-2011 |
20110190930 | ROBOT USER INTERFACE FOR TELEPRESENCE ROBOT SYSTEM - A robot system that includes a remote control station and a robot that has a camera, a monitor and a microphone. The robot includes a user interface that allows a user to link the remote control station to access the robot. By way of example, the user interface may include a list of remote control stations that can be selected by a user at the robot site to link the robot to the selected control station. The user interface can display a connectivity prompt that allows a user at the robot site to grant access to the robot. The connectivity prompt is generated in response to a request for access by a remote control station. The robot may include a laser pointer and a button that allows a user at the robot site to turn the laser pointer on and off. | 08-04-2011 |
20110218674 | REMOTE PRESENCE SYSTEM INCLUDING A CART THAT SUPPORTS A ROBOT FACE AND AN OVERHEAD CAMERA - A tele-presence system that includes a cart. The cart includes a robot face that has a robot monitor, a robot camera, a robot speaker, a robot microphone, and an overhead camera. The system also includes a remote station that is coupled to the robot face and the overhead camera. The remote station includes a station monitor, a station camera, a station speaker and a station microphone. The remote station can display video images captured by the robot camera and/or overhead camera. By way of example, the cart can be used in an operating room, wherein the overhead camera can be placed in a sterile field and the robot face can be used in a non-sterile field. The user at the remote station can conduct a teleconference through the robot face and also obtain a view of a medical procedure through the overhead camera. | 09-08-2011 |
20120197439 | INTERFACING WITH A MOBILE TELEPRESENCE ROBOT - A telepresence robot may include a drive system, a control system, an imaging system, and a mapping module. The mapping module may access a plan view map of an area and tags associated with the area. In various embodiments, each tag may include tag coordinates and tag information, which may include a tag annotation. A tag identification system may identify tags within a predetermined range of the current position and the control system may execute an action based on an identified tag whose tag information comprises a telepresence robot action modifier. The telepresence robot may rotate an upper portion independently of a lower portion. A remote terminal may allow an operator to control the telepresence robot using any combination of control methods, including by selecting a destination in a live video feed, by selecting a destination on a plan view map, or by using a joystick or other peripheral device. | 08-02-2012 |
20120197464 | INTERFACING WITH A MOBILE TELEPRESENCE ROBOT - A telepresence robot may include a drive system, a control system, an imaging system, and a mapping module. The mapping module may access a plan view map of an area and tags associated with the area. In various embodiments, each tag may include tag coordinates and tag information, which may include a tag annotation. A tag identification system may identify tags within a predetermined range of the current position and the control system may execute an action based on an identified tag whose tag information comprises a telepresence robot action modifier. The telepresence robot may rotate an upper portion independently of a lower portion. A remote terminal may allow an operator to control the telepresence robot using any combination of control methods, including by selecting a destination in a live video feed, by selecting a destination on a plan view map, or by using a joystick or other peripheral device. | 08-02-2012 |
20120281056 | PORTABLE REMOTE PRESENCE ROBOT - A tele-presence system that includes a portable robot face coupled to a remote station. The robot face includes a robot monitor, a robot camera, a robot speaker and a robot microphone. The remote station includes a station monitor, a station camera, a station speaker and a station microphone. The portable robot face can be attached to a platform mounted to the ceiling of an ambulance. The portable robot face can be used by a physician at the remote station to provide remote medical consultation. When the patient is moved from the ambulance, the portable robot face can be detached from the platform and moved with the patient. | 11-08-2012 |
20130325244 | TIME-DEPENDENT NAVIGATION OF TELEPRESENCE ROBOTS - A telepresence robot may include a drive system, a control system, an imaging system, and a mapping module. The mapping module may access a plan view map of an area and tags associated with the area. In various embodiments, each tag may include tag coordinates and tag information, which may include a tag annotation. A tag identification system may identify tags within a predetermined range of the current position and the control system may execute an action based on an identified tag whose tag information comprises a telepresence robot action modifier. The telepresence robot may rotate an upper portion independently of a lower portion. A remote terminal may allow an operator to control the telepresence robot using any combination of control methods, including by selecting a destination in a live video feed, by selecting a destination on a plan view map, or by using a joystick or other peripheral device. | 12-05-2013 |
20140135990 | REMOTE PRESENCE SYSTEM INCLUDING A CART THAT SUPPORTS A ROBOT FACE AND AN OVERHEAD CAMERA - A tele-presence system that includes a cart. The cart includes a robot face that has a robot monitor, a robot camera, a robot speaker, a robot microphone, and an overhead camera. The system also includes a remote station that is coupled to the robot face and the overhead camera. The remote station includes a station monitor, a station camera, a station speaker and a station microphone. The remote station can display video images captured by the robot camera and/or overhead camera. By way of example, the cart can be used in an operating room, wherein the overhead camera can be placed in a sterile field and the robot face can be used in a non-sterile field. The user at the remote station can conduct a teleconference through the robot face and also obtain a view of a medical procedure through the overhead camera. | 05-15-2014 |
20140207286 | INTERFACING WITH A MOBILE TELEPRESENCE ROBOT - A telepresence robot may include a drive system, a control system, an imaging system, and a mapping module. The mapping module may access a plan view map of an area and tags associated with the area. In various embodiments, each tag may include tag coordinates and tag information, which may include a tag annotation. A tag identification system may identify tags within a predetermined range of the current position and the control system may execute an action based on an identified tag whose tag information comprises a telepresence robot action modifier. The telepresence robot may rotate an upper portion independently of a lower portion. A remote terminal may allow an operator to control the telepresence robot using any combination of control methods, including by selecting a destination in a live video feed, by selecting a destination on a plan view map, or by using a joystick or other peripheral device. | 07-24-2014 |
20140267549 | ENHANCED VIDEO INTERACTION FOR A USER INTERFACE OF A TELEPRESENCE NETWORK - A telepresence device may relay video, audio, and/or measurement data to a user operating a control device. A user interface may permit the user to quickly view and/or understand temporally and/or spatially disparate information. The telepresence device may pre-gather looped video of spatially disparate areas in an environment. A temporal control mechanism may start video playback at a desired point in a current or historical video segment. Notations may be associated with time spans in a video and recalled by capturing an image similar to a frame in the time span of the video. An area of interest may be selected and video containing the area of interest may be automatically found. Situational data may be recorded and used to recall video segments of interest. The telepresence device may synchronize video playback and movement. A series of videos may be recorded at predetermined time intervals to capture visually trending information. | 09-18-2014 |
20140267552 | PORTABLE REMOTE PRESENCE ROBOT - A tele-presence system that includes a portable robot face coupled to a remote station. The robot face includes a robot monitor, a robot camera, a robot speaker and a robot microphone. The remote station includes a station monitor, a station camera, a station speaker and a station microphone. The portable robot face can be attached to a platform mounted to the ceiling of an ambulance. The portable robot face can be used by a physician at the remote station to provide remote medical consultation. When the patient is moved from the ambulance, the portable robot face can be detached from the platform and moved with the patient. | 09-18-2014 |
20150038983 | TELE-PRESENCE ROBOT SYSTEM WITH SOFTWARE MODULARITY, PROJECTOR AND LASER POINTER - A remote control station that accesses one of at least two different robots that each have at least one unique robot feature. The remote control station receives information that identifies the robot feature of the accessed robot. The remote station displays a display user interface that includes at least one field that corresponds to the robot feature of the accessed robot. The robot may have a laser pointer and/or a projector. | 02-05-2015 |
20150127156 | TELEPRESENCE ROBOT WITH A CAMERA BOOM - A remote controlled robot with a head that supports a monitor and is coupled to a mobile platform. The mobile robot also includes an auxiliary camera coupled to the mobile platform by a boom. The mobile robot is controlled by a remote control station. By way of example, the robot can be remotely moved about an operating room. The auxiliary camera extends from the boom so that it provides a relatively close view of a patient or other item in the room. An assistant in the operating room may move the boom and the camera. The boom may be connected to a robot head that can be remotely moved by the remote control station. | 05-07-2015 |
20150286789 | REMOTE PRESENCE SYSTEM INCLUDING A CART THAT SUPPORTS A ROBOT FACE AND AN OVERHEAD CAMERA - A tele-presence system that includes a cart. The cart includes a robot face that has a robot monitor, a robot camera, a robot speaker, a robot microphone, and an overhead camera. The system also includes a remote station that is coupled to the robot face and the overhead camera. The remote station includes a station monitor, a station camera, a station speaker and a station microphone. The remote station can display video images captured by the robot camera and/or overhead camera. By way of example, the cart can be used in an operating room, wherein the overhead camera can be placed in a sterile field and the robot face can be used in a non-sterile field. The user at the remote station can conduct a teleconference through the robot face and also obtain a view of a medical procedure through the overhead camera. | 10-08-2015 |
20150296177 | ENHANCED VIDEO INTERACTION FOR A USER INTERFACE OF A TELEPRESENCE NETWORK - A telepresence device may relay video, audio, and/or measurement data to a user operating a control device. A user interface may permit the user to quickly view and/or understand temporally and/or spatially disparate information. The telepresence device may pre-gather looped video of spatially disparate areas in an environment. A temporal control mechanism may start video playback at a desired point in a current or historical video segment. Notations may be associated with time spans in a video and recalled by capturing an image similar to a frame in the time span of the video. An area of interest may be selected and video containing the area of interest may be automatically found. Situational data may be recorded and used to recall video segments of interest. The telepresence device may synchronize video playback and movement. A series of videos may be recorded at predetermined time intervals to capture visually trending information. | 10-15-2015 |
20150298317 | Interfacing With A Mobile Telepresence Robot - A telepresence robot may include a drive system, a control system, an imaging system, and a mapping module. The mapping module may access a plan view map of an area and tags associated with the area. In various embodiments, each tag may include tag coordinates and tag information, which may include a tag annotation. A tag identification system may identify tags within a predetermined range of the current position and the control system may execute an action based on an identified tag whose tag information comprises a telepresence robot action modifier. The telepresence robot may rotate an upper portion independently of a lower portion. A remote terminal may allow an operator to control the telepresence robot using any combination of control methods, including by selecting a destination in a live video feed, by selecting a destination on a plan view map, or by using a joystick or other peripheral device. | 10-22-2015 |
20150314449 | INTERFACING WITH A MOBILE TELEPRESENCE ROBOT - A telepresence robot may include a drive system, a control system, an imaging system, and a mapping module. The mapping module may access a plan view map of an area and tags associated with the area. In various embodiments, each tag may include tag coordinates and tag information, which may include a tag annotation. A tag identification system may identify tags within a predetermined range of the current position and the control system may execute an action based on an identified tag whose tag information comprises a telepresence robot action modifier. The telepresence robot may rotate an upper portion independently of a lower portion. A remote terminal may allow an operator to control the telepresence robot using any combination of control methods, including by selecting a destination in a live video feed, by selecting a destination on a plan view map, or by using a joystick or other peripheral device. | 11-05-2015 |
20160046021 | INTERFACING WITH A MOBILE TELEPRESENCE ROBOT - A telepresence robot may include a drive system, a control system, an imaging system, and a mapping module. The mapping module may access a plan view map of an area and tags associated with the area. In various embodiments, each tag may include tag coordinates and tag information, which may include a tag annotation. A tag identification system may identify tags within a predetermined range of the current position and the control system may execute an action based on an identified tag whose tag information comprises a telepresence robot action modifier. The telepresence robot may rotate an upper portion independently of a lower portion. A remote terminal may allow an operator to control the telepresence robot using any combination of control methods, including by selecting a destination in a live video feed, by selecting a destination on a plan view map, or by using a joystick or other peripheral device. | 02-18-2016 |
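Several of the "Interfacing with a Mobile Telepresence Robot" abstracts above describe the same mechanism: a tag identification system finds tags within a predetermined range of the robot's current position, and the control system executes an action for any tag whose tag information contains an action modifier. A minimal sketch of that loop, assuming simple 2-D coordinates (all class, field, and function names here are hypothetical, not taken from the filings):

```python
import math
from dataclasses import dataclass, field

@dataclass
class Tag:
    """A map annotation: coordinates plus free-form tag information."""
    x: float
    y: float
    info: dict = field(default_factory=dict)

def tags_in_range(tags, pos, max_range):
    """Identify tags within a predetermined range of the current position."""
    px, py = pos
    return [t for t in tags if math.hypot(t.x - px, t.y - py) <= max_range]

def execute_tag_actions(tags, pos, max_range, actions):
    """Run the handler for each nearby tag carrying an action modifier."""
    executed = []
    for tag in tags_in_range(tags, pos, max_range):
        modifier = tag.info.get("action_modifier")
        if modifier in actions:
            actions[modifier](tag)
            executed.append(modifier)
    return executed

# Example: a "slow_zone" tag near the robot triggers a speed reduction,
# while a distant tag (and a tag with no modifier) is ignored.
tags = [
    Tag(1.0, 1.0, {"annotation": "OR doorway", "action_modifier": "slow_zone"}),
    Tag(9.0, 9.0, {"annotation": "charging dock"}),  # no modifier: ignored
]
speed = {"value": 1.0}
actions = {"slow_zone": lambda tag: speed.update(value=0.3)}
ran = execute_tag_actions(tags, pos=(0.0, 0.0), max_range=3.0, actions=actions)
# ran == ["slow_zone"], and the commanded speed has been reduced.
```

The dictionary of handlers keyed by modifier string is one plausible way to let the control system dispatch on whatever action modifiers a map's tags carry.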
20100312747 | Computer Systems and Methods for Visualizing Data - A method for forming a visual plot using a hierarchical structure of a dataset. The dataset comprises a measure and a dimension. The dimension consists of a plurality of levels. The plurality of levels form a dimension hierarchy. The visual plot is constructed based on a specification. A first level from the plurality of levels is represented by a first component of the visual plot. A second level from the plurality of levels is represented by a second component of the visual plot. The dataset is queried to retrieve data in accordance with the specification. The data includes all or a portion of the dimension and all or a portion of the measure. The visual plot is populated with the retrieved data in accordance with the specification. | 12-09-2010 |
20110131250 | Computer Systems and Methods for the Query and Visualization of Multidimensional Databases - In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name. | 06-02-2011 |
20120117453 | Computer Systems and Methods for Automatically Viewing Multidimensional Databases - A method for automatically forming the clearest and most useful visual plot for a given dataset of tuples. A best view type is selected for a view that includes a subsequently added new field. The visual plot is populated with the data in the view and then automatically rendered for the user. A dataset that is retrieved from a storage is analyzed to identify all the data types found in the dataset, and to determine the best view type to assign to the dataset's views. The visual plot is then populated with the data according to this best view type, and is automatically rendered for the user. | 05-10-2012 |
20120179713 | Computer Systems and Methods for the Query and Visualization of Multidimensional Databases - In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name. | 07-12-2012 |
20130246484 | Systems and Methods for Displaying Data in Split Dimension Levels - Systems and methods for displaying data in split dimension levels are disclosed. In some implementations, a method includes: at a computer, obtaining a dimensional hierarchy associated with a dataset, wherein the dimensional hierarchy includes at least one dimension and a sub-dimension of the at least one dimension; and populating information representing data included in the dataset into a visual table having a first axis and a second axis, wherein the first axis corresponds to the at least one dimension and the second axis corresponds to the sub-dimension of the at least one dimension. | 09-19-2013 |
20150081692 | COMPUTER SYSTEMS AND METHODS FOR AUTOMATICALLY VIEWING MULTIDIMENSIONAL DATABASES - The implementations described herein include methods and systems for displaying graphical representations of datasets. A method is performed at a computer having one or more processors and memory storing programs for execution by the processors. The method receives a request from a user to display a graphical representation of a dataset. In response to the request, the method identifies a plurality of alternative graphical representations of the dataset. Each alternative graphical representation has a respective associated view type. The method ranks the plurality of alternative graphical representations in accordance with a rating system. The rating system is based on a set of criteria, which includes at least one user-specific criterion. The method selects for display a resulting graphical representation from among the plurality of alternative graphical representations based on the ranking. | 03-19-2015 |
20150156402 | IMAGING ARRANGEMENTS AND METHODS THEREFOR - Image data is processed to facilitate focusing and/or optical correction. According to an example embodiment of the present invention, an imaging arrangement collects light data corresponding to light passing through a particular focal plane. The light data is collected using an approach that facilitates the determination of the direction from which various portions of the light incident upon a portion of the focal plane emanate. Using this directional information in connection with the value of the light as detected by photosensors, an image represented by the light is selectively focused and/or corrected. | 06-04-2015 |
20150242446 | COMPUTER SYSTEMS AND METHODS FOR THE QUERY AND VISUALIZATION OF MULTIDIMENSIONAL DATABASES - A computer displays a graphical user interface on its display. The graphical user interface includes a schema information region and a data visualization region. The schema information region includes a plurality of fields of a multi-dimensional database that includes at least one data hierarchy. The data visualization region includes a columns shelf and a rows shelf. The computer detects user actions to associate one or more first fields with the columns shelf and to associate one or more second fields with the rows shelf. The computer generates a visual table in the data visualization region in accordance with the user actions. The visual table includes one or more panes. Each pane has an x-axis defined based on data for the one or more first fields, and each pane has a y-axis defined based on data for the one or more second fields. | 08-27-2015 |
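The data visualization abstracts above share one pattern: levels of a dimension hierarchy are assigned to components of a visual table (shelves, axes), the dataset is queried in accordance with that specification, and the resulting panes are populated with the aggregated measure. A rough illustration of the idea, assuming an in-memory dataset of rows (the specification format and the helper name are invented for this sketch):

```python
from collections import defaultdict

def build_visual_table(rows, spec):
    """Populate a visual table: one axis per assigned field, with each
    cell holding the aggregated measure for a (column, row) value pair."""
    col_field, row_field = spec["columns"], spec["rows"]
    measure = spec["measure"]
    cells = defaultdict(float)
    for row in rows:  # "query" the dataset per the specification
        cells[(row[col_field], row[row_field])] += row[measure]
    return dict(cells)

# Dataset with a Region > State dimension hierarchy and a Sales measure.
rows = [
    {"Region": "West", "State": "CA", "Sales": 100.0},
    {"Region": "West", "State": "WA", "Sales": 50.0},
    {"Region": "East", "State": "NY", "Sales": 75.0},
    {"Region": "West", "State": "CA", "Sales": 25.0},
]
spec = {"columns": "Region", "rows": "State", "measure": "Sales"}
table = build_visual_table(rows, spec)
# table[("West", "CA")] == 125.0
```

In a real system the aggregation would be pushed down to the multidimensional database as a query rather than computed over rows in memory; the sketch only shows the specification-to-panes mapping.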
20090128669 | CORRECTION OF OPTICAL ABERRATIONS - Digital images are computed using an approach for correcting lens aberration. According to an example embodiment of the present invention, a digital imaging arrangement implements microlenses to direct light to photosensors that detect the light and generate data corresponding to the detected light. The generated data is used to compute an output image, where each output image pixel value corresponds to a selective weighting and summation of a subset of the detected photosensor values. The weighting is a function of characteristics of the imaging arrangement. In some applications, the weighting reduces the contribution of data from photosensors that contribute higher amounts of optical aberration to the corresponding output image pixel. | 05-21-2009 |
20100026852 | VARIABLE IMAGING ARRANGEMENTS AND METHODS THEREFOR - Various approaches to imaging involve selecting directional and spatial resolution. According to an example embodiment, images are computed using an imaging arrangement to facilitate selective directional and spatial aspects of the detection and processing of light data. Light passed through a main lens is directed to photosensors via a plurality of microlenses. The separation between the microlenses and photosensors is set to facilitate directional and/or spatial resolution in recorded light data, and to facilitate refocusing power and/or image resolution in images computed from the recorded light data. In one implementation, the separation is varied between zero and one focal length of the microlenses to respectively facilitate spatial and directional resolution (with increasing directional resolution, hence refocusing power, as the separation approaches one focal length). | 02-04-2010 |
20110302110 | Computer Systems and Methods for Automatic Generation of Models for a Dataset - A method of automatically generating models from a dataset includes multiple steps. First, a description of a view of a dataset is provided. The description includes multiple fields associated with the dataset. Next, a set of properties is determined for each of the multiple fields. Finally, the description is automatically translated into one or more models based on the respective properties of the multiple fields and a set of predefined heuristics. | 12-08-2011 |
20120019711 | IMAGING ARRANGEMENTS AND METHODS THEREFOR - Image data is processed to facilitate focusing and/or optical correction. According to an example embodiment of the present invention, an imaging arrangement collects light data corresponding to light passing through a particular focal plane. The light data is collected using an approach that facilitates the determination of the direction from which various portions of the light incident upon a portion of the focal plane emanate. Using this directional information in connection with the value of the light as detected by photosensors, an image represented by the light is selectively focused and/or corrected. | 01-26-2012 |
20120019712 | IMAGING ARRANGEMENTS AND METHODS THEREFOR - Image data is processed to facilitate focusing and/or optical correction. According to an example embodiment of the present invention, an imaging arrangement collects light data corresponding to light passing through a particular focal plane. The light data is collected using an approach that facilitates the determination of the direction from which various portions of the light incident upon a portion of the focal plane emanate. Using this directional information in connection with the value of the light as detected by photosensors, an image represented by the light is selectively focused and/or corrected. | 01-26-2012 |
20120229682 | Correction of Optical Aberrations - Digital images are computed using an approach for correcting lens aberration. According to an example embodiment of the present invention, a digital imaging arrangement implements microlenses to direct light to photosensors that detect the light and generate data corresponding to the detected light. The generated data is used to compute an output image, where each output image pixel value corresponds to a selective weighting and summation of a subset of the detected photosensor values. The weighting is a function of characteristics of the imaging arrangement. In some applications, the weighting reduces the contribution of data from photosensors that contribute higher amounts of optical aberration to the corresponding output image pixel. | 09-13-2012 |
20120300097 | VARIABLE IMAGING ARRANGEMENTS AND METHODS THEREFOR - Various approaches to imaging involve selecting directional and spatial resolution. According to an example embodiment, images are computed using an imaging arrangement to facilitate selective directional and spatial aspects of the detection and processing of light data. Light passed through a main lens is directed to photosensors via a plurality of microlenses. The separation between the microlenses and photosensors is set to facilitate directional and/or spatial resolution in recorded light data, and to facilitate refocusing power and/or image resolution in images computed from the recorded light data. In one implementation, the separation is varied between zero and one focal length of the microlenses to respectively facilitate spatial and directional resolution (with increasing directional resolution, hence refocusing power, as the separation approaches one focal length). | 11-29-2012 |
20130033626 | VARIABLE IMAGING ARRANGEMENTS AND METHODS THEREFOR - Various approaches to imaging involve selecting directional and spatial resolution. According to an example embodiment, images are computed using an imaging arrangement to facilitate selective directional and spatial aspects of the detection and processing of light data. Light passed through a main lens is directed to photosensors via a plurality of microlenses. The separation between the microlenses and photosensors is set to facilitate directional and/or spatial resolution in the recorded light data, and refocusing power and/or image resolution in images computed from that data. In one implementation, the separation is varied between zero and one focal length of the microlenses to respectively facilitate spatial and directional resolution (with increasing directional resolution, hence refocusing power, as the separation approaches one focal length). | 02-07-2013 |
20130107085 | Correction of Optical Aberrations | 05-02-2013 |
20130169855 | Imaging Arrangements and Methods Therefor - Image data is processed to facilitate focusing and/or optical correction. According to an example embodiment of the present invention, an imaging arrangement collects light data corresponding to light passing through a particular focal plane. The light data is collected using an approach that facilitates the determination of the direction from which various portions of the light incident upon a portion of the focal plane emanate. Using this directional information in connection with the value of the light as detected by photosensors, an image represented by the light is selectively focused and/or corrected. | 07-04-2013 |
20140028892 | IMAGING ARRANGEMENTS AND METHODS THEREFOR - Image data is processed to facilitate focusing and/or optical correction. According to an example embodiment of the present invention, an imaging arrangement collects light data corresponding to light passing through a particular focal plane. The light data is collected using an approach that facilitates the determination of the direction from which various portions of the light incident upon a portion of the focal plane emanate. Using this directional information in connection with the value of the light as detected by photosensors, an image represented by the light is selectively focused and/or corrected. | 01-30-2014 |
20140049663 | IMAGING ARRANGEMENTS AND METHODS THEREFOR - Image data is processed to facilitate focusing and/or optical correction. According to an example embodiment of the present invention, an imaging arrangement collects light data corresponding to light passing through a particular focal plane. The light data is collected using an approach that facilitates the determination of the direction from which various portions of the light incident upon a portion of the focal plane emanate. Using this directional information in connection with the value of the light as detected by photosensors, an image represented by the light is selectively focused and/or corrected. | 02-20-2014 |
20140204184 | VARIABLE IMAGING ARRANGEMENTS AND METHODS THEREFOR - Various approaches to imaging involve selecting directional and spatial resolution. According to an example embodiment, images are computed using an imaging arrangement to facilitate selective directional and spatial aspects of the detection and processing of light data. Light passed through a main lens is directed to photosensors via a plurality of microlenses. The separation between the microlenses and photosensors is set to facilitate directional and/or spatial resolution in the recorded light data, and refocusing power and/or image resolution in images computed from that data. In one implementation, the separation is varied between zero and one focal length of the microlenses to respectively facilitate spatial and directional resolution (with increasing directional resolution, hence refocusing power, as the separation approaches one focal length). | 07-24-2014 |
20150029388 | IMAGING ARRANGEMENTS AND METHODS THEREFOR - Image data is processed to facilitate focusing and/or optical correction. According to an example embodiment of the present invention, an imaging arrangement collects light data corresponding to light passing through a particular focal plane. The light data is collected using an approach that facilitates the determination of the direction from which various portions of the light incident upon a portion of the focal plane emanate. Using this directional information in connection with the value of the light as detected by photosensors, an image represented by the light is selectively focused and/or corrected. | 01-29-2015 |
20150032429 | Systems and Methods for Generating Models of a Dataset for a Data Visualization - A method of generating a statistical model for a dataset operates at a computer system having one or more processors and memory. The memory stores one or more programs configured for execution by the one or more processors. The method receives a visual specification. The visual specification defines a graphical representation of a portion of the dataset. The visual specification includes a first field and a second field of the dataset. The method determines a set of data properties for each of the first and second fields. The method then generates a statistical model of a mathematical relationship between the first and second fields based on the data properties of the first and second fields and data values associated with the first and second fields in the dataset. The method displays the graphical representation and the statistical model superimposed on the graphical representation. | 01-29-2015 |
20150326848 | VARIABLE IMAGING ARRANGEMENTS AND METHODS THEREFOR - Various approaches to imaging involve selecting directional and spatial resolution. According to an example embodiment, images are computed using an imaging arrangement to facilitate selective directional and spatial aspects of the detection and processing of light data. Light passed through a main lens is directed to photosensors via a plurality of microlenses. The separation between the microlenses and photosensors is set to facilitate directional and/or spatial resolution in the recorded light data, and refocusing power and/or image resolution in images computed from that data. In one implementation, the separation is varied between zero and one focal length of the microlenses to respectively facilitate spatial and directional resolution (with increasing directional resolution, hence refocusing power, as the separation approaches one focal length). | 11-12-2015 |
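Several of the light-field abstracts above describe the same core computation: each output image pixel is a weighted sum of photosensor samples, selected according to the direction from which the light arrived, which is what makes after-the-fact refocusing possible. A minimal sketch of that idea is the classic shift-and-sum refocusing of a 4D light field. This is an illustrative reconstruction, not code from any of the listed patents; the `refocus` function, the `(U, V, S, T)` array layout, and the uniform (unweighted) angular averaging are all assumptions made for the example.

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetic refocusing by shift-and-sum (illustrative sketch).

    light_field: 4D array of shape (U, V, S, T), interpreted as a grid of
        sub-aperture images: (u, v) indexes angular direction through the
        main lens, (s, t) indexes spatial position on the focal plane.
    alpha: relative depth of the virtual focal plane (1.0 = no refocus).

    Each output pixel is a sum over the angular samples, with each
    sub-aperture image shifted in proportion to its offset from the
    aperture centre. Weights here are uniform; the aberration-correcting
    variants described above would instead down-weight rays that
    contribute more aberration.
    """
    U, V, S, T = light_field.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Angular offset from the centre of the aperture, scaled by
            # how far the virtual focal plane is from the captured one.
            du = (u - (U - 1) / 2) * (1 - 1 / alpha)
            dv = (v - (V - 1) / 2) * (1 - 1 / alpha)
            shifted = np.roll(light_field[u, v],
                              (int(round(du)), int(round(dv))),
                              axis=(0, 1))
            out += shifted
    return out / (U * V)
```

With `alpha = 1.0` no shift is applied and the result is simply the mean over all angular samples, i.e. the image as originally focused; other values of `alpha` slide the plane of best focus forward or backward.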