Patent application number | Description | Published |
20090324005 | Script Detection Service - Script detection service techniques are described. In an implementation, a determination is made as to which human writing system is associated with individual text characters in a string of one or more text characters based on values representing the individual text characters in the string. A particular human writing system is designated as associated with the string based on the values associated with the individual text characters in the string. | 12-31-2009 |
20090326918 | Language Detection Service - Language detection techniques are described. In an implementation, a method comprises determining which human writing system is associated with text characters in a string based on values representing the text characters. When the values are associated with more than one human language, the string is compared with a targeted dictionary to identify a corresponding human language associated with the string. | 12-31-2009 |
20090326920 | Linguistic Service Platform - Linguistic service platform techniques are described. In implementations, one or more computer-readable media comprise instructions that are executable by a computer to designate a linguistic service having a particular property responsive to an application program interface call specifying the property. Communication may be brokered between the linguistic service and the application so that communication occurs without the application directly communicating with the linguistic service. | 12-31-2009 |
20090327860 | Map Service - Map service techniques are described. In an implementation, one or more computer-readable media comprise instructions that are executable by a computer to recognize from text an action that is performable by a particular one of a plurality of webpages and parse a set of parameters from the text to be passed to the particular webpage to cause the webpage to perform the action. | 12-31-2009 |
20120029906 | Language Detection Service - Language detection techniques are described. In an implementation, a method comprises determining which human writing system is associated with text characters in a string based on values representing the text characters. When the values are associated with more than one human language, the string is compared with a targeted dictionary to identify a corresponding human language associated with the string. Linguistic services are designated to be available based on service properties of the linguistic services and based on the corresponding human language associated with the string. | 02-02-2012 |
20120059646 | Script Detection Service - Script detection service techniques are described. In an implementation, values representing individual text characters in a string of one or more text characters are identified to determine which human writing system is associated with the individual text characters. The values are compared to a table that associates subsets of values with individual human writing systems. The values are determined to be within a particular subset of values in the table that correspond to a particular human writing system. A particular human writing system is designated as associated with the string based on the values associated with the individual text characters in the string being within the particular subset of values that corresponds with the particular human writing system. | 03-08-2012 |
20120254712 | Map Service - Map service techniques are described. In an implementation, text is received from an application for processing by one or more linguistic services. Based on service properties of respective linguistic services that are relevant to the application, particular linguistic services are designated to be available for use by the application and one or more other linguistic services are obscured from the application. A communication is formed to communicate the text to a designated linguistic service. | 10-04-2012 |
20130297295 | Script Detection Service - Script detection service techniques are described. In an implementation, a corpus of text is analyzed to determine which strings in the corpus are to be included in a targeted dictionary that is usable for language detection services. The targeted dictionary is populated with strings that are individually associated with a human language. The strings include individual text characters associated with values that correspond to a particular subset of values in a table that associates subsets of values with individual human writing systems. | 11-07-2013 |
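The script- and language-detection entries above all follow the same two-stage scheme: map each character's code-point value into a table of value ranges to designate a writing system, then, when that script is shared by several languages, compare the string against per-language targeted dictionaries. A minimal sketch of that scheme follows; the code-point ranges and the tiny dictionaries are illustrative stand-ins, not the tables described in the applications.

```python
# Illustrative table associating subsets of code-point values with
# writing systems (not the actual tables from the applications).
SCRIPT_RANGES = [
    (0x0041, 0x024F, "Latin"),
    (0x0370, 0x03FF, "Greek"),
    (0x0400, 0x04FF, "Cyrillic"),
    (0x4E00, 0x9FFF, "Han"),
]

def detect_script(text):
    """Designate writing systems for a string from its characters' values."""
    scripts = set()
    for ch in text:
        value = ord(ch)
        for lo, hi, name in SCRIPT_RANGES:
            if lo <= value <= hi:
                scripts.add(name)
                break
    return scripts

# When a script (e.g. Latin) is shared by many languages, compare the
# string against per-language targeted dictionaries to pick one.
TARGETED_DICTIONARIES = {
    "en": {"the", "and", "service"},
    "de": {"und", "der", "dienst"},
}

def detect_language(text):
    """Pick the language whose targeted dictionary overlaps the string most."""
    words = set(text.lower().split())
    return max(TARGETED_DICTIONARIES,
               key=lambda lang: len(words & TARGETED_DICTIONARIES[lang]))

print(detect_script("Пример"))                   # {'Cyrillic'}
print(detect_language("the language service"))   # en
```

Real implementations would use the full Unicode script-property tables rather than a handful of hand-picked ranges.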
Patent application number | Description | Published |
20100031443 | PATIENT TRANSPORT APPARATUS WITH INTEGRATED SUBSYSTEMS FOR USE IN MAGNETIC RESONANCE IMAGING - A patient transport apparatus having a table configured to support a patient, a base attached to the table, a docking system attached to the base, the docking system configured to couple to a mating docking system of an MR imaging system, and a plurality of bays formed in the base, with each bay configured to receive a patient care module therein. The patient transport apparatus further includes a control system configured to be electrically coupled to each patient care module received within the plurality of bays and configured to centrally control each patient care module. | 02-11-2010 |
20110029325 | METHODS AND APPARATUS TO ENHANCE HEALTHCARE INFORMATION ANALYSES - Methods and apparatus to enhance healthcare information analyses are disclosed herein. An example method for use with a healthcare information system includes automatically detecting a scheduled analysis of healthcare information associated with a patient based on a detection of the patient; retrieving textual data corresponding to a medical history of the patient from an information source according to a first configuration setting, wherein the first configuration setting controls which of a plurality of aspects of the medical history is to be retrieved; generating a report using the textual data according to a second configuration setting, wherein the second configuration setting controls an organization of the generated report; converting the report to an audio file and storing the audio file in memory; and in response to detecting an initiation of the scheduled analysis, outputting at least a first segment of the audio file on a presentation device associated with the scheduled analysis in conjunction with a presentation of one or more images associated with the scheduled analysis. | 02-03-2011 |
20110150706 | METHOD AND APPARATUS FOR GENERATING HYPERPOLARIZED MATERIALS - Methods and apparatuses for generating hyperpolarized materials are disclosed. In one embodiment, a flexible fluid path is provided for use in a polarizer system. In a further embodiment, a polarizer system is provided with an electromechanical assembly for controlling the movement of a fluid path, when present, within a sample path of the polarizer system. In a further embodiment, a polarizer system is provided having a sample path entry point at a convenient height for use by a user standing on the ground. | 06-23-2011 |
20110310126 | METHOD AND SYSTEM FOR INTERACTING WITH DATASETS FOR DISPLAY - Methods and systems for interacting with datasets for display are provided. One method includes displaying information on a display having a surface viewable by a user and receiving a user input at a surface of a multi-touch sensitive device. The surface of the multi-touch sensitive device is a different surface than the surface of the display viewable by the user. The method further includes manipulating the displayed information in response to the received user input. | 12-22-2011 |
20120059664 | SYSTEM AND METHOD FOR MANAGEMENT OF PERSONAL HEALTH AND WELLNESS - A system and method for management of personal health and wellness is disclosed. The system receives an input for generating a user profile for a user, with the user profile having an initial health status goal associated therewith including target physiological parameter levels, nutritional uptake, and physical fitness activity. Physiological data, nutritional data, and physical fitness data are received by the system over a period of time on measured physiological parameters, dietary consumption, and physical activity of the user, with the system then analyzing and comparing the received data to the physiological parameters, nutritional uptake, and physical fitness activity associated with the initial health status goal. The system communicates to the user a personalized health and wellness status update that is based on the analysis of the received data and the comparison of such data to the physiological, nutritional, and physical activity metrics associated with the initial health status goal. | 03-08-2012 |
20120123219 | METHOD AND SYSTEM FOR CONTROLLING MEDICAL MONITORING EQUIPMENT - A device for monitoring physiological parameters of a medical patient includes a pneumatic system configured to be coupled to a patient to provide a regulated gas thereto, a computer coupled to the pneumatic system and configured to regulate gas to the patient via the pneumatic system, and a touchscreen monitor coupled to the computer. The touchscreen monitor includes a first graphical user interface (GUI) having a first display, and a second GUI having a second display different from the first display and configured having interaction fields to enable parameters to be input therewith. The device includes a first trigger configured to switch at least from the first GUI to the second GUI. | 05-17-2012 |
20120179038 | ULTRASOUND BASED FREEHAND INVASIVE DEVICE POSITIONING SYSTEM AND METHOD - In one embodiment, an interventional guidance method includes generating an ultrasound image of a subject anatomy of interest. The method also includes superimposing on the ultrasound image a visual indication of at least one of projection of a position of an interventional device, trajectory of the interventional device, and a location at which the interventional device will intercept an ultrasound imaging plane. The interventional guidance method also includes dynamically altering an aspect of the superimposed visual indication during an interventional procedure. The dynamic altering includes altering a dynamic indication of a trajectory of the interventional instrument transverse to the imaging plane or an interception location of the trajectory of the interventional instrument with the imaging plane. | 07-12-2012 |
20120197131 | PROBE-MOUNTED ULTRASOUND SYSTEM CONTROL INTERFACE - In one embodiment, an ultrasound probe is provided including a probe body with a sensing face arranged to be held in contact with a subject by a user. The probe also provides a tactile interface located on a side of the probe body sufficiently close to the sensing face so that a user can depress the tactile interface with a finger while the probe is grasped by the user via the thumb of the same hand and at least two other fingers of the same hand rest in contact with the subject. The probe includes at least one switch which is activated by depression of the tactile interface. | 08-02-2012 |
20130176230 | SYSTEMS AND METHODS FOR WIRELESSLY CONTROLLING MEDICAL DEVICES - Systems and methods for wirelessly controlling medical devices are provided. One system includes a portable user interface having a housing and a communication module within the housing configured to wirelessly communicate with at least one medical device. The portable user interface also includes a display displaying a graphical user interface to control the at least one medical device remotely, wherein the displayed graphical user interface corresponds to a control interface of the at least one medical device. | 07-11-2013 |
20130238314 | METHODS AND SYSTEMS FOR PROVIDING AUDITORY MESSAGES FOR MEDICAL DEVICES - Methods and systems for providing auditory messages for medical devices are provided. One method includes receiving semantic rating scale data corresponding to a plurality of sounds and medical message descriptions and performing semantic mapping using the received semantic rating scale data. The method also includes determining profiles for audible medical messages based on the semantic mapping and generating audible medical messages based on the determined profiles. | 09-12-2013 |
20140111335 | METHODS AND SYSTEMS FOR PROVIDING AUDITORY MESSAGES FOR MEDICAL DEVICES - Methods and systems for providing auditory messages for medical devices are provided. One system includes at least one medical device configured to generate a plurality of medical messages and a processor in the at least one medical device configured to generate an auditory signal corresponding to one of the plurality of medical messages. The auditory signal is configured based on a functional relationship linking psychological sound perceptions in a clinical environment to acoustic and musical sound variables. | 04-24-2014 |
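The last two entries describe rating candidate sounds and medical message types on shared semantic scales and then deriving message profiles from that mapping. One plausible reading of "semantic mapping" is nearest-neighbor matching in the rating space, sketched below; the scales, ratings, and sound names are all illustrative assumptions, not data from the applications.

```python
# Illustrative semantic ratings (urgency, pleasantness) on a 1-7 scale;
# these values and names are invented for the sketch.
SOUND_RATINGS = {
    "soft_chime": (2.0, 6.5),
    "triple_beep": (5.5, 3.0),
    "klaxon": (6.8, 1.2),
}

MESSAGE_RATINGS = {
    "infusion complete": (2.2, 6.0),
    "occlusion detected": (6.5, 1.5),
}

def profile_messages(sounds, messages):
    """Map each medical message to the sound whose semantic ratings are closest."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return {
        msg: min(sounds, key=lambda s: dist(sounds[s], rating))
        for msg, rating in messages.items()
    }

profiles = profile_messages(SOUND_RATINGS, MESSAGE_RATINGS)
print(profiles["occlusion detected"])   # klaxon
print(profiles["infusion complete"])    # soft_chime
```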
Patent application number | Description | Published |
20140164923 | Intelligent Adaptive Content Canvas - Various embodiments provide an intelligent adaptive content canvas that can enable users to access content, such as photos and videos, and consume the content in an adaptive environment that tailors the user experience in accordance with various parameters. The user experience is personalized to the user and is adaptively predictive in a manner that attempts to surface content that the user would likely wish to consume. | 06-12-2014 |
20140164985 | Predictive Directional Content Queue - Various embodiments provide an intelligent adaptive content canvas that can enable users to access content, such as photos and videos, and consume the content in an adaptive environment that tailors the user experience in accordance with various parameters. The user experience is personalized to the user and is adaptively predictive in a manner that attempts to surface content that the user would likely wish to consume. | 06-12-2014 |
20140165001 | Adaptive Presentation of Content Based on User Action - Various embodiments provide an intelligent adaptive content canvas that can enable users to access content, such as photos and videos, and consume the content in an adaptive environment that tailors the user experience in accordance with various parameters. The user experience is personalized to the user and is adaptively predictive in a manner that attempts to surface content that the user would likely wish to consume. | 06-12-2014 |
20140207937 | Determination of Internet Access - Internet access or connectivity is determined by sending a request to a third-party service to which connectivity is desired with an application on a client computing device and responsive to receiving a response, attempting to rule out a false positive response from an entity other than the third-party service. | 07-24-2014 |
20150054853 | SYSTEMS AND METHODS OF AUTOMATIC IMAGE SIZING - Systems and methods of automatic image sizing are provided. An image is provided in a first frame within a first layout. A request to display the image in a second frame of a second layout is received, where the second frame is different than the first frame. Region data associated with the image is accessed. The region data corresponds to a prior edit to the image and indicates a portion of the image to be displayed in the second frame. The image is provided in the second frame using the region data such that the portion of the image is displayed in the second frame. | 02-26-2015 |
20150058708 | SYSTEMS AND METHODS OF CHARACTER DIALOG GENERATION - Systems and methods of character dialog generation are provided. A face location for a person displayed within an image is detected. Metadata associated with the image is determined, where the metadata is specific to one or more characteristics of the image. A template relevant to the metadata is accessed, and the template and metadata are used to generate text. A display object with the text is provided, where the display object is displayed on the image over at least a portion of the face location detected. | 02-26-2015 |
20150067482 | METHOD FOR LAYOUT OF SPEECH BUBBLES ASSOCIATED WITH CHARACTERS IN AN IMAGE - A system and method of speech bubbles layout are described. A context module determines geometric constraints of speech bubbles for characters in an image and features of the characters in the image, receives a speech content for one or more characters, and identifies a conversation order of the characters. A layout module generates a layout of the speech bubbles based on the features of the characters, the speech content, and the conversation order. The layout of the speech bubbles is within the geometric constraints of the speech bubbles in the image. | 03-05-2015 |
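The automatic image-sizing entry above (20150054853) reuses region data from a prior edit to decide which portion of an image to show in a differently shaped frame. The core of such a scheme is rectangle arithmetic: grow a crop around the region of interest until it matches the target frame's aspect ratio. The centering and expansion policy below is an illustrative assumption, not the claimed method.

```python
def crop_for_frame(region, frame_w, frame_h):
    """Given region data (x, y, w, h) from a prior edit, return a crop
    rectangle that contains the region and matches the target frame's
    aspect ratio, expanded symmetrically around the region's center."""
    rx, ry, rw, rh = region
    target_aspect = frame_w / frame_h
    region_aspect = rw / rh
    if region_aspect < target_aspect:
        # Region is too tall for the frame: widen the crop.
        cw, ch = rh * target_aspect, rh
    else:
        # Region is too wide for the frame: heighten the crop.
        cw, ch = rw, rw / target_aspect
    cx, cy = rx + rw / 2, ry + rh / 2
    return (cx - cw / 2, cy - ch / 2, cw, ch)

# A 100x200 region shown in a square frame gets widened to 200x200.
print(crop_for_frame((50, 50, 100, 200), 400, 400))   # (0.0, 50.0, 200.0, 200)
```

A production version would additionally clamp the crop to the image bounds; that step is omitted here for brevity.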
Patent application number | Description | Published |
20140153715 | METHODS, APPARATUS, AND ARTICLES OF MANUFACTURE TO ENCODE AUXILIARY DATA INTO TEXT DATA AND METHODS, APPARATUS, AND ARTICLES OF MANUFACTURE TO OBTAIN ENCODED DATA FROM TEXT DATA - Methods, apparatus, and articles of manufacture to encode auxiliary data into text data and methods, apparatus, and articles of manufacture to obtain encoded data from text data are disclosed. An example method to embed auxiliary data into text data includes assigning source data to one of a plurality of groups, the source data comprising text data, identifying a symbol to be added to the source data based on an assigned group of the source data, and generating encoded data by including in the source data a text character representative of the symbol. | 06-05-2014 |
20140157439 | METHODS, APPARATUS, AND ARTICLES OF MANUFACTURE TO ENCODE AUXILIARY DATA INTO RELATIONAL DATABASE KEYS AND METHODS, APPARATUS, AND ARTICLES OF MANUFACTURE TO OBTAIN ENCODED DATA FROM RELATIONAL DATABASE KEYS - Methods, apparatus, and articles of manufacture to encode auxiliary data into relational database keys and methods, apparatus, and articles of manufacture to obtain encoded data from relational database keys are disclosed. An example method to encode auxiliary data into relational data includes generating a code comprising a plurality of groups and representative of auxiliary data, determining incremental values for the plurality of groups, generating a first key based on the code, and generating a subsequent key by modifying the first key based on the value of the first key and the incremental values. | 06-05-2014 |
20140157440 | METHODS, APPARATUS, AND ARTICLES OF MANUFACTURE TO ENCODE AUXILIARY DATA INTO NUMERIC DATA AND METHODS, APPARATUS, AND ARTICLES OF MANUFACTURE TO OBTAIN ENCODED DATA FROM NUMERIC DATA - Methods, apparatus, and articles of manufacture to encode auxiliary data into numeric data and methods, apparatus, and articles of manufacture to obtain encoded data from numeric data are disclosed. An example method to embed auxiliary information into numeric data includes assigning source data to one of a plurality of groups, the source data comprising a numeric value, identifying a symbol to be added to the source data based on an assigned group of the source data, and generating encoded data by selectively modifying the numeric value of the source data to be representative of the symbol. | 06-05-2014 |
20140157441 | METHODS, APPARATUS, AND ARTICLES OF MANUFACTURE TO ENCODE AUXILIARY DATA INTO TEXT DATA AND METHODS, APPARATUS, AND ARTICLES OF MANUFACTURE TO OBTAIN ENCODED DATA FROM TEXT DATA - Methods, apparatus, and articles of manufacture to encode auxiliary data into text data and methods, apparatus, and articles of manufacture to obtain encoded data from text data are disclosed. An example method to embed auxiliary data into text data includes selecting a portion of auxiliary data to be encoded into text data, mapping the portion of auxiliary data to a first set of one or more encoded characters representative of the portion of the auxiliary data, mapping a position of the portion of auxiliary data within the auxiliary data to a second set of one or more encoded characters representative of the portion of the auxiliary data, and generating encoded data by including the first set of encoded characters and the second set of encoded characters in the text data. | 06-05-2014 |
20150236854 | METHODS, APPARATUS, AND ARTICLES OF MANUFACTURE TO ENCODE AUXILIARY DATA INTO TEXT DATA AND METHODS, APPARATUS, AND ARTICLES OF MANUFACTURE TO OBTAIN ENCODED DATA FROM TEXT DATA - Methods, apparatus, and articles of manufacture to encode auxiliary data into text data and methods, apparatus, and articles of manufacture to obtain encoded data from text data are disclosed. An example method includes assigning encoded data units to respective ones of a plurality of groups, the encoded data units including text data, identifying a symbol present in a first one of the encoded data units assigned to a first one of the groups, and outputting auxiliary data embedded in the encoded data units based on the symbol and based on the one of the groups of the first one of the encoded data units. | 08-20-2015 |
20150261943 | METHODS, APPARATUS, AND ARTICLES OF MANUFACTURE TO ENCODE AUXILIARY DATA INTO TEXT DATA AND METHODS, APPARATUS, AND ARTICLES OF MANUFACTURE TO OBTAIN ENCODED DATA FROM TEXT DATA - Methods, apparatus, and articles of manufacture to encode auxiliary data into text data and methods, apparatus, and articles of manufacture to obtain encoded data from text data are disclosed. An example method includes detecting, using a processor, a first symbol present in first text data, the first symbol including a white space character; mapping the first symbol to first data using the processor; detecting, using the processor, a second symbol present in the first text data by detecting a white space character or a flow control character; mapping, using the processor, the second symbol to a first bit position of the first data in encoded data; and determining the encoded data, using the processor, based on placing the first data in the first bit position. | 09-17-2015 |
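The text-data entries in this table turn otherwise invisible choices between white-space characters into carriers for auxiliary bits. A minimal sketch of that idea appears below, using an ordinary space for a 0 bit and a tab for a 1 bit between words; this is a deliberate simplification, not the symbol-and-group scheme the applications claim.

```python
def embed_bits(text, bits):
    """Hide a bit string in the choice of word separator: ' ' = 0, '\t' = 1."""
    words = text.split()
    if len(bits) > len(words) - 1:
        raise ValueError("not enough word gaps to carry the auxiliary data")
    out = []
    for i, word in enumerate(words[:-1]):
        out.append(word)
        out.append("\t" if i < len(bits) and bits[i] == "1" else " ")
    out.append(words[-1])
    return "".join(out)

def extract_bits(encoded, n_bits):
    """Recover the auxiliary bits by scanning the separators in order."""
    seps = [c for c in encoded if c in (" ", "\t")]
    return "".join("1" if c == "\t" else "0" for c in seps[:n_bits])

cover = "the quick brown fox jumps over the lazy dog"
stego = embed_bits(cover, "1011")
print(extract_bits(stego, 4))   # 1011
```

The encoded text renders almost identically to the cover text, which is the property such schemes rely on; robustness to reformatting (which normalizes white space) is the hard part the applications address with grouping and flow-control symbols.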
Patent application number | Description | Published |
20150212689 | LEARNING USER INTERFACE - Provided herein are method, apparatus, and computer program products for facilitating a learning user interface. The interface may be presented as a plurality of dynamic icons representing a plurality of items. The interface may further be facilitated by accessing, by a processor, business data corresponding to the plurality of items. The interface may be facilitated by determining, by the processor, a visual bias for at least one of the plurality of dynamic icons based on the business data corresponding to the plurality of items and may be facilitated by applying, via the interface, the visual bias to the at least one of the plurality of dynamic icons. | 07-30-2015 |
20150213357 | LEARNING USER INTERFACE - Provided herein are method, apparatus, and computer program products for facilitating a learning user interface. The interface may be presented as a plurality of dynamic icons representing a plurality of items. The interface may be facilitated by receiving, by a processor, a selection indication associated with one item of the plurality of dynamic icons. The interface may be facilitated by determining, via the processor, at least one suggested item of the plurality of items based on the selection indication. The interface may also be facilitated by determining a visual bias for at least one suggested dynamic icon representing the at least one suggested item relative to at least one secondary dynamic icon and may be facilitated by applying the visual bias, via the interface, to the at least one suggested dynamic icon. | 07-30-2015 |
20150213545 | LEARNING USER INTERFACE - Provided herein are method, apparatus, and computer program products for facilitating a learning user interface. The interface may be presented as a plurality of dynamic icons representing a plurality of items. The plurality of dynamic icons may include at least one suggested dynamic icon representing at least one suggested item of the plurality of items and at least one secondary dynamic icon representing a secondary item of the plurality of items. The interface may be facilitated by determining, via a processor, a visual bias for the at least one suggested dynamic icon relative to the at least one secondary dynamic icon. The interface may be facilitated by applying the visual bias, via the interface, to the at least one suggested dynamic icon. | 07-30-2015 |
20150213546 | LEARNING USER INTERFACE - Provided herein are method, apparatus, and computer program products for facilitating a learning user interface. The interface may be presented as a plurality of dynamic icons representing a plurality of items. The plurality of dynamic icons may include at least one suggested dynamic icon representing at least one suggested item of the plurality of items and at least one unsuggested or secondary dynamic icon representing an unsuggested or secondary item of the plurality of items. The interface may be facilitated by applying a visual bias for the at least one suggested dynamic icon relative to the at least one unsuggested or secondary dynamic icon. | 07-30-2015 |
20150213547 | LEARNING USER INTERFACE - Provided herein are method, apparatus, and computer program products for facilitating a learning user interface. The interface may be presented as a plurality of dynamic icons representing a plurality of items. The interface may further be facilitated by receiving a profile identifier and by accessing, via a processor, profile data associated with the profile identifier. The interface may be facilitated by determining, via the processor, a visual bias for at least one of the dynamic icons relative to another of the dynamic icons based on the profile data and may be facilitated by applying the visual bias, via the interface, to the at least one of the dynamic icons. Multiple interfaces may be applied to the same or different screens. | 07-30-2015 |
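Each entry in this family applies a "visual bias" to suggested dynamic icons relative to secondary ones, derived from business or profile data. One concrete reading of a visual bias is a per-icon scale factor interpolated from item usage counts, sketched below; the counts, item names, and scaling range are illustrative assumptions.

```python
def visual_bias(usage_counts, min_scale=1.0, max_scale=1.5):
    """Map each item's usage count to an icon scale factor: the most-used
    item gets max_scale, the least-used gets min_scale, others interpolate."""
    lo, hi = min(usage_counts.values()), max(usage_counts.values())
    span = (hi - lo) or 1   # avoid division by zero when all counts match
    return {
        item: round(min_scale + (count - lo) / span * (max_scale - min_scale), 3)
        for item, count in usage_counts.items()
    }

biases = visual_bias({"mail": 40, "maps": 10, "music": 25})
print(biases["mail"], biases["maps"], biases["music"])   # 1.5 1.0 1.25
```

The same shape of function could drive other biases the applications mention, such as brightness or ordering, by swapping the output range.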
Patent application number | Description | Published |
20090041381 | Method and Apparatus for Radiance Processing by Demultiplexing in the Frequency Domain - Method and apparatus for radiance processing by demultiplexing in the frequency domain. A frequency domain demultiplexing module obtains a radiance image captured with a lens-based radiance camera. The image includes optically mixed spatial and angular frequency components of light from a scene. The module performs frequency domain demultiplexing on the radiance image to generate multiple parallax views of the scene. The method may extract multiple slices at different angular frequencies from a Fourier transform of the radiance image, apply a Fourier transform to each of the multiple slices to generate intermediate images, stack the intermediate images to form a 3- or 4-dimensional image, apply an inverse Fourier transform along angular dimension(s) of the 3- or 4-dimensional image, and unstack the transformed 3- or 4-dimensional image to obtain the multiple parallax views. During the method, phase correction may be performed to determine the centers of the intermediate images. | 02-12-2009 |
20090041448 | Method and Apparatus for Radiance Capture by Multiplexing in the Frequency Domain - An external mask-based radiance camera may be based on an external, non-refractive mask located in front of the main camera lens. The mask modulates, but does not refract, light. The camera multiplexes radiance in the frequency domain by optically mixing different spatial and angular frequency components of light. The mask may, for example, be a mesh of opaque linear elements, which collectively form a grid, an opaque medium with transparent openings, such as circles, or a pinhole mask. Other types of masks may be used. Light may be modulated by the mask and received at the main lens of a camera. The main lens may be focused on a plane between the mask and the main lens. The received light is refracted by the main lens onto a photosensor of the camera. The photosensor may capture the received light to generate a radiance image of the scene. | 02-12-2009 |
20090102956 | Fast Computational Camera Based On Two Arrays of Lenses - Method and apparatus for a fast (low F/number) computational camera that incorporates two arrays of lenses. The arrays include a lenslet array in front of a photosensor and an objective lens array of two or more lenses. Each lens in the objective lens array captures light from a subject. Each lenslet in the lenslet array captures light from each objective lens and separates the captured light to project microimages corresponding to the objective lenses on a region of the photosensor under the lenslet. Thus, a plurality of microimages are projected onto and captured by the photosensor. The captured microimages may be processed in accordance with the geometry of the objective lenses to align the microimages to generate a final image. One or more other algorithms may be applied to the image data in accordance with radiance information captured by the camera, such as automatic refocusing of an out-of-focus image. | 04-23-2009 |
20090185801 | Methods and Apparatus for Full-Resolution Light-Field Capture and Rendering - Method and apparatus for full-resolution light-field capture and rendering. A radiance camera is described in which the microlenses in a microlens array are focused on the image plane of the main lens instead of on the main lens, as in conventional plenoptic cameras. The microlens array may be located at distances greater than f from the photosensor, where f is the focal length of the microlenses. Radiance cameras in which the distance of the microlens array from the photosensor is adjustable, and in which other characteristics of the camera are adjustable, are described. Digital and film embodiments of the radiance camera are described. A full-resolution light-field rendering method may be applied to light-fields captured by a radiance camera to render higher-resolution output images than are possible with conventional plenoptic cameras and rendering methods. | 07-23-2009 |
20090268970 | Method and Apparatus for Block-Based Compression of Light-field Images - A method and apparatus for the block-based compression of light-field images. Light-field images may be preprocessed by a preprocessing module into a format that is compatible with the blocking scheme of a block-based compression technique, for example JPEG. The compression technique is then used to compress the preprocessed light-field images. The light-field preprocessing module reshapes the angular data in a captured light-field image into shapes compatible with the blocking scheme of the compression technique so that blocking artifacts of block-based compression are not introduced in the final compressed image. Embodiments may produce compressed 2D images for which no specific light-field image viewer is needed to preview the full light-field image. Full light-field information is contained in one compressed 2D image. | 10-29-2009 |
20090295829 | Methods and Apparatus for Full-Resolution Light-Field Capture and Rendering - Method and apparatus for full-resolution light-field capture and rendering. A radiance camera is described in which the microlenses in a microlens array are focused on the image plane of the main lens instead of on the main lens, as in conventional plenoptic cameras. The microlens array may be located at distances greater than f from the photosensor, where f is the focal length of the microlenses. Radiance cameras in which the distance of the microlens array from the photosensor is adjustable, and in which other characteristics of the camera are adjustable, are described. Digital and film embodiments of the radiance camera are described. A full-resolution light-field rendering method may be applied to flats captured by a radiance camera to render higher-resolution output images than are possible with conventional plenoptic cameras and rendering methods. | 12-03-2009 |
20100020187 | PLENOPTIC CAMERA - One embodiment of the present invention provides a plenoptic camera which captures information about the direction distribution of light rays entering the camera. Like a conventional camera, this plenoptic camera includes a main lens which receives light from objects in an object field and directs the received light onto an image plane of the camera. It also includes a photodetector array located at the image plane of the camera, which captures the received light to produce an image. However, unlike a conventional camera, the plenoptic camera additionally includes an array of optical elements located between the object field and the main lens. Each optical element in this array receives light from the object field from a different angle than the other optical elements in the array, and consequently directs a different view of the object field into the main lens. In this way, the photodetector array receives a different view of the object field from each optical element in the array. | 01-28-2010 |
20110211824 | Methods and Apparatus for Full-Resolution Light-Field Capture and Rendering - Method and apparatus for full-resolution light-field capture and rendering. A radiance camera is described in which the microlenses in a microlens array are focused on the image plane of the main lens instead of on the main lens, as in conventional plenoptic cameras. The microlens array may be located at distances greater than f from the photosensor, where f is the focal length of the microlenses. Radiance cameras in which the distance of the microlens array from the photosensor is adjustable, and in which other characteristics of the camera are adjustable, are described. Digital and film embodiments of the radiance camera are described. A full-resolution light-field rendering method may be applied to light-fields captured by a radiance camera to render higher-resolution output images than are possible with conventional plenoptic cameras and rendering methods. | 09-01-2011 |
20110305447 | METHOD AND APPARATUS FOR RADIANCE CAPTURE BY MULTIPLEXING IN THE FREQUENCY DOMAIN - An external mask-based radiance camera may be based on an external, non-refractive mask located in front of the main camera lens. The mask modulates, but does not refract, light. The camera multiplexes radiance in the frequency domain by optically mixing different spatial and angular frequency components of light. The mask may, for example, be a mesh of opaque linear elements, which collectively form a grid, an opaque medium with transparent openings, such as circles, or a pinhole mask. Other types of masks may be used. Light may be modulated by the mask and received at the main lens of a camera. The main lens may be focused on a plane between the mask and the main lens. The received light is refracted by the main lens onto a photosensor of the camera. The photosensor may capture the received light to generate a radiance image of the scene. | 12-15-2011 |
20120177356 | Methods and Apparatus for Full-Resolution Light-Field Capture and Rendering - Method and apparatus for full-resolution light-field capture and rendering. A radiance camera is described in which the microlenses in a microlens array are focused on the image plane of the main lens instead of on the main lens, as in conventional plenoptic cameras. The microlens array may be located at distances greater than f from the photosensor, where f is the focal length of the microlenses. Radiance cameras in which the distance of the microlens array from the photosensor is adjustable, and in which other characteristics of the camera are adjustable, are described. Digital and film embodiments of the radiance camera are described. A full-resolution light-field rendering method may be applied to light-fields captured by a radiance camera to render higher-resolution output images than are possible with conventional plenoptic cameras and rendering methods. | 07-12-2012 |
20120183232 | Method and Apparatus for Block-Based Compression of Light-field Images - A method and apparatus for the block-based compression of light-field images. Light-field images may be preprocessed by a preprocessing module into a format that is compatible with the blocking scheme of a block-based compression technique, for example JPEG. The compression technique is then used to compress the preprocessed light-field images. The light-field preprocessing module reshapes the angular data in a captured light-field image into shapes compatible with the blocking scheme of the compression technique so that blocking artifacts of block-based compression are not introduced in the final compressed image. Embodiments may produce compressed 2D images for which no specific light-field image viewer is needed to preview the full light-field image. Full light-field information is contained in one compressed 2D image. | 07-19-2012 |
20120229679 | Methods and Apparatus for Full-Resolution Light-Field Capture and Rendering - Method and apparatus for full-resolution light-field capture and rendering. A radiance camera is described in which the microlenses in a microlens array are focused on the image plane of the main lens instead of on the main lens, as in conventional plenoptic cameras. The microlens array may be located at distances greater than f from the photosensor, where f is the focal length of the microlenses. Radiance cameras in which the distance of the microlens array from the photosensor is adjustable, and in which other characteristics of the camera are adjustable, are described. Digital and film embodiments of the radiance camera are described. A full-resolution light-field rendering method may be applied to flats captured by a radiance camera to render higher-resolution output images than are possible with conventional plenoptic cameras and rendering methods. | 09-13-2012 |
20120281072 | Focused Plenoptic Camera Employing Different Apertures or Filtering at Different Microlenses - Methods and apparatus for capturing and rendering images with focused plenoptic cameras employing different filtering at different microlenses. In a focused plenoptic camera, the main lens creates an image at the focal plane. That image is re-imaged on the sensor multiple times by an array of microlenses. Different filters that provide different levels and/or types of filtering may be combined with different ones of the microlenses. A flat captured with the camera includes multiple microimages captured according to the different filters. Multiple images may be assembled from the microimages, with each image assembled from microimages captured using a different filter. A final image may be generated by appropriately combining the images assembled from the microimages. Alternatively, a final image, or multiple images, may be assembled from the microimages by first combining the microimages and then assembling the combined microimages to produce one or more output images. | 11-08-2012 |
20130120356 | Methods, Apparatus, and Computer-Readable Storage Media for Depth-Based Rendering of Focused Plenoptic Camera Data - Methods, apparatus, and computer-readable storage media for rendering focused plenoptic camera data. A depth-based rendering technique is described that estimates depth at each microimage and then applies that depth to determine a position in the input flat from which to read a value to be assigned to a given point in the output image. The techniques may be implemented according to parallel processing technology that renders multiple points of the output image in parallel. In at least some embodiments, the parallel processing technology is graphical processing unit (GPU) technology. | 05-16-2013 |
20130120605 | Methods, Apparatus, and Computer-Readable Storage Media for Blended Rendering of Focused Plenoptic Camera Data - Methods, apparatus, and computer-readable storage media for rendering focused plenoptic camera data. A rendering with blending technique is described that blends values from positions in multiple microimages and assigns the blended value to a given point in the output image. A rendering technique that combines depth-based rendering and rendering with blending is also described. Depth-based rendering estimates depth at each microimage and then applies that depth to determine a position in the input flat from which to read a value to be assigned to a given point in the output image. The techniques may be implemented according to parallel processing technology that renders multiple points of the output image in parallel. In at least some embodiments, the parallel processing technology is graphical processing unit (GPU) technology. | 05-16-2013 |
20130121615 | Method and Apparatus for Managing Artifacts in Frequency Domain Processing of Light-Field Images - Various methods and apparatus for removing artifacts in frequency domain processing of light-field images are described. Methods for the reduction or removal of the artifacts are described that include methods that may be applied during frequency domain processing and a method that may be applied during post-processing of resultant angular views. The methods may be implemented in software as, or as part of, a light-field frequency domain processing module. The described methods include an oversampling method to determine the correct centers of slices, a phase multiplication method to determine the correct centers of slices, a method to exclude low-energy slices, and a cosmetic correction method. | 05-16-2013 |
20130127901 | Methods and Apparatus for Calibrating Focused Plenoptic Camera Data - Methods, apparatus, and computer-readable storage media for calibrating focused plenoptic camera data. A calibration technique that does not modify the image data may be applied to raw plenoptic images. Calibration parameters, including but not limited to tilt angle, corner crops, main lens distance from the microlens array, sensor distance from the microlens array, and microimage size, may be specified. Calibration may include scaling down the input texture coordinates for the plenoptic image so that the new coordinate range fits the size of the texture with crops taken into account. These coordinates may be further transformed by one or more of: a scaling matrix, to correct for lens distortion; a rotation, to correct for tilts; and a translation that finalizes the necessary corner crops. A transformation matrix is generated that can be applied to the raw image by radiance processing techniques such as super-resolution techniques. | 05-23-2013 |
20130128030 | Thin Plenoptic Cameras Using Solid Immersion Lenses - Methods and apparatus for capturing and rendering high-quality photographs using relatively small, thin plenoptic cameras. Plenoptic camera technology, in particular focused plenoptic camera technology including but not limited to super-resolution techniques, and other technologies such as solid immersion lens (SIL) technology may be leveraged to provide thin form factor, megapixel resolution cameras suitable for use in mobile devices and other applications. In addition, at least some embodiments of these cameras may also capture radiance, allowing the imaging capabilities provided by plenoptic camera technology to be realized through appropriate rendering techniques. Hemispherical SIL technology, along with multiple main lenses and a mask on the photosensor, may be employed in some thin plenoptic cameras. Other thin cameras may include a layer between hemispherical SILs and the photosensor that effectively implements superhemispherical SIL technology in the camera. | 05-23-2013 |
20130128068 | Methods and Apparatus for Rendering Focused Plenoptic Camera Data using Super-Resolved Demosaicing - A super-resolved demosaicing technique for rendering focused plenoptic camera data performs simultaneous super-resolution and demosaicing. The technique renders a high-resolution output image from a plurality of separate microimages in an input image at a specified depth of focus. For each point on an image plane of the output image, the technique determines a line of projection through the microimages in optical phase space according to the current point and angle of projection determined from the depth of focus. For each microimage, the technique applies a kernel centered at a position on the current microimage intersected by the line of projection to accumulate, from pixels at each microimage covered by the kernel at the respective position, values for each color channel weighted according to the kernel. A value for a pixel at the current point in the output image is computed from the accumulated values for the color channels. | 05-23-2013 |
20130128069 | Methods and Apparatus for Rendering Output Images with Simulated Artistic Effects from Focused Plenoptic Camera Data - Methods, apparatus, and computer-readable storage media for simulating artistic effects in images rendered from plenoptic data. An impressionistic-style artistic effect may be generated in output images of a rendering process by an “impressionist” 4D filter applied to the microimages in a flat captured with focused plenoptic camera technology. Individual pixels are randomly selected from blocks of pixels in the microimages, and only the randomly selected pixels are used to render an output image. The randomly selected pixels are rendered to generate the artistic effect, such as an “impressionistic” effect, in the output image. A rendering technique is applied that samples pixel values from microimages using a thin sampling kernel, for example a thin Gaussian kernel, so that pixel values are sampled only from one or a few of the microimages. | 05-23-2013 |
20130128077 | Thin Plenoptic Cameras Using Microspheres - Methods and apparatus for capturing and rendering high-quality photographs using relatively small, thin plenoptic cameras. Plenoptic camera technology, in particular focused plenoptic camera technology including but not limited to super-resolution techniques, and other technologies such as microsphere technology may be leveraged to provide thin form factor, megapixel resolution cameras suitable for use in mobile devices and other applications. In addition, at least some embodiments of these cameras may also capture radiance, allowing the imaging capabilities provided by plenoptic camera technology to be realized through appropriate rendering techniques. | 05-23-2013 |
20130128081 | Methods and Apparatus for Reducing Plenoptic Camera Artifacts - Methods and apparatus for reducing plenoptic camera artifacts. A first method is based on careful design of the optical system of the focused plenoptic camera to reduce artifacts that result in differences in depth in the microimages. A second method is computational; a focused plenoptic camera rendering algorithm is provided that corrects for artifacts resulting from differences in depth in the microimages. While both the artifact-reducing focused plenoptic camera design and the artifact-reducing rendering algorithm work by themselves to reduce artifacts, the two approaches may be combined. | 05-23-2013 |
20130128087 | Methods and Apparatus for Super-Resolution in Integral Photography - Methods and apparatus for super-resolution in integral photography are described. Several techniques are described that, alone or in combination, may improve the super-resolution process and/or the quality of super-resolved images that may be generated from flats captured with a focused plenoptic camera using a super-resolution algorithm. At least some of these techniques involve modifications to the focused plenoptic camera design. In addition, at least some of these techniques involve modifications to the super-resolution rendering algorithm. The techniques may include techniques for reducing the size of pixels, techniques for shifting pixels relative to each other so that super-resolution is achievable at more or all depths of focus, and techniques for sampling using an appropriate filter or kernel. These techniques may, for example, reduce or eliminate the need to perform deconvolution on a super-resolved image, and may improve super-resolution results and/or increase performance. | 05-23-2013 |
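Several of the entries above (20090185801 and its continuations, and the depth-based rendering entry 20130120356) describe full-resolution rendering of focused plenoptic camera data, in which a patch is extracted from each microimage of the captured flat and the patches are assembled into the output image. A minimal sketch of that patch-tiling idea, assuming a square grid of square microimages and a fixed patch size; the function name and layout are illustrative, not taken from the patents:

```python
import numpy as np

def render_focused_plenoptic(flat, microimage_size, patch_size):
    """Toy full-resolution rendering sketch: crop the central
    patch_size x patch_size pixels from each microimage in the flat
    and tile the patches into the output image."""
    h, w = flat.shape
    ny, nx = h // microimage_size, w // microimage_size
    out = np.zeros((ny * patch_size, nx * patch_size), dtype=flat.dtype)
    off = (microimage_size - patch_size) // 2
    for j in range(ny):
        for i in range(nx):
            micro = flat[j * microimage_size:(j + 1) * microimage_size,
                         i * microimage_size:(i + 1) * microimage_size]
            patch = micro[off:off + patch_size, off:off + patch_size]
            # Each microimage in a focused plenoptic camera is inverted;
            # flipping the patch before tiling restores its orientation.
            out[j * patch_size:(j + 1) * patch_size,
                i * patch_size:(i + 1) * patch_size] = patch[::-1, ::-1]
    return out
```

In practice the patch size is chosen per microimage from estimated depth (the approach of 20130120356), and overlapping patches may be blended (20130120605) rather than tiled as fixed crops.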
20150312549 | GENERATION AND USE OF A 3D RADON IMAGE - Certain aspects relate to systems and techniques for efficiently recording captured plenoptic image data and for rendering images from the captured plenoptic data. The plenoptic image data can be captured by a plenoptic or other light field camera. In some implementations, four-dimensional radiance data can be transformed into three-dimensional data by performing a Radon transform to define the image by planes instead of rays. A resulting Radon image can represent the summed values of energy over each plane. The original three-dimensional luminous density of the scene can be recovered, for example, by performing an inverse Radon transform. Images from different views and/or having different focus can be rendered from the luminous density. | 10-29-2015 |
20150370040 | FOLDED OPTIC ARRAY CAMERA USING REFRACTIVE PRISMS - Aspects relate to a prism array camera having a wide field of view. For example, the prism array camera can use a central refractive prism, for example with multiple surfaces or facets, to split incoming light comprising the target image into multiple portions for capture by the sensors in the array. The prism can have a refractive index of approximately 1.5 or higher, and can be shaped and positioned to reduce chromatic aberration artifacts and increase the FOV of a sensor. In some examples, a negative lens can be incorporated into or attached to a camera-facing surface of the prism to further increase the FOV. | 12-24-2015 |
20150373252 | AUTOFOCUS FOR FOLDED OPTIC ARRAY CAMERAS - Aspects relate to autofocus systems and techniques for an array camera having a low-profile height, for example approximately 4 mm. A voice coil motor (VCM) can be positioned proximate to a folded optic assembly in the array camera to enable vertical motion of a second light directing surface for changing the focal position of the corresponding sensor. A driving member can be positioned within the coil of the VCM to provide vertical movement, and the driving member can be coupled to the second light directing surface, for example by a lever. Accordingly, the movement of the VCM driving member can be transferred to the second light directing surface across a distance, providing autofocus capabilities without increasing the overall height of the array camera. | 12-24-2015 |
20150373262 | MULTI-CAMERA SYSTEM USING FOLDED OPTICS FREE FROM PARALLAX AND TILT ARTIFACTS - Aspects relate to an array camera exhibiting little or no parallax artifacts in captured images. For example, the planes of the central mirror prism of the array camera can intersect at an apex defining the vertical axis of symmetry of the system. The apex can serve as a point of intersection for the optical axes of the sensors in the array. Each sensor in the array “sees” a portion of the image scene using a corresponding facet of the central mirror prism, and accordingly each individual sensor/mirror pair represents only a sub-aperture of the total array camera. The complete array camera has a synthetic aperture generated based on the sum of all individual aperture rays. | 12-24-2015 |
20150373263 | MULTI-CAMERA SYSTEM USING FOLDED OPTICS FREE FROM PARALLAX ARTIFACTS - Aspects relate to an array camera exhibiting little or no parallax artifacts in captured images. For example, the planes of the central mirror surfaces of the array camera can be located at a midpoint along, and orthogonally to, a line between the corresponding camera location and the virtual camera location. Accordingly, the cones of all of the cameras in the array appear as if coming from the virtual camera location after folding by the mirrors. Each sensor in the array “sees” a portion of the image scene using a corresponding facet of the central mirror prism, and accordingly each individual sensor/mirror pair represents only a sub-aperture of the total array camera. The complete array camera has a synthetic aperture generated based on the sum of all individual aperture rays. | 12-24-2015 |
20150373279 | WIDE FIELD OF VIEW ARRAY CAMERA FOR HEMISPHERIC AND SPHERICAL IMAGING - Aspects relate to methods and systems for producing ultra-wide field of view images. In some embodiments, an image capture system for capturing wide field-of-view images comprises an aperture; a central camera positioned to receive light through the aperture, the central camera having an optical axis; a plurality of periphery cameras disposed beside the central camera, arranged around it, and pointed towards a portion of its optical axis; and a plurality of extendible reflectors. The reflectors are configured to move from a first position to a second position and have a mirrored first surface that faces away from the optical axis of the central camera and a second, black surface that faces towards that optical axis. | 12-24-2015 |
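Entry 20150312549 above describes summing energy over planes to form a Radon image and recovering the scene's luminous density with an inverse Radon transform. As a toy illustration of the plane-summation step only, here is a discrete sketch over one axis-aligned family of parallel planes in a voxel grid; the function name and the restriction to axis-aligned planes are assumptions for illustration, whereas the patent integrates over all plane orientations:

```python
import numpy as np

def plane_sums(density, axis):
    """Discrete analogue of one Radon-image slice family: for each
    plane in a family of parallel, axis-aligned planes (slices of a
    voxel grid perpendicular to `axis`), record the total energy on
    that plane."""
    # Summing over the two axes other than `axis` leaves one scalar
    # (the plane's total energy) per slice along `axis`.
    other = tuple(a for a in range(3) if a != axis)
    return density.sum(axis=other)
```

Because every voxel lies on exactly one plane of a given parallel family, the plane sums of any family total the grid's overall energy; the full transform repeats this over many plane orientations before inversion.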