Patent application number | Description | Published |
20130088511 | E-BOOK READER WITH OVERLAYS - An e-book reader application for desktop, laptop, tablet, and other mobile computing devices, as well as web/cloud environments, provides user interactions with overlays that give the reader immediate access to assets referenced on a given page, including equations, figures, references, the index, source code, equation variables, and other materials (such as graphical plots, videos, and slide presentations), using a pop-up window or tool-tip to display the desired asset. The e-book reader application also provides additional tools for authors, such as sharing code and review materials, setting access restrictions, and tagging audio/video presentations, as well as highlighting, bookmarking, and note-taking capabilities for readers. | 04-11-2013 |
20150092076 | Image Capture Accelerator - An image capture accelerator performs accelerated processing of image data. In one embodiment, the image capture accelerator includes accelerator circuitry including a pre-processing engine and a compression engine. The pre-processing engine is configured to perform accelerated processing on received image data, and the compression engine is configured to compress processed image data received from the pre-processing engine. In one embodiment, the image capture accelerator further includes a demultiplexer configured to receive image data captured by an image sensor array implemented within, for example, an image sensor chip. The demultiplexer may output the received image data to an image signal processor when the image data is captured by the image sensor array in a standard capture mode, and may output the received image data to the accelerator circuitry when the image data is captured by the image sensor array in an accelerated capture mode. | 04-02-2015 |
20150256746 | AUTOMATIC GENERATION OF VIDEO FROM SPHERICAL CONTENT USING AUDIO/VISUAL ANALYSIS - A spherical content capture system captures spherical video content. A spherical video sharing platform enables users to share the captured spherical content and enables users to access spherical content shared by other users. In one embodiment, captured metadata or video/audio processing is used to identify content relevant to a particular user based on time and location information. The platform can then generate an output video from one or more shared spherical content files relevant to the user. The output video may include a non-spherical reduced field of view such as those commonly associated with conventional camera systems. Particularly, relevant sub-frames having a reduced field of view may be extracted from each frame of spherical video to generate an output video that tracks a particular individual or object of interest. | 09-10-2015 |
20150256808 | GENERATION OF VIDEO FROM SPHERICAL CONTENT USING EDIT MAPS - A spherical content capture system captures spherical video content. A spherical video sharing platform enables users to share the captured spherical content and enables users to access spherical content shared by other users. In one embodiment, captured metadata or video/audio processing is used to identify content relevant to a particular user based on time and location information. The platform can then generate an output video from one or more shared spherical content files relevant to the user. The output video may include a non-spherical reduced field of view such as those commonly associated with conventional camera systems. Particularly, relevant sub-frames having a reduced field of view may be extracted from each frame of spherical video to generate an output video that tracks a particular individual or object of interest. | 09-10-2015 |
20160005435 | AUTOMATIC GENERATION OF VIDEO AND DIRECTIONAL AUDIO FROM SPHERICAL CONTENT - A spherical content capture system captures spherical video and audio content. In one embodiment, captured metadata or video/audio processing is used to identify content relevant to a particular user based on time and location information. The platform can then generate an output video from one or more shared spherical content files relevant to the user. The output video may include a non-spherical reduced field of view such as those commonly associated with conventional camera systems. Particularly, relevant sub-frames having a reduced field of view may be extracted from each frame of spherical video to generate an output video that tracks a particular individual or object of interest. For each sub-frame, a corresponding portion of an audio track is generated that includes a directional audio signal having a directionality based on the selected sub-frame. | 01-07-2016 |
20160029004 | Image Blur Based on 3D Depth Information - Blurring is simulated in post-processing for captured images. A 3D image is received from a 3D camera, and depth information in the 3D image is used to determine the relative distances of objects in the image. One object is chosen as the subject of the image, and an additional object in the image is identified. Image blur is applied to the identified additional object based on the distance between the 3D camera and the subject object, the distance between the subject object and the additional object, and a virtual focal length and virtual f-number. | 01-28-2016 |
20160055381 | Scene and Activity Identification in Video Summary Generation Based on Motion Detected in a Video - Video and corresponding metadata are accessed. Events of interest within the video are identified based on the corresponding metadata, and best scenes are identified based on the identified events of interest. In one example, best scenes are identified based on the motion values associated with frames or portions of a frame of a video. Motion values are determined for each frame, and portions of the video including frames with the most motion are identified as best scenes. Best scenes may also be identified based on the motion profile of a video, which is a measure of global or local motion within frames throughout the video. For example, best scenes are identified from portions of the video including steady global motion. A video summary can be generated including one or more of the identified best scenes. | 02-25-2016 |
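The depth-based blur described in application 20160029004 can be illustrated with a thin-lens circle-of-confusion calculation. This is a minimal sketch under assumptions, not the patented method: the function name `blur_radius` and the use of the standard thin-lens formula are assumptions. The abstract states only that blur depends on the camera-to-subject distance, the subject-to-object distance, and a virtual focal length and f-number.

```python
def blur_radius(d_object, d_focus, focal_length, f_number):
    """Approximate circle-of-confusion diameter (in the same units as
    the distances) for an object at d_object when a virtual lens with
    the given focal length and f-number is focused at d_focus.

    Assumption: the standard thin-lens circle-of-confusion model; the
    abstract does not specify the blur formula."""
    aperture = focal_length / f_number  # virtual aperture diameter
    # Objects off the focal plane project a blur disc whose size grows
    # with their relative distance from the plane of focus.
    return (aperture
            * abs(d_object - d_focus) / d_object
            * focal_length / (d_focus - focal_length))
```

An object lying exactly on the focal plane gets zero blur, and blur grows as the object moves farther from the plane of focus, which matches the abstract's description of blurring an identified additional object by its distance from the subject object.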
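The motion-value approach in application 20160055381 (identify the portion of the video with the most motion as a "best scene") can be sketched as a sliding-window maximum over per-frame motion values. This is an illustrative sketch, not the claimed implementation; the function name `best_scene`, the fixed window length, and summed motion as the scoring criterion are all assumptions.

```python
def best_scene(motion_values, window):
    """Return (start, end) frame indices of the contiguous run of
    `window` frames whose total motion value is largest.

    motion_values: one motion score per frame (e.g., from frame
    differencing or optical flow -- the abstract does not specify how
    motion values are computed)."""
    if window > len(motion_values):
        raise ValueError("window longer than video")
    current = sum(motion_values[:window])  # motion in the first window
    best_sum, best_start = current, 0
    for start in range(1, len(motion_values) - window + 1):
        # Slide the window one frame: drop the old frame, add the new one.
        current += motion_values[start + window - 1] - motion_values[start - 1]
        if current > best_sum:
            best_sum, best_start = current, start
    return best_start, best_start + window
```

For example, `best_scene([0, 1, 5, 6, 1, 0], 2)` returns `(2, 4)`, the two-frame span with the most motion; a video summary could then be assembled by concatenating the spans returned for several such windows.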