Lucasfilm Entertainment Company Ltd. Patent applications
Patent application number | Title | Published |
20160093112 | DEEP IMAGE IDENTIFIERS - A method may include receiving a plurality of objects from a 3-D virtual scene. The plurality of objects may be arranged in a hierarchy. The method may also include generating a plurality of identifiers for the plurality of objects. The plurality of identifiers may include a first identifier for a first object in the plurality of objects, and the first identifier may be generated based on a position of the first object in the hierarchy. The method may additionally include performing a rendering operation on the plurality of objects to generate a deep image. The deep image may include a plurality of samples that correspond to the first object. The method may further include propagating the plurality of identifiers through the rendering operation such that each of the plurality of samples in the deep image that correspond to the first object is associated with the first identifier. | 03-31-2016 |
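The abstract above boils down to deriving each object's identifier from its path in the scene hierarchy and stamping that identifier onto every deep-image sample the object contributes. Below is a minimal Python sketch of that idea; the `Node` class, `assign_ids` helper, and path-string encoding are illustrative assumptions, not details taken from the application.

```python
# Illustrative sketch: derive per-object identifiers from a scene
# hierarchy and attach them to the deep-image samples each object
# produces. Names and data layout are assumptions, not the patent's.

class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def assign_ids(node, path=()):
    """Map each node name to an identifier encoding its hierarchy position."""
    ids = {node.name: "/".join(path + (node.name,))}
    for child in node.children:
        ids.update(assign_ids(child, path + (node.name,)))
    return ids

# Toy "render": every sample a node generates carries that node's ID,
# so compositing tools can later select samples per object.
scene = Node("root", [Node("robot", [Node("arm")]), Node("floor")])
ids = assign_ids(scene)
samples = [
    {"pixel": (4, 7), "depth": 1.2, "object_id": ids["arm"]},    # "root/robot/arm"
    {"pixel": (4, 7), "depth": 3.0, "object_id": ids["floor"]},  # "root/floor"
]
print(samples)
```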
20160078675 | STYLING OF COMPUTER GRAPHICS HAIR THROUGH VOLUMETRIC FLOW DYNAMICS - Methods are disclosed for the computer generation of data for images that include hair, fur, or other strand-like material. A volume for the hair is specified, having a plurality of surfaces. A fluid flow simulation is performed within the volume, with a first surface of the volume being a source area through which fluid is simulated to enter the volume, and a second surface being an exit surface through which fluid is simulated to exit the volume. The fluid flow simulation may be used to produce fluid flow lines, such as from a velocity vector field for the fluid. Fluid flow lines are selected, and image data of hairs that follow the fluid flow lines is generated. Other embodiments include generating animation sequences by generating images wherein the volume and surfaces vary between frames. | 03-17-2016 |
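As a rough illustration of the flow-line idea in this abstract, the sketch below grows a single hair guide curve by integrating a velocity field from a root point. The analytic field and forward-Euler stepper are stand-ins for an actual fluid simulation; all names are hypothetical.

```python
# Illustrative sketch: grow a hair guide curve by tracing a streamline
# of a velocity field, as the abstract suggests. The field here is a
# toy analytic function; a real setup would sample a fluid solve.

import math

def velocity(p):
    """Toy velocity field: mostly upward flow with a gentle swirl."""
    x, y, z = p
    return (0.3 * math.sin(z), 1.0, 0.3 * math.cos(x))

def trace_streamline(seed, step=0.05, n_steps=100):
    """Forward-Euler integration of the field from a root position."""
    curve = [seed]
    p = seed
    for _ in range(n_steps):
        v = velocity(p)
        p = tuple(pi + step * vi for pi, vi in zip(p, v))
        curve.append(p)
    return curve

# One guide hair rooted on the source surface of the volume.
guide = trace_streamline((0.0, 0.0, 0.0))
print(len(guide), guide[-1])
```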
20150317765 | DEEP IMAGE DATA COMPRESSION - A method of compressing a deep image representation may include receiving a deep image, where the deep image may include multiple pixels, and where each pixel in the deep image may include multiple samples. The method may also include compressing the deep image by combining samples in each pixel that are associated with the same primitives. This process may be repeated on a pixel-by-pixel basis. Some embodiments may use primitive IDs to match pixels to primitives through the rendering and compositing process. | 11-05-2015 |
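The core compression move described here, merging a pixel's samples that share a primitive ID, can be sketched as follows. The sample layout and the front-to-back alpha accumulation inside each primitive group are assumptions made for the demo, not details from the application.

```python
# Illustrative sketch: within one pixel, merge all samples that share
# a primitive ID into a single sample, shrinking the deep image.

from collections import defaultdict

def compress_pixel(samples):
    """samples: list of dicts with 'prim_id', 'depth', 'color', 'alpha'."""
    by_prim = defaultdict(list)
    for s in sorted(samples, key=lambda s: s["depth"]):
        by_prim[s["prim_id"]].append(s)
    merged = []
    for prim_id, group in by_prim.items():
        alpha, color = 0.0, [0.0, 0.0, 0.0]
        for s in group:  # front-to-back accumulation within the primitive
            w = s["alpha"] * (1.0 - alpha)
            color = [c + w * sc for c, sc in zip(color, s["color"])]
            alpha += w
        merged.append({"prim_id": prim_id,
                       "depth": group[0]["depth"],  # keep nearest depth
                       "color": color, "alpha": alpha})
    return merged

pixel = [
    {"prim_id": 7, "depth": 1.0, "color": [1, 0, 0], "alpha": 0.5},
    {"prim_id": 7, "depth": 1.2, "color": [1, 0, 0], "alpha": 0.5},
    {"prim_id": 9, "depth": 2.0, "color": [0, 0, 1], "alpha": 1.0},
]
print(compress_pixel(pixel))  # two merged samples instead of three
```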
20150288956 | CALIBRATION TARGET FOR VIDEO PROCESSING - An apparatus is disclosed which may serve as a target for calibrating a camera. The apparatus comprises one or more planar surfaces. The apparatus includes at least one fiducial marking on a planar surface. The planar markings on the apparatus are all distinguishable from one another. | 10-08-2015 |
20150288951 | AUTOMATED CAMERA CALIBRATION METHODS AND SYSTEMS - Methods and systems are disclosed for calibrating a camera using a calibration target apparatus that contains at least one fiducial marking on a planar surface. The planar markings on the apparatus are all distinguishable from one another. Parameters of the camera are inferred from at least one image of the calibration target apparatus. In some embodiments, pixel coordinates of identified fiducial markings in an image are used with geometric knowledge of the apparatus to calculate camera parameters. | 10-08-2015 |
20150235407 | POST-RENDER MOTION BLUR - A method of applying a post-render motion blur to an object may include receiving a first image of the object. The first image need not be motion blurred, and the first image may include a first pixel and rendered color information for the first pixel. The method may also include receiving a second image of the object. The second image may be motion blurred, and the second image may include a second pixel and a location of the second pixel before the second image was motion blurred. Areas that are occluded in the second image may be identified and colored using a third image rendering only those areas. Unoccluded areas of the second image may be colored using information from the first image. | 08-20-2015 |
20150215623 | DYNAMIC LIGHTING CAPTURE AND RECONSTRUCTION - Systems and techniques for dynamically capturing and reconstructing lighting are provided. The systems and techniques may be based on a stream of images capturing the lighting within an environment as a scene is shot. Reconstructed lighting data may be used to illuminate a character in a computer-generated environment as the scene is shot. For example, a method may include receiving a stream of images representing lighting of a physical environment. The method may further include compressing the stream of images to reduce an amount of data used in reconstructing the lighting of the physical environment and may further include outputting the compressed stream of images for reconstructing the lighting of the physical environment using the compressed stream, the reconstructed lighting being used to render a computer-generated environment. | 07-30-2015 |
20150130801 | CONTROLLING A VIRTUAL CAMERA - Among other aspects, a computer-implemented method includes: receiving at least one command in a computer system from a handheld device; positioning a virtual camera and controlling a virtual scene according to the command; and in response to the command, generating an output to the handheld device for displaying a view of the virtual scene as controlled on a display of the handheld device, the view captured by the virtual camera as positioned. | 05-14-2015 |
20150084991 | POST-RENDER MOTION BLUR - A method of applying a post-render motion blur to an object may include receiving a first image of the object. The first image need not be motion blurred, and the first image may include a first pixel and rendered color information for the first pixel. The method may also include receiving a second image of the object. The second image may be motion blurred, and the second image may include a second pixel and a location of the second pixel before the second image was motion blurred. The method may additionally include locating the first pixel in the first image using the location of the second pixel before the second image was motion blurred. The method may further include coloring the second pixel using the rendered color information for the first pixel. | 03-26-2015 |
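A toy version of the color-transfer step this abstract describes: each pixel of the blurred render carries the location it occupied before blurring, and that location indexes into the sharp render to fetch the final color. The array layout and the `None` convention for pixels without a stored location are assumptions for the demo.

```python
# Illustrative sketch: color a motion-blurred render by looking up each
# pixel's stored pre-blur position in a sharp render of the same object.

def apply_post_render_blur_colors(sharp, blurred_positions):
    """sharp: 2D grid of rendered colors. blurred_positions: for each
    output pixel, the (row, col) it occupied before blurring, or None."""
    h, w = len(blurred_positions), len(blurred_positions[0])
    out = [[None] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            pos = blurred_positions[r][c]
            if pos is not None:
                pr, pc = pos
                out[r][c] = sharp[pr][pc]  # reuse the rendered color
    return out

sharp = [["red", "red"], ["blue", "blue"]]
positions = [[(0, 0), (0, 1)], [(1, 0), None]]  # None: no pre-blur sample
print(apply_post_render_blur_colors(sharp, positions))
```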
20150084950 | REAL-TIME PERFORMANCE CAPTURE WITH ON-THE-FLY CORRECTIVES - Techniques for facial performance capture using an adaptive model are provided herein. For example, a computer-implemented method may include obtaining a three-dimensional scan of a subject and generating a customized digital model including a set of blendshapes using the three-dimensional scan, each of one or more blendshapes of the set of blendshapes representing at least a portion of a characteristic of the subject. The method may further include receiving input data of the subject, the input data including video data and depth data, tracking body deformations of the subject by fitting the input data using one or more of the blendshapes of the set, and fitting a refined linear model onto the input data using one or more adaptive principal component analysis shapes. | 03-26-2015 |
20150077418 | THREE-DIMENSIONAL MOTION CAPTURE - In one general aspect, a method is described. The method includes generating a positional relationship between one or more support structures having at least one motion capture mark and at least one virtual structure corresponding to geometry of an object to be tracked and positioning the support structures on the object to be tracked. The support structures have sufficient rigidity that, if there are multiple marks, the marks on each support structure maintain substantially fixed distances from each other in response to movement by the object. The method also includes determining an effective quantity of ray traces between one or more camera views and one or more marks on the support structures, and estimating an orientation of the virtual structure by aligning the determined effective quantity of ray traces with a known configuration of marks on the support structures. | 03-19-2015 |
20150062863 | DYNAMIC LIGHTING - A movie set can include light sources each producing a light corresponding to a light channel, at least one high-frame rate camera, and a controller connected to the light sources and camera to synchronize the camera and light sources. The number of light channels can be proportional to the frame rate. For example, if the filming frame rate is 120 frames per second (fps) and the playback frame rate is 24 fps, then 5 light channels can be used. In this example, for every one playback frame, 5 frames were filmed by the high frame rate camera. The controller modulates the light channels such that each of the 5 frames has different lighting characteristics. In post-production, contributions from each of the light channels can be included or excluded from the final frame. An optical flow algorithm can be used to stitch together frames with different dynamic light characteristics. | 03-05-2015 |
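The frame arithmetic in the example works out as 120 fps / 24 fps = 5 capture frames per playback frame, hence 5 light channels. The sketch below demultiplexes a captured frame stream back into those channels; the round-robin channel ordering is an assumption for illustration.

```python
# Illustrative sketch of the channel arithmetic: capture_fps divided by
# playback_fps gives the number of light channels that can be
# time-multiplexed (120 / 24 = 5 in the abstract's example).

def demux_light_channels(frames, capture_fps=120, playback_fps=24):
    n_channels = capture_fps // playback_fps  # 5 in the example
    channels = {ch: [] for ch in range(n_channels)}
    for i, frame in enumerate(frames):
        channels[i % n_channels].append(frame)  # round-robin assumption
    return channels

frames = [f"frame{i}" for i in range(10)]  # two playback frames' worth
for ch, fs in demux_light_channels(frames).items():
    print(ch, fs)
```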
20140313220 | CONSTRAINED VIRTUAL CAMERA CONTROL - A method is described that includes receiving, from a first device, input used to select a first object in a computer-generated environment. The first device has at least two degrees of freedom with which to control the selection of the first object. The method also includes removing, in response to the selection of the first object, at least two degrees of freedom previously available to a second device used to manipulate a second object in the computer-generated environment. The removed degrees of freedom correspond to the at least two degrees of freedom of the first device and specify an orientation of the second object relative to the selected first object. Additionally, the method includes receiving, from the second device, input including movements within the reduced degrees of freedom used to manipulate a position of the second object while maintaining the specified orientation relative to the selected first object. | 10-23-2014 |
20140219499 | VISUAL TRACKING FRAMEWORK - A computer program product tangibly embodied in a computer-readable storage medium includes instructions that when executed by a processor perform a method. The method includes identifying a frame of a video sequence, transforming a model of a region into an initial guess for how the region appears in the frame, performing an exhaustive search of the frame, and performing a plurality of optimization procedures, wherein at least one additional model parameter is taken into account as each subsequent optimization procedure is initiated. A system includes a computer readable storage medium, a graphical user interface, an input device, a model for texture and shape of the region, the model generated using the video sequence and stored in the computer readable storage medium, and a solver component. | 08-07-2014 |
20140192059 | CONTROLLING ANIMATED CHARACTER EXPRESSION - A system includes a computer system capable of representing one or more animated characters. The computer system includes a blendshape manager that combines a plurality of blendshapes to produce the animated character. The computer system also includes an expression manager to respectively adjust one or more control parameters associated with each of the plurality of blendshapes for adjusting an expression of the animated character. The computer system further includes a corrective element manager that applies one or more corrective elements to the combined blendshapes based upon at least one of the control parameters. The one or more applied corrective elements are adjustable based upon one or more of the control parameters absent the introduction of one or more additional control parameters. | 07-10-2014 |
20140168455 | CONTROLLING ROBOTIC MOTION OF CAMERA - Among other disclosed subject matter, a system includes a first camera generating a live image of a scene, the first camera configured for being placed in a plurality of locations by robotic motion. The system includes a handheld device that includes a display device for continuously presenting the live image, wherein movement of the handheld device causes the handheld device to generate an output that controls the robotic motion. | 06-19-2014 |
20140147014 | GEOMETRY TRACKING - A method of motion capture may include accessing a 3D model of a subject, and associating the 3D model of the subject with a 2D representation of the subject in a plurality of frames. The method may also include identifying a change to the 2D representation of the subject between two or more of the plurality of frames, and deforming the 3D model in a virtual 3D space. In some embodiments, the deforming may be based on the identified change to the 2D representation and at least one constraint restricting how the 3D model can be deformed. | 05-29-2014 |
20140085315 | COMBINING SHAPES FOR ANIMATION - A system includes a computing device that includes a memory for storing instructions. The computing device also includes a processor configured to execute the instructions to perform a method that includes combining, in a nonlinear manner, a first set of vertex displacements that represent the difference between a first animated expression and a neutral animated expression with a second set of vertex displacements that represent the difference between a second animated expression and the neutral animated expression. The number of vertices associated with the first set of vertex displacements of the first animated expression is equivalent to the number of vertices associated with the second set of vertex displacements of the second animated expression. | 03-27-2014 |
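To make the nonlinear-combination idea concrete, here is a sketch that sums two equal-length displacement sets and passes each result through a saturating nonlinearity. The tanh-based scaling is purely an assumption for illustration; the abstract does not name the nonlinearity used.

```python
# Illustrative sketch: combine two expressions' vertex displacements
# (deltas from a neutral pose) nonlinearly, here by damping large sums
# with a tanh saturation. The specific nonlinearity is an assumption.

import math

def combine_nonlinear(deltas_a, deltas_b):
    """deltas_*: equal-length lists of (dx, dy, dz) displacements."""
    assert len(deltas_a) == len(deltas_b)  # same vertex count, per abstract
    out = []
    for a, b in zip(deltas_a, deltas_b):
        s = tuple(ai + bi for ai, bi in zip(a, b))
        mag = math.sqrt(sum(c * c for c in s))
        scale = math.tanh(mag) / mag if mag > 1e-9 else 1.0
        out.append(tuple(c * scale for c in s))  # saturate large sums
    return out

smile = [(0.0, 0.5, 0.0), (0.1, 0.4, 0.0)]
frown = [(0.0, -0.3, 0.0), (-0.1, -0.2, 0.0)]
print(combine_nonlinear(smile, frown))
```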
20140002452 | ADJUSTING STEREO IMAGES | 01-02-2014 |
20130346866 | COPYING AN OBJECT IN AN ANIMATION CREATION APPLICATION - A first input is received in an animation creation application having a user interface showing a virtual-space area for first objects, and a timeline area for second objects representing events. To generate the first input, a user presses and holds an input control while a cursor is over one of the first or second objects. It is determined whether the input control is held for at least a predefined duration. If so, a copy of the object is assigned to the cursor, and the copy is subsequently pasted at another location in the user interface upon the input control ceasing to be held after the cursor is moved to the other location. If the input control does not remain held for at least the predefined duration, the copy of the object is not assigned to the cursor. | 12-26-2013 |
20130141427 | Path and Speed Based Character Control - A 3D animation environment that includes an animation object is generated. A movement speed is assigned to the animation object in the 3D animation environment. An animation path containing at least first and second waypoints is generated. An animation sequence is generated by identifying a first section of the animation path connected to the first waypoint. A first animation of the animation object is generated in which the animation object moves along the first section of the path at the movement speed. A spatial gap in the animation path is identified between the first and second waypoints. A second animation of the animation object is generated in which the animation object moves, by keyframe animation, from the first waypoint to the second waypoint. A third animation of the animation object is generated in which the animation object moves along at least a second portion of the path at the movement speed. | 06-06-2013 |
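The three animation phases described above (path-following at the movement speed, keyframed bridging across the gap, then resumed path-following) can be sketched as below. All names are hypothetical, and the path sections are assumed pre-sampled at the movement speed.

```python
# Illustrative sketch: constant-speed motion along path sections, with
# keyframe interpolation bridging the spatial gap between waypoints.

def lerp(p, q, t):
    """Linear interpolation between points p and q, t in [0, 1]."""
    return tuple(pi + t * (qi - pi) for pi, qi in zip(p, q))

def animate(section1, section2, gap_frames=12):
    """Sections are position lists pre-sampled at the movement speed;
    the gap from section1[-1] to section2[0] is keyframed."""
    frames = list(section1)                 # phase 1: follow the path
    start, end = section1[-1], section2[0]
    for i in range(1, gap_frames):          # phase 2: keyframed gap
        frames.append(lerp(start, end, i / gap_frames))
    frames.extend(section2)                 # phase 3: resume on the path
    return frames

sec1 = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]  # first path section
sec2 = [(5.0, 3.0), (6.0, 3.0)]              # second path section
print(animate(sec1, sec2))
```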
20130132835 | Interaction Between 3D Animation and Corresponding Script - Interaction between a 3D animation and a corresponding script includes: displaying a user interface that includes at least a 3D animation area and a script area, the 3D animation area including (i) a 3D view area for creating and playing a 3D animation and (ii) a timeline area for visualizing actions by one or more 3D animation characters, the script area comprising one or more objects representing lines from a script having one or more script characters; receiving a first user input corresponding to a user selecting at least one of the objects from the script area for assignment to a location in the timeline area; generating a timeline object at the location in response to the first user input, the timeline object corresponding to the selected object; and associating audio data with the generated timeline object, the audio data corresponding to a line represented by the selected object. | 05-23-2013 |
20130021343 | Translating Renderman Shading Language Code - The present disclosure includes, among other things, systems, methods and program products for translating RenderMan shading language code. | 01-24-2013 |
20130016876 | SCALE INDEPENDENT TRACKING PATTERN (Inventors: Kevin Wooley, San Francisco, CA; Ronald Mallet, Mill Valley, CA) - In one aspect, a computer-implemented method of motion capture includes tracking the motion of a dynamic object bearing a pattern configured such that a first portion of the pattern is tracked at a first resolution and a second portion of the pattern is tracked at a second resolution. The method further includes causing data representing the motion to be stored to a computer readable medium. | 01-17-2013 |
20120327088 | Editable Character Action User Interfaces - A system includes a computing device that includes a memory configured to store instructions. The computing device also includes a processor configured to execute the instructions to perform a method that includes defining at least one of a location in a virtual scene and a time represented in a timeline as being associated with a performance of an animated character. The method also includes aggregating data that represents actions of the animated character for at least one of the defined location and the defined time. The method also includes presenting a user interface that includes a representation of the aggregated actions. The representation is editable to adjust at least one action included in the aggregation. | 12-27-2012 |
20120320066 | Modifying an Animation Having a Constraint - A computer-implemented method for handling a modification of an animation having a constraint includes detecting a user modification of an animation that involves at least first and second objects, the first object constrained to the second object during a constrained period and non-constrained to the second object during a non-constrained period. The method includes, based on the user modification, selecting one of at least first and second compensation adjustments for the animation based on a compensation policy; and adjusting the animation according to the selected compensation adjustment. | 12-20-2012 |
20120299914 | ACCELERATED SUBSURFACE SCATTERING DETERMINATION FOR RENDERING 3D OBJECTS - The present disclosure includes, among other things, systems, methods, and program products for estimating radiant exitance due to subsurface scattering. For example, one or more aspects of the subject matter described in this disclosure can be embodied in one or more methods that include distributing a plurality of sample points across the surface of a 3D object model to be rendered into a 2D image and determining a solid angle subtended by a first sample point and a second sample point relative to a region on the 3D object model. Depending on the determined solid angle relative to a threshold value, a previously determined subsurface scattering contribution for the region or a newly determined subsurface scattering contribution for the region may selectively be used for rendering a portion of the 2D image. | 11-29-2012 |
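A sketch of the solid-angle test at the heart of this abstract: if a group of sample points subtends less than a threshold solid angle from the shading region, its previously determined aggregate contribution is reused; otherwise a fresh, finer-grained estimate is computed. The cluster layout and all numbers are placeholder assumptions, and the actual diffusion math is omitted.

```python
# Illustrative sketch: reuse a cached subsurface-scattering contribution
# when the subtended solid angle falls below a threshold, else recompute
# from the individual sample points.

import math

def solid_angle(area, distance):
    """Approximate solid angle of a patch of given area at a distance."""
    return area / max(distance * distance, 1e-12)

def scattering_contribution(region, cluster, threshold=0.05):
    d = math.dist(region["pos"], cluster["pos"])
    if solid_angle(cluster["area"], d) < threshold:
        return cluster["cached_contribution"]  # coarse value, reused
    # Otherwise descend to the individual sample points (fresh estimate).
    return sum(p["contribution"] for p in cluster["points"])

cluster = {"pos": (0, 0, 10), "area": 0.5, "cached_contribution": 0.02,
           "points": [{"contribution": 0.012}, {"contribution": 0.011}]}
print(scattering_contribution({"pos": (0, 0, 0)}, cluster))
```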
20120226983 | Copying an Object in an Animation Creation Application - A first input is received in an animation creation application having a user interface showing a virtual-space area for first objects, and a timeline area for second objects representing events. To generate the first input, a user presses and holds an input control while a cursor is over one of the first or second objects. It is determined whether the input control is held for at least a predefined duration. If so, a copy of the object is assigned to the cursor, and the copy is subsequently pasted at another location in the user interface upon the input control ceasing to be held after the cursor is moved to the other location. If the input control does not remain held for at least the predefined duration, the copy of the object is not assigned to the cursor. | 09-06-2012 |
20120218286 | GENERATING A SURFACE REPRESENTATION OF AN ITEM - Among other disclosed subject matter, a computer-implemented method for generating a surface representation of an item includes identifying, for a point on an item in an animation process, at least first and second transformation points corresponding to respective first and second transformations of the point. Each of the first and second transformations represents an influence on a location of the point of respective first and second joints associated with the item. The method includes determining an axis for a cylindrical coordinate system using the first and second transformations. The method includes performing an interpolation of the first and second transformation points in the cylindrical coordinate system to obtain an interpolated point. The method includes recording the interpolated point in a surface representation of the item in the animation process. | 08-30-2012 |
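A minimal sketch of why interpolating in a cylindrical coordinate system helps: blending two transformed points by radius, angle, and height about the joint axis preserves the point's distance from that axis, whereas straight Cartesian blending pulls it inward (the classic collapsing-joint artifact). Fixing the axis to z is a simplification; in the method the axis is determined from the two transformations.

```python
# Illustrative sketch: interpolate two skinning results in cylindrical
# coordinates (radius, angle, height) about a joint axis, here z.

import math

def to_cyl(p):
    x, y, z = p
    return (math.hypot(x, y), math.atan2(y, x), z)

def from_cyl(c):
    r, theta, z = c
    return (r * math.cos(theta), r * math.sin(theta), z)

def interp_cylindrical(p1, p2, w):
    """Blend two transformed points with weight w in cylindrical space."""
    r1, t1, z1 = to_cyl(p1)
    r2, t2, z2 = to_cyl(p2)
    dt = math.atan2(math.sin(t2 - t1), math.cos(t2 - t1))  # shortest arc
    return from_cyl((r1 + w * (r2 - r1), t1 + w * dt, z1 + w * (z2 - z1)))

# A point rotated 90 degrees about the axis: Cartesian blending would
# put the midpoint at radius ~0.71; cylindrical blending keeps radius 1.
p_bind, p_rotated = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
print(interp_cylindrical(p_bind, p_rotated, 0.5))
```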
20120002017 | Three-Dimensional Motion Capture - In one general aspect, a method is described. The method includes generating a positional relationship between one or more support structures having at least one motion capture mark and at least one virtual structure corresponding to geometry of an object to be tracked and positioning the support structures on the object to be tracked. The support structures have sufficient rigidity that, if there are multiple marks, the marks on each support structure maintain substantially fixed distances from each other in response to movement by the object. The method also includes determining an effective quantity of ray traces between one or more camera views and one or more marks on the support structures, and estimating an orientation of the virtual structure by aligning the determined effective quantity of ray traces with a known configuration of marks on the support structures. | 01-05-2012 |
20080231640 | Animation Retargeting - Systems and methods are described, which create a mapping from a space of a source object (e.g., source facial expressions) to a space of a target object (e.g., target facial expressions). In certain implementations, the mapping is learned based on a training set composed of corresponding shapes (e.g., facial expressions) in each space. The user can create the training set by selecting expressions from, for example, captured source performance data, and by sculpting corresponding target expressions. Additional target shapes (e.g., target facial expressions) can be interpolated and extrapolated from the shapes in the training set to generate corresponding shapes for potential source shapes (e.g., facial expressions). | 09-25-2008 |
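As a stand-in for the learned mapping this abstract describes, the sketch below fits a linear least-squares map from source-shape vectors to target-shape vectors on a tiny training set and applies it to a new source shape. The abstract does not commit to a linear model, so this is illustrative only; the toy 3-component "shapes" stand in for flattened expression meshes.

```python
# Illustrative sketch: learn a source-to-target shape mapping from a
# small training set of corresponding shapes, then apply it to a new
# source expression. A linear least-squares map is an assumption.

import numpy as np

# Training pairs: each row is a flattened shape vector (toy size 3).
source_train = np.array([[0.0, 0.0, 0.0],
                         [1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0],
                         [0.0, 0.0, 1.0]])
target_train = np.array([[0.0, 0.0, 0.0],
                         [0.8, 0.1, 0.0],
                         [0.0, 1.2, 0.0],
                         [0.1, 0.0, 0.9]])

# Solve for M minimizing ||source_train @ M - target_train||.
M, *_ = np.linalg.lstsq(source_train, target_train, rcond=None)

new_source = np.array([0.5, 0.5, 0.0])  # an unseen source expression
print(new_source @ M)                   # mapped/interpolated target shape
```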