Entries
Document | Title | Date |
20080204457 | Rig Baking - Model components can be used to pose character models to create a variety of realistic and artistic effects. An embodiment of the invention analyzes the behavior of a model component to determine a statistical representation of the model component that closely approximates the output of the model component. Because statistical representations of model components execute faster than the original model components, the model components used to pose a character model can be replaced at animation time by equivalent statistical representations to improve animation performance. The statistical representation of the model component is derived from an analysis of the character model manipulated through a set of representative training poses. The statistical representation of the model component comprises a weighted combination of posed frame positions added to a set of posing errors controlled by nonlinear combinations of the animation variables. | 08-28-2008 |
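A minimal sketch of the baking idea in the entry above: fit a least-squares model from animation-variable features to a slow rig component's vertex output over representative training poses, then use the fit in its place. The linear-plus-quadratic feature basis stands in for the patent's "nonlinear combinations of the animation variables" and is an assumption, as are all names here.

```python
import numpy as np

def features(avars):
    # Linear terms plus pairwise products: a simple nonlinear basis (assumed).
    a = np.asarray(avars, dtype=float)
    quad = np.outer(a, a)[np.triu_indices(len(a))]
    return np.concatenate(([1.0], a, quad))

def bake_component(slow_component, training_avars):
    # Least-squares map from avar features to the component's flattened vertex output.
    X = np.array([features(a) for a in training_avars])
    Y = np.array([np.ravel(slow_component(a)) for a in training_avars])
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return lambda avars: features(avars) @ W

# Toy "rig component": poses 3 vertices from 2 animation variables.
rig = lambda a: np.array([[a[0], a[1], 0.0],
                          [a[0] * a[1], 0.0, a[1]],
                          [0.0, a[0] ** 2, 1.0]])
train = np.random.default_rng(0).uniform(-1, 1, size=(200, 2))
baked = bake_component(rig, train)
print(np.allclose(baked([0.3, -0.5]), rig([0.3, -0.5]).ravel()))  # True
```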
20080218523 | SYSTEM AND METHOD FOR NAVIGATION OF DISPLAY DATA - Navigating display data (e.g., large documents) on an electronic display is described in which a first set of visual indicators is layered over the portion of the display data shown on the electronic display. The user selects a particular navigation task, and the selection signal is received by the navigation application. The navigation application determines a section of interest based on the particular navigation task selected and layers a second set of visual indicators over the portion of the display data defined by all of the sections other than the section of interest. The navigation application then animates movement of the display data and both sets of visual indicators on the electronic display according to the particular navigation task selected. | 09-11-2008 |
20080259085 | Method for Animating an Image Using Speech Data - A method for animating an image is useful for animating avatars using real-time speech data. According to one aspect, the method includes identifying an upper facial part and a lower facial part of the image (step | 10-23-2008 |
20080266299 | METHOD FOR PREDICTIVELY SPLITTING PROCEDURALLY GENERATED PARTICLE DATA INTO SCREEN-SPACE BOXES - A method for use in rendering includes receiving an input particle system, an instancing program, and a number indicating a maximum number of particles to be stored in memory, providing an input particle count representative of at least a portion of the input particle system to at least one operator for the instancing program, running the at least one operator in a prediction mode to generate an output particle count, comparing the output particle count to the number indicating a maximum number of particles to be stored in memory, and spatially splitting a bounding box representative of the input particle count in response to the output particle count being greater than the number indicating a maximum number of particles to be stored in memory. | 10-30-2008 |
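A minimal sketch of the predictive splitting loop described above, assuming simple tuple bounding boxes and caller-supplied counting and prediction callables (all names are illustrative):

```python
def split_boxes(box, count_in_box, predict_output_count, max_particles):
    # box = (min_corner, max_corner), each a 3-tuple. Recursively halve the box
    # until the predicted instanced-particle count fits the memory budget.
    predicted = predict_output_count(count_in_box(box))
    if predicted <= max_particles:
        return [box]
    lo, hi = box
    axis = max(range(3), key=lambda i: hi[i] - lo[i])  # split the longest axis
    mid = (lo[axis] + hi[axis]) / 2.0
    left = (lo, tuple(mid if i == axis else hi[i] for i in range(3)))
    right = (tuple(mid if i == axis else lo[i] for i in range(3)), hi)
    return (split_boxes(left, count_in_box, predict_output_count, max_particles)
            + split_boxes(right, count_in_box, predict_output_count, max_particles))

# Example: 1000 input particles per unit volume; the instancer emits 10 per input.
volume = lambda b: (b[1][0] - b[0][0]) * (b[1][1] - b[0][1]) * (b[1][2] - b[0][2])
boxes = split_boxes(((0.0, 0.0, 0.0), (1.0, 1.0, 1.0)),
                    count_in_box=lambda b: int(1000 * volume(b)),
                    predict_output_count=lambda n: 10 * n,
                    max_particles=2500)
print(len(boxes))  # 4
```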
20080273037 | LOOPING MOTION SPACE REGISTRATION FOR REAL-TIME CHARACTER ANIMATION - A method for generating a looping motion space for real-time character animation may include determining a plurality of motion clips to include in the looping motion space and determining a number of motion cycles performed by a character object depicted in each of the plurality of motion clips. A plurality of looping motion clips may be synthesized from the motion clips, where each of the looping motion clips depicts the character object performing an equal number of motion cycles. Additionally, a starting frame of each of the plurality of looping motion clips may be synchronized so that the motion cycles in each of the plurality of looping motion clips are in phase with one another. By rendering an animation sequence using multiple passes through the looping motion space, an animation of the character object performing the motion cycles may be extended for arbitrary length of time. | 11-06-2008 |
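A minimal sketch of the phase-synchronization step from the entry above, assuming each looping clip is a frame array and the start frame of its first cycle is known (both assumptions):

```python
import numpy as np

def synchronize(clips, cycle_starts):
    # Rotate each seamless looping clip so its motion cycle begins at frame 0,
    # putting all clips in phase with one another.
    return [np.roll(clip, -start, axis=0) for clip, start in zip(clips, cycle_starts)]

walk = np.arange(8).reshape(8, 1)   # stand-ins for 8-frame looping clips
run  = np.arange(8).reshape(8, 1)
walk_sync, run_sync = synchronize([walk, run], cycle_starts=[0, 3])
print(run_sync.ravel())  # [3 4 5 6 7 0 1 2] -- cycle now starts at frame 0
```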
20080284783 | WAVE ZONES RENDERING TECHNIQUE - Rendering a deforming object in animation including: defining a deforming object surface angle; identifying a normal vector discontinuity point using the deforming object surface angle; defining front part and back part of the deforming object with reference to the normal vector discontinuity point; dividing the front part of the deforming object into zones based on the deforming object surface angle; dividing the back part of the deforming object into zones based on the deforming object surface angle; and rendering each zone. | 11-20-2008 |
20080297515 | METHOD AND APPARATUS FOR DETERMINING THE APPEARANCE OF A CHARACTER DISPLAYED BY AN ELECTRONIC DEVICE - A method and an electronic device are provided for selecting apparel for a character that is generated by an electronic device. The method and electronic device determine a changed context of the character, select an updated set of apparel for the character based on the changed context of the character, change the apparel of the character according to the updated set of apparel, and present the character having the updated set of apparel on a display. | 12-04-2008 |
20080297516 | Generating a Surface Representation of an Item - Among other disclosed subject matter, a computer-implemented method for generating a surface representation of an item includes identifying, for a point on an item in an animation process, at least first and second transformation points corresponding to respective first and second transformations of the point. Each of the first and second transformations represents an influence on a location of the point of respective first and second joints associated with the item. The method includes determining an axis for a cylindrical coordinate system using the first and second transformations. The method includes performing an interpolation of the first and second transformation points in the cylindrical coordinate system to obtain an interpolated point. The method includes recording the interpolated point in a surface representation of the item in the animation process. | 12-04-2008 |
20080297517 | Transitioning Between Two High Resolution Images in a Slideshow - A method of transitioning between two high resolution images in a slideshow includes replacing a first image with a lower resolution copy of that first image and fading out the lower resolution copy of the first image to reveal a second image. A system for transitioning between two high resolution images in a slideshow includes a video chip having a first video buffer for containing a first image, a second video buffer for containing a second image, and a graphic buffer for containing a lower resolution copy of the first image. The chip is configured to replace the first image with the lower resolution copy of the first image and fade out the lower resolution copy of the first image to reveal the second image. | 12-04-2008 |
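A minimal sketch of the transition described above, with numpy arrays standing in for the video and graphic buffers (an assumption; real hardware would composite in the video chip):

```python
import numpy as np

def low_res_copy(img, factor=4):
    # Downsample by striding, then repeat pixels back up to the original size.
    small = img[::factor, ::factor]
    return small.repeat(factor, axis=0).repeat(factor, axis=1)

def transition_frames(first, second, steps=8):
    # Replace the first image with its low-res copy, then fade the copy out.
    proxy = low_res_copy(first).astype(float)
    for i in range(steps + 1):
        alpha = 1.0 - i / steps
        yield (alpha * proxy + (1.0 - alpha) * second).astype(np.uint8)

first  = np.full((64, 64, 3), 200, dtype=np.uint8)
second = np.zeros((64, 64, 3), dtype=np.uint8)
frames = list(transition_frames(first, second))
print(len(frames), frames[0][0, 0], frames[-1][0, 0])  # 9 [200 200 200] [0 0 0]
```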
20080303826 | Methods and Systems for Animating Displayed Representations of Data Items - Methods and systems for animating visual components representing data items. One embodiment comprises a method for producing an application using declarative language code to specify animation behavior for data item representations. A programming application may be used to create the declarative language code using a display design area for placing and adjusting objects such as data item containers and/or an editor for entering and editing code. One embodiment comprises a method that allows an application, such as a rich Internet application, to create representations of displayed objects and virtually displayed objects to facilitate animation. One embodiment involves facilitating animation using initial and changed layouts, such layouts including representations of a limited number of data items both inside and outside the content display area. In certain embodiments, a computer-readable medium (such as, for example, random access memory or a computer disk) comprises code for carrying out these and other methods. | 12-11-2008 |
20080303827 | Methods and Systems for Animating Displayed Representations of Data Items - Methods and systems for animating visual components representing data items. One embodiment comprises a method for producing an application using declarative language code to specify animation behavior for data item representations. A programming application may be used to create the declarative language code using a display design area for placing and adjusting objects such as data item containers and/or an editor for entering and editing code. One embodiment comprises a method that allows an application, such as a rich Internet application, to create representations of displayed objects and virtually displayed objects to facilitate animation. One embodiment involves facilitating animation using initial and changed layouts, such layouts including representations of a limited number of data items both inside and outside the content display area. In certain embodiments, a computer-readable medium (such as, for example, random access memory or a computer disk) comprises code for carrying out these and other methods. | 12-11-2008 |
20080303828 | Web-based animation - Approaches providing web-based animations using tools and techniques that take into account the limited capabilities and resources available in the web environment are disclosed. In some embodiments, such web-based animations are implemented in JavaScript. | 12-11-2008 |
20080303829 | Sex selection in inheritance based avatar generation - The generation of characters within computer animations is currently a labor intensive and expensive activity for a wide range of businesses. Whereas prior art approaches have sought to reduce this loading by providing reference avatars, these do not fundamentally overcome the intensive steps in generating these reference avatars, and they provide limited variations. According to the invention a user is provided with a simple and intuitive mechanism to affect the weightings applied in establishing the physical characteristics of an avatar generated using an inheritance based avatar generator. The inheritance based generator allows, for example, the user to select a first generation of four grandparents, affect the weightings in generating the second generation parents, and affect the weightings in generating the third generation off-spring avatar from these parents. Accordingly, the invention provides animators with a means of rapidly generating and refining the off-spring avatar to provide the character for their animated audio-visual content. | 12-11-2008 |
20080303830 | Automatic feature mapping in inheritance based avatar generation - The generation of characters within computer animations is currently a labor intensive and expensive activity for a wide range of businesses. Whereas prior art approaches have sought to reduce this loading by providing reference avatars, these do not fundamentally overcome the intensive steps in generating these reference avatars, and they provide limited variations. According to the invention a user is provided with a simple and intuitive mechanism to affect the weightings applied in establishing the physical characteristics of an avatar generated using an inheritance based avatar generator. The inheritance based generator allows, for example, the user to select a first generation of four grandparents, affect the weightings in generating the second generation parents, and affect the weightings in generating the third generation off-spring avatar from these parents. Accordingly, the invention provides animators with a means of rapidly generating and refining the off-spring avatar to provide the character for their animated audio-visual content. | 12-11-2008 |
20080303831 | Transfer of motion between animated characters - Motion may be transferred between portions of two characters if those portions have a minimum topological similarity. The elements of the topology that are similar are referred to as basic elements. To transfer motion between the source and target characters, the motion associated with the basic elements of the source character is determined. This motion is retargetted to the basic elements of the target character. The retargetted motion is then attached to the basic elements of the target character. As a result, the animation of the basic elements in the topology of the target character effectively animates the target character with motion that is similar to that of the source character. | 12-11-2008 |
20080309670 | Recasting A Legacy Web Page As A Motion Picture With Audio - Computer-implemented methods, systems, and computer program products are provided for recasting a legacy web page as a motion picture with audio. Embodiments include retrieving a legacy web page; identifying audio objects in the legacy web page for audio rendering; identifying video objects in the legacy web page for motion picture rendering; associating one or more of the video objects for motion picture rendering with one or more of the audio objects for audio rendering; determining in dependence upon the selected audio objects and video objects a duration for the motion picture; selecting audio events for rendering the audio objects identified for audio rendering; selecting motion picture video events for rendering the video objects identified for motion picture rendering; assigning the selected audio events and the selected video events to playback times for the motion picture; rendering, with the selected audio events at their assigned playback times, the audio content of each of the audio objects identified for audio rendering; rendering, with the selected motion picture video events at their assigned playback times, the video content of the video objects identified for motion picture rendering; and recording in a multimedia file the rendered audio content and motion picture video content. | 12-18-2008 |
20090002376 | Gradient Domain Editing of Animated Meshes - Gradient domain editing of animated meshes is described. Exemplary systems edit deforming mesh sequences by applying Laplacian mesh editing techniques in the spacetime domain. A user selects relevant frames or handles to edit and the edits are propagated to the entire sequence. For example, if the mesh depicts an animated figure, then user-modifications to position of limbs, head, torso, etc., in one frame are propagated to the entire sequence. In advanced editing modes, a user can reposition footprints over new terrain and the system automatically conforms the walking figure to the new footprints. A user-sketched curve can automatically provide a new motion path. Movements of one animated figure can be transferred to a different figure. Caricature and cartoon special effects are available. The user can also select spacetime morphing to smoothly change the shape and motion of one animated figure into another over a short interval. | 01-01-2009 |
20090002377 | APPARATUS AND METHOD FOR SYNCHRONIZING AND SHARING VIRTUAL CHARACTER - An apparatus and method for synchronizing and sharing a virtual character are provided. The method includes generating a virtual character; synchronizing content in a predetermined form with the generated virtual character; converting the virtual character into an extensible markup language (XML)-based file; and storing the XML-based file. | 01-01-2009 |
20090009520 | Animation Method Using an Animation Graph - A method of animating a scene graph (M), which comprises steps for: creating (e | 01-08-2009 |
20090027400 | Animation of Audio Ink - In a pen-based computing system, a microphone on the smart pen device records audio to produce audio data and a gesture capture system on the smart pen device records writing gestures to produce writing gesture data. Both the audio data and the writing gesture data include a time component. The audio data and writing gesture data are combined or synchronized according to their time components to create audio ink data. The audio ink data can be uploaded to a computer system attached to the smart pen device and displayed to a user through a user interface. The user makes a selection in the user interface to play the audio ink data, and the audio ink data is played back by animating the captured writing gestures and playing the recorded audio in synchronization. | 01-29-2009 |
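A minimal sketch of combining the two time-stamped streams from the entry above into one "audio ink" timeline; the record shapes are assumptions:

```python
import heapq

def merge_streams(gestures, audio):
    # gestures: [(t, (x, y)), ...]; audio: [(t, chunk), ...]; each sorted by t.
    # Yields (t, kind, payload) in global time order for synchronized playback.
    tagged_gestures = ((t, "gesture", p) for t, p in gestures)
    tagged_audio = ((t, "audio", c) for t, c in audio)
    yield from heapq.merge(tagged_gestures, tagged_audio, key=lambda e: e[0])

ink = list(merge_streams([(0.0, (1, 1)), (0.5, (2, 1))],
                         [(0.1, b"\x00\x01"), (0.4, b"\x02\x03")]))
print([(t, kind) for t, kind, _ in ink])
# [(0.0, 'gesture'), (0.1, 'audio'), (0.4, 'audio'), (0.5, 'gesture')]
```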
20090033666 | SCENARIO GENERATION DEVICE, SCENARIO GENERATION METHOD, AND SCENARIO GENERATION PROGRAM - There is provided a scenario generation device capable of automatically generating a scenario for generating an animation of rich expression desired by a user even from a text created by the user who has no special knowledge about creation of animation. In the device, a scenario generation unit ( | 02-05-2009 |
20090051690 | Motion line switching in a virtual environment - A computing system enhances the human-like realism of computer opponents in racing-type games and other motion-related games. The computing system observes multiple prescribed motion lines and computes switching probabilities attributed to switching of simulated motion of a racer from one prescribed motion line to another. A sampling module samples at random over the switching probabilities to select one of the switching probabilities. At least one control signal is generated to switch simulated motion of the entity in a virtual reality environment from the first prescribed motion line to one of the other prescribed motion lines, in accordance with the selected one of the switching probabilities. | 02-26-2009 |
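A minimal sketch of the random sampling over switching probabilities described above; the probability values and line names are illustrative assumptions:

```python
import random

def choose_switch(switch_probs):
    # switch_probs: {target_motion_line: probability}; probabilities sum to <= 1,
    # with the remainder meaning "stay on the current prescribed motion line".
    r = random.random()
    cumulative = 0.0
    for line, p in switch_probs.items():
        cumulative += p
        if r < cumulative:
            return line    # control signal: switch simulated motion to this line
    return None            # no switch this tick

random.seed(1)
print(choose_switch({"inside_line": 0.2, "outside_line": 0.1}))  # inside_line
```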
20090051691 | IMAGE DISPLAY APPARATUS - An object of the present invention is to be able to output moving image data that enables a desired image group partially included in all intra-subject images to be played as a moving image. An image display apparatus according to the present invention includes an image display function of displaying a series of images obtained by picking up an interior of a digestive canal of a subject in time series, and includes an input unit | 02-26-2009 |
20090051692 | Electronic presentation system - The invention provides a digital animation presentation system for dramatically presenting various works. The digital and animated presentation in accordance with the invention is not limited by the conventions of paper books or electronic books that mimic paper-based books, and provides for the dramatic presentation of animation and animated text that includes text moving forward or backwards across the reader's display as well as appearing to move toward or away from the reader. The invention is applicable to a variety of works, including various fiction and non-fiction stories, educational materials, as well as tutorials and instruction manuals. In accordance with the invention, a reader can control his or her viewing of the digital animation and text so that he or she can view a story in its natural forward progression, pause and/or stop and re-read a section, return to an earlier section and/or skip ahead to a later section. The invention also provides for dramatic presentation and animation of the text as well as animation and sound effects that correlate to the text. | 02-26-2009 |
20090058862 | AUTOMATIC AVATAR TRANSFORMATION FOR A VIRTUAL UNIVERSE - An approach that automatically transforms an avatar characteristic of an avatar that is online in a virtual universe is described. In one embodiment, there is an avatar locator component configured to locate an avatar that is online in the virtual universe. An avatar characteristics transforming component is configured to automatically transform the avatar characteristic associated with the located avatar according to predetermined transformation criteria. | 03-05-2009 |
20090058863 | Image animation with transitional images - A technique is provided for animating an image or a portion of an image. In accordance with this technique, intermediary or transitional images, referred to as offset images, are displayed as part of an animation step to lessen abrupt changes in pixel values. In one embodiment, the offset images are generated using a weighted average of proximate pixels. In such an embodiment, the weight factor may take into account the distance of the offset from the proximate pixels such that closer pixels are more heavily weighted. Based on the direction of movement for the animation, the offset images are ordered and displayed as part of the animation steps of an animation sequence. | 03-05-2009 |
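A minimal sketch of one offset image from the entry above, for a purely horizontal sub-pixel shift: each output pixel is a distance-weighted average of the two source pixels it falls between (the 1D case; a 2D offset weights four neighbors the same way):

```python
import numpy as np

def offset_image(img, dx):
    # dx in (0, 1): fractional shift. Closer source pixels get larger weights.
    left = img.astype(float)
    right = np.roll(left, -1, axis=1)   # right-hand neighbor; wraps at the border
    return ((1.0 - dx) * left + dx * right).astype(img.dtype)

row = np.array([[0, 100, 200, 100]], dtype=np.uint8)
print(offset_image(row, 0.25))  # [[ 25 125 175  75]]
```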
20090066700 | FACIAL ANIMATION USING MOTION CAPTURE DATA - Methods and apparatus for facial animation using motion capture data are described herein. A mathematic solution based on minimizing a metric reduces the number of motion capture markers needed to accurately translate motion capture data to facial animation. A set of motion capture markers and their placement on an actor are defined and a set of virtual shapes having virtual markers are defined. The movement of the virtual markers is modeled based on an anatomical model. An initial facial capture is correlated to a corresponding virtual reference shape. For each subsequent facial capture, a delta vector is computed and a matrix solution determined based on the delta vector, initial positions, and set of virtual shapes. The solution can minimize a metric such as mean squared distance. The solution can be manually modified or edited using a user interface or console. | 03-12-2009 |
20090066701 | IMAGE BROWSING METHOD AND IMAGE BROWSING APPARATUS THEREOF - An image browsing method includes: detecting a movement corresponding to a user input to generate a detecting variation; checking if the detecting variation is greater than a predetermined threshold value; when the detecting variation is greater than the predetermined threshold value, displaying an animation indicative of completely turning a page for showing a target image instead of a current image in order to allow browsing of the target image; and when the detecting variation is not greater than the predetermined threshold value, displaying the current image for browsing of the current image. | 03-12-2009 |
20090066702 | Development Tool for Animated Graphics Application - A presentation engine collects information concerning the rendering of the frames of an animated graphics application, such as the time taken to render the frame and the amount of memory used. This information quantifies the amount of certain computing resources being utilized on a per-frame basis, enabling the authors of the animated graphics application, particularly the designers of the animated graphics, to identify frames that are problematic, especially on resource-limited devices. The generation of information does not depend on the animated graphics application being instrumented to generate the metrics. The method is adaptable to any resource-limited device, to which the presentation engine is ported or adapted to run. When executing on a resource-limited device, the information is sent to a workstation for analysis. An analysis tool, which may be a stand-alone program or part of an authoring tool or other program, displays the collected metrics graphically in relation to the frame. | 03-12-2009 |
20090066703 | CONSTRAINT SCHEMES FOR COMPUTER SIMULATION OF CLOTH AND OTHER MATERIALS - Constraint schemes for use in the computer simulation and animation of cloth, clothing and other materials helps to prevent clothing from excessive stretching, bunching up in unwanted areas, or “passing through” rigid objects during collisions. Several types of constraint systems are employed, including the use of skinned vertices as constraints and axial constraints. In these schemes cloth simulated vertices are generated for the material using a cloth simulation technique, and skinned vertices are generated for the material using a skin simulation technique. One or more of the cloth simulated vertices are compared to the corresponding skinned vertices. The cloth simulated vertices are modified if they deviate from the corresponding skinned vertices by more than a certain amount. Vertical constraints are also employed, which involve generating a first set of vertices for the material using a cloth simulation technique, comparing a vertical component of each of the first set of vertices to a lower limit for each of the first set of vertices, and for each vertical component that falls below the lower limit, modifying the vertical component to be equal to the lower limit. | 03-12-2009 |
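A minimal sketch of the two constraint schemes named above: cloth vertices are pulled back when they deviate too far from their skinned counterparts, and vertical components are clamped to a per-vertex lower limit. Array layouts and limit values are assumptions:

```python
import numpy as np

def apply_constraints(cloth, skinned, max_dev, floor_y):
    cloth = cloth.astype(float).copy()
    delta = cloth - skinned
    dist = np.linalg.norm(delta, axis=1)
    over = dist > max_dev
    # Skinned-vertex constraint: pull offenders back within max_dev of the skin.
    cloth[over] = skinned[over] + delta[over] * (max_dev / dist[over])[:, None]
    # Vertical constraint: the y component may not fall below its lower limit.
    cloth[:, 1] = np.maximum(cloth[:, 1], floor_y)
    return cloth

cloth   = np.array([[0.0, -2.0, 0.0], [3.0, 1.0, 0.0]])
skinned = np.array([[0.0,  0.0, 0.0], [0.0, 1.0, 0.0]])
print(apply_constraints(cloth, skinned, max_dev=1.0, floor_y=np.array([-0.5, 0.0])))
# [[ 0.  -0.5  0. ]
#  [ 1.   1.   0. ]]
```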
20090079743 | DISPLAYING ANIMATION OF GRAPHIC OBJECT IN ENVIRONMENTS LACKING 3D RENDERING CAPABILITY - Three dimensional (3D) animations of an avatar graphic object are displayed in an environment that lacks high quality real-time 3D animation rendering capability. Before the animation is displayed in the environment at runtime, corresponding 3D and 2D reference models are created for the avatar. The 2D reference model is provided in a plurality of different views or reference angles. A 3D animation rendering program is used to produce 3D motion data for each animation. The 3D motion data define a position and rotation of parts of the 3D reference model. Image files are prepared for art assets drawn on associated parts of the 2D reference model in all views. At runtime in the environment, the position, rotation, and layer of each avatar part in 3D space is mapped to 2D space for each successive frame of an animation, with selected art assets applied to the associated parts of the avatar. | 03-26-2009 |
20090079744 | ANIMATING OBJECTS USING A DECLARATIVE ANIMATION SCHEME - Technologies are described herein for animating objects through the use of animation schemes. An animation scheme is defined using a declarative language that includes instructions defining the animations and/or visual effects to be applied to one or more objects and how the animations or visual effects should be applied. The animation scheme may include rules which, when evaluated, define how the objects are to be animated. An animation scheme engine is also provided for evaluating an animation scheme along with other factors to apply the appropriate animation to each of the objects. The animation scheme engine retrieves an animation scheme and data regarding the objects. The animation scheme engine then evaluates the animation scheme along with the data regarding the objects to identify the animation to be applied to each object. The identified animations and visual effects are then applied to the objects. | 03-26-2009 |
20090096796 | Animating Speech Of An Avatar Representing A Participant In A Mobile Communication - Animating speech of an avatar representing a participant in a mobile communication including selecting one or more images; selecting a generic animation template; fitting the one or more images with the generic animation template; texture wrapping the one or more images over the generic animation template; and displaying the one or more images texture wrapped over the generic animation template. Receiving an audio speech signal; identifying a series of phonemes; and for each phoneme: identifying a new mouth position for the mouth of the generic animation template; altering the mouth position to the new mouth position; texture wrapping a portion of the one or more images corresponding to the altered mouth position; displaying the texture wrapped portion of the one or more images corresponding to the altered mouth position of the mouth of the generic animation template; and playing the portion of the audio speech signal represented by the phoneme. | 04-16-2009 |
20090128567 | MULTI-INSTANCE, MULTI-USER ANIMATION WITH COORDINATED CHAT - Two or more participants provide inputs from a remote location to a central server, which aggregates the inputs to animate participating avatars in a space visible to the remote participants. In parallel, the server collects and distributes text chat data from and to each participant, such as in a chat window, to provide chat capability in parallel to a multi-participant animation. Avatars in the animation may be provided with animation sequences, based on defined character strings or other data detected in the text chat data. Text data provided by each user is used to select animation sequences for an avatar operated by the same user. | 05-21-2009 |
20090135187 | SYSTEM AND METHOD FOR DYNAMICALLY GENERATING RESPONSE MOTIONS OF VIRTUAL CHARACTERS IN REAL TIME AND COMPUTER-READABLE RECORDING MEDIUM THEREOF - A system and a method for dynamically generating response motions of a virtual character in real time and a computer-readable recording medium thereof are provided. The system includes a balance state module, a response graph module, and a tracking control module. The balance state module calculates a balance state of the virtual character according to the balance-related information of a character model of the virtual character. The response graph module is coupled to the balance state module for providing a response motion according to the balance state. The tracking control module is coupled to the response graph module for providing driving information according to the response motion and body information of the character model. The driving information is used for driving the character model to converge toward the response motion. | 05-28-2009 |
20090135188 | METHOD AND SYSTEM OF LIVE DETECTION BASED ON PHYSIOLOGICAL MOTION ON HUMAN FACE - A method and a system of live detection based on a physiological motion on a human face are provided. The method has the following steps: in step a, a motion area and at least one motion direction in the visual angle of a system camera are detected and a detected facial region is found. In step b, whether a valid facial motion exists in the detected facial region is determined. If no valid facial motion exists, the object is considered a photo of a human face; otherwise, the method proceeds to step c to determine whether the facial motion is a physiological motion. If it is not, the object is considered a photo of a human face; otherwise, it is considered a real human face. The real human face and the photo of a human face can be distinguished by the present invention so as to increase the reliability of the face recognition system. | 05-28-2009 |
20090135189 | Character animation system and method - A character animation system includes a data generating unit for generating a character skin mesh and an internal reference mesh, a character bone value, and a character solid-body value, a skin distortion representing unit for representing skin distortion using the generated character skin mesh and the internal reference mesh when an external shock is applied to a character, and a solid-body simulation engine for applying the generated character bone value and the character solid-body value to a real-time physical simulation library and representing character solid-body simulation. The system further includes a skin distortion and solid-body simulation processing unit for processing to return to a key frame to be newly applied after the skin distortion and the solid-body simulation are represented. | 05-28-2009 |
20090141030 | SYSTEM AND METHOD FOR MULTILEVEL SIMULATION OF ANIMATION CLOTH AND COMPUTER-READABLE RECORDING MEDIUM THEREOF - A system for multilevel simulation of an animation cloth is provided. The system includes a multilevel area generation module, a curvature calculation module, a curvature comparison module, and a dynamic simulation module. The multilevel area generation module divides a plurality of grid units of the animation cloth into a plurality of level sub-areas based on a multilevel technique, wherein each of the level sub-areas is generated by dividing an upper level sub-area. The curvature calculation module calculates the curvatures of the level sub-areas according to the plane vectors of the grid units in a frame. The curvature comparison module compares the curvatures of the level sub-areas with a flatness threshold. The dynamic simulation module calculates the plane vector of each grid unit in a next frame through different method according to the comparison result of the curvature comparison module. | 06-04-2009 |
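A minimal sketch of the flatness test from the entry above: a sub-area's curvature is estimated from the spread of its grid-unit plane vectors and compared to a flatness threshold to decide whether finer-level simulation is needed. The curvature measure and threshold are assumptions:

```python
import numpy as np

def curvature(normals):
    # normals: (n, 3) unit plane vectors of the grid units in a sub-area.
    # 0 when all normals agree (flat); grows as they diverge (curved).
    mean = normals.mean(axis=0)
    return 1.0 - np.linalg.norm(mean)

def needs_refinement(normals, flatness_threshold=0.01):
    return curvature(normals) > flatness_threshold

flat   = np.tile([0.0, 1.0, 0.0], (4, 1))
curved = np.array([[0, 1, 0], [0.3, 0.95, 0], [-0.3, 0.95, 0], [0, 0.9, 0.4]], float)
curved /= np.linalg.norm(curved, axis=1, keepdims=True)
print(needs_refinement(flat), needs_refinement(curved))  # False True
```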
20090141031 | Method for Providing an Animation From a Prerecorded Series of Still Pictures - The invention relates to a method for providing an animation from prerecorded still pictures where the relative positions of the pictures are known. The method is based on prerecorded still pictures and location data, associated with each still picture, that indicates the projection of the subsequent still picture into the current still picture. The method comprises the repeated steps of providing a current still picture, providing the location data associated with the still picture, generating an animation based on the current still picture and the location data, and presenting the animation on a display. The invention provides the experience of driving a virtual car through the photographed roads, either by an auto pilot or manually. The user may change speed, drive, pan, shift lane, turn in crossings or take u-turns anywhere. Also, the invention provides a means to experience real-time, interactive video-like animation from widely separated still pictures, as an alternative to video-streaming over a communication line. This service is called Virtual Car Travels. | 06-04-2009 |
20090147008 | Arrangements for controlling activities of an avatar - Systems are disclosed herein that allow a participant to be associated with an avatar and that receive a transmission from the participant in response to a participant-activated transmission. The transmission can include a participant-selectable and time-delayed mood and/or activity command which can be associated with a user-configurable command-to-avatar-activity conversion table. The associated avatar activity table can provide a control signal to the VU system controlling the participant's avatar for extended time periods, where the activity commands allow the avatar to exhibit a mood and to conduct an activity. The preconfigured time controlled activity commands allow the user to control their avatar without being actively engaged in a session with a virtual universe client or logged on, and the control configuration can be set up such that a single mood/activity control signal can initiate moods and activities that occur over an extended period of time. | 06-11-2009 |
20090147009 | VIDEO CREATING DEVICE AND VIDEO CREATING METHOD - A video creating device for creating a video full of originality from a text. A video viewer ( | 06-11-2009 |
20090147010 | GENERATION OF VIDEO - An apparatus and a method are provided for generating video data derived from the execution of a computer program. In a first mode, the apparatus is operable to (a) execute a computer program comprising one or more components executed in a sequence of execution frames, each execution frame having a given state; and (b) record video data comprising a sequence of video data frames corresponding to the sequence of execution frames. In a second mode, the apparatus is operable to (c) process video data which have been recorded during the previous execution of the program, to allow a visualization of the execution of that program; and (d) allow a user, at any frame of the sequence of video data frames, to change the mode to the first mode and to obtain from the video data the state of the corresponding execution frame of the program. | 06-11-2009 |
20090153565 | METHODS AND APPARATUS FOR DESIGNING ANIMATRONICS UNITS FROM ARTICULATED COMPUTER GENERATED CHARACTERS - A method for specifying a design for an animatronics unit includes receiving motion data comprising artistically determined motions, determining a design for construction of at least a portion of the animatronics unit in response to the motion data, and outputting the design for construction of the animatronics unit. | 06-18-2009 |
20090153566 | METHODS AND APPARATUS FOR ESTIMATING AND CONTROLLING BEHAVIOR OF ANIMATRONICS UNITS - A method for determining behavior of an animatronics unit includes receiving animation data comprising artistically determined motions for at least a portion of an animated character, determining a plurality of control signals to be applied to at least the portion of the animatronics unit in response to the animation data, estimating the behavior of at least the portion of the animatronics unit in response to the plurality of control signals by driving a software simulation of at least the portion of the animatronics unit with the plurality of control signals, and outputting a representation of the behavior of at least the portion of the animatronics unit to a user. | 06-18-2009 |
20090153567 | Systems and methods for generating personalized computer animation using game play data - Systems, methods, and computer storage media for generating a computer animation of a game. A custom animation platform receives game play data of the game and determines at least one scene based on the game play data. Then, one or more frames in the scene are set up, where at least one of the frames includes at least one non-game pre-production element of the game. Subsequently, the frames are rendered and the rendered frames are combined to generate a computer animation. | 06-18-2009 |
20090167766 | ADVERTISING REVENUE SHARING - Technologies are described herein for sharing advertisement revenue. An advertiser-generated avatar is provided to a first participant. The advertiser-generated avatar may be associated with an advertisement. Further, the first participant may be associated with a current avatar. The current avatar is replaced with the advertiser-generated avatar. While the first participant is associated with the advertiser-generated avatar, a level of interaction between the first participant and other participants is monitored. An amount of compensation to provide the first participant is determined based on the level of interaction between the first participant and the other participants. The compensation is provided to the first participant. | 07-02-2009 |
20090167767 | GROWING AND CARING FOR A VIRTUAL CHARACTER ON A MOBILE DEVICE - A computer implemented method, data processing system, and computer program product for enabling a user to create and care for a virtual character on a user communication device. The data processing system includes a server connected via a communication network to a user communication device. In response to a user action, a software application with a virtual character in it is downloaded into the user communication device. The virtual character interacts with the user through the software application in accordance with predefined characteristics. The user is enabled to perform virtual actions relating to the virtual character, responsive to the virtual character behavior, by sending data (e.g., by SMS or MMS) using the user communication device over the communication network to the server. The virtual character is responsive to data sent to it from the server in response to the user's virtual actions, in accordance with the virtual character's predefined characteristics. | 07-02-2009 |
20090167768 | Selective frame rate display of a 3D object - Systems and methods are discussed for performing 3D animation of an object using limited hardware resources. When an object is rotated, the size of the object displayed progressively increases, thus taking up more memory, CPU, and other hardware resources. To limit the impact on resources as an object becomes larger, the electronic device may elect to display more small frames of the object at a higher frame rate, and fewer large frames at a lower frame rate, thus providing a uniform 3D animation. | 07-02-2009 |
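A minimal sketch of the size-versus-rate trade described above: hold the rendered-pixels-per-second budget roughly constant, so small depictions of the object get a high frame rate and large ones a low frame rate. The budget and clamping bounds are assumptions:

```python
def frame_rate_for(frame_width, frame_height, pixel_budget_per_second=2_000_000):
    # More small frames per second, fewer large frames, same total pixel cost.
    pixels = frame_width * frame_height
    return max(5, min(60, pixel_budget_per_second // pixels))

for size in (128, 256, 512, 1024):
    print(size, frame_rate_for(size, size), "fps")  # 60, 30, 7, 5
```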
20090167769 | METHOD, DEVICE AND SYSTEM FOR MANAGING STRUCTURE DATA IN A GRAPHIC SCENE - A method is provided for restoring graphic animation content, including the following steps, in a receiver terminal: transmitting a request for retrieving the content; and obtaining at least one graphic scene of the content describing at least the spatio-temporal arrangement between the graphic objects of the content. The content further includes at least one function for managing structured data allowing interaction with a database of structured data. The method further includes: querying the database, based on at least one command present in the graphic scene and associated with the function(s) for managing structured data; obtaining structured data derived from the database; integrating the structured data in the graphic scene; and restoring the graphic scene. | 07-02-2009 |
20090174716 | Synchronized Visual and Audio Apparatus and Method - A method and apparatus for synchronizing sound with an illuminated animated image is provided. First and second image frames are defined on a planar surface using a plurality of light transmitting media. A plurality of light sources are positioned adjacent to the plurality of light transmitting media such that the first image frame and the second image frame are illuminated independently by selectively activating each light source in accordance with a pre-programmed illumination sequence. A speaker plays a first sound when the first image frame is illuminated and a second sound when the second image frame is illuminated. A driving device, coupled to the light sources and the speaker, is used to synchronize the illumination of the image frames with the sounds. | 07-09-2009 |
20090174717 | METHOD AND APPARATUS FOR GENERATING A STORYBOARD THEME FOR BACKGROUND IMAGE AND VIDEO PRESENTATION - A method and apparatus for using a non-active (background) state of a display-enabled device to animate images and video elements within a themed storyboard from selected sources. States described in the method include: tuning to select viewable content, population of the storyboard matrix, animation of storyboard elements using animation effects, such as lens flare and live video texturing, and evaporation of the elements. By way of example, the storyboard is populated with freeze frames of video content, one or a portion of which are then played to animate that element. The storyboard preferably cycles with additional content as the prior content evaporates, until the background mode is terminated. The method is particularly well-suited for use in television sets, although it can be integrated into any display-enabled apparatus having a computer and access to content sources that can be used for storyboard elements. | 07-09-2009 |
20090179898 | CREATION OF MOTION BLUR IN IMAGE PROCESSING - Motion blur is created in images by utilizing a motion vector. Vertices are developed with each vertex including a motion vector. The motion vector is indicative of how far vertices have moved since a previous frame in a sequence of images. The vertices are converted to an image and motion blur is added to the image as a function of the motion vector for each vertex. | 07-16-2009 |
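A minimal sketch of per-vertex motion blur as described above: accumulate the vertex at several fractional positions back along its motion vector and average. Point splatting stands in for full rasterization (an assumption):

```python
import numpy as np

def motion_blur(vertices, motion_vectors, size=32, samples=9):
    img = np.zeros((size, size), dtype=float)
    for t in np.linspace(0.0, 1.0, samples):
        pos = vertices - t * motion_vectors   # where each vertex was earlier in the frame
        for x, y in pos:
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < size and 0 <= yi < size:
                img[yi, xi] += 1.0 / samples  # splat a fraction of the vertex's energy
    return img

verts = np.array([[20.0, 16.0]])
mvec  = np.array([[8.0, 0.0]])           # moved 8 pixels since the previous frame
print(np.nonzero(motion_blur(verts, mvec)[16])[0])  # streak across columns 12..20
```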
20090179899 | METHOD AND APPARATUS FOR EFFICIENT OFFSET CURVE DEFORMATION FROM SKELETAL ANIMATION - A method for use in animation includes establishing an influence primitive, associating the influence primitive with a model having a plurality of model points, and for each of the plurality of model points on the model, determining an offset primitive that passes through the model point. Another method includes deforming the model, and determining a deformed position of each of the plurality of model points by using a separate offset primitive for each model point. A computer readable storage medium stores a computer program adapted to cause a processor based system to execute one or more of the above steps. | 07-16-2009 |
20090179900 | Methods and Apparatus for Export of Animation Data to Non-Native Articulation Schemes - A method for exporting animation data from a native animation environment to a non-native animation environment includes determining first object poses in response to a first object model in the native environment and animation variables, determining a second object model including a geometric object model, determining second object poses in response to the second object model and animation variables, determining surface errors between the first object poses and the second object poses, determining corrective object offsets in response to the surface errors, determining actuation values associated with the corrective object offsets in response to the surface errors, determining a third object model compatible with the non-native animation environment in response to the second object poses, the corrective object offsets, and the actuation values, and storing the third object model in a memory. | 07-16-2009 |
20090184967 | SCRIPT CONTROL FOR LIP ANIMATION IN A SCENE GENERATED BY A COMPUTER RENDERING ENGINE - A system for controlling a rendering engine by using specialized commands. The commands are used to generate a production, such as a television show, at an end-user's computer that executes the rendering engine. In one embodiment, the commands are sent over a network, such as the Internet, to achieve broadcasts of video programs at very high compression and efficiency. Commands for setting and moving camera viewpoints, animating characters, and defining or controlling scenes and sounds are described. At a fine level of control, math models and coordinate systems can be used to make specifications. At a coarse level of control, the command language approaches the text format traditionally used in television or movie scripts. Simple names for objects within a scene are used to identify items, directions and paths. Commands are further simplified by having the rendering engine use defaults when specifications are left out. For example, when a camera direction is not specified, the system assumes that the viewpoint is to be the current action area. The system provides a hierarchy of detail levels. Movement commands can be defaulted or specified. Synchronized speech can be specified as digital audio or as text which is used to synthesize the speech. | 07-23-2009 |
20090184968 | Incentive Method For The Spirometry Test With Universal Control System Regardless Of Any Chosen Stimulating Image - An incentive method for the spirometry test wherein two separate images are used instead of only one: the first image is controlled by the incentive control mechanism related to the respiration of the patient, and the second image, initially covered by the first image, represents the incentive to the spirometry and is completely independent of both the first image and the respiration. The first image is universal in the sense that it can be used from time to time with a virtually infinite number of second incentive images. The first image is modified as the spirometry test proceeds to gradually unveil the second image, which has meaning only if completely unveiled; the incentive effect is then the motivation to blow so that the curtain completely unveils an image that can be interpreted only when fully revealed. | 07-23-2009 |
20090189906 | SCRIPT CONTROL FOR GAIT ANIMATION IN A SCENE GENERATED BY A COMPUTER RENDERING ENGINE - A system for controlling a rendering engine by using specialized commands. The commands are used to generate a production, such as a television show, at an end-user's computer that executes the rendering engine. In one embodiment, the commands are sent over a network, such as the Internet, to achieve broadcasts of video programs at very high compression and efficiency. Commands for setting and moving camera viewpoints, animating characters, and defining or controlling scenes and sounds are described. At a fine level of control, math models and coordinate systems can be used to make specifications. At a coarse level of control, the command language approaches the text format traditionally used in television or movie scripts. Simple names for objects within a scene are used to identify items, directions and paths. Commands are further simplified by having the rendering engine use defaults when specifications are left out. For example, when a camera direction is not specified, the system assumes that the viewpoint is to be the current action area. The system provides a hierarchy of detail levels. Movement commands can be defaulted or specified. Synchronized speech can be specified as digital audio or as text which is used to synthesize the speech. | 07-30-2009 |
20090195543 | Verification of animation in a computing device - Methods and systems of verifying an animation applied in a mobile device may include a timer module that is programmed to time-slice the animation into multiple scenes at predetermined time points along a timeline of the animation. A first capture module is programmed to capture actual data of each scene at each of the time points while the animation is running. A first comparison module is programmed to compare the actual data of each scene with expected data of the corresponding scene to determine whether the actual data of each scene matches the expected data of the corresponding scene. A first output module is programmed to generate a verification failure if the actual data of any scene does not match the expected data of the corresponding scene, and generate a verification success if the actual data of each scene matches the expected data of the corresponding scene. | 08-06-2009 |
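A minimal sketch of the verification flow described above, with a callable standing in for the running animation and a dictionary for the expected per-scene data (both assumptions):

```python
def verify_animation(animation, expected, time_points, tolerance=1e-6):
    # Time-slice the animation at predetermined points, capture actual scene
    # data, and compare each capture against the expected data for that scene.
    failures = []
    for t in time_points:
        actual = animation(t)
        if abs(actual - expected[t]) > tolerance:
            failures.append((t, expected[t], actual))
    return ("verification success" if not failures else "verification failure", failures)

# Example: a property animated linearly from 0 to 100 over one second.
anim = lambda t: 100.0 * t
status, report = verify_animation(anim, {0.0: 0.0, 0.5: 50.0, 1.0: 100.0},
                                  time_points=[0.0, 0.5, 1.0])
print(status)  # verification success
```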
20090195544 | SYSTEM AND METHOD FOR BLENDED ANIMATION ENABLING AN ANIMATED CHARACTER TO AIM AT ANY ARBITRARY POINT IN A VIRTUAL SPACE - A method for blended animation by providing a set of animation sequences associated with an animated character model is disclosed. In one embodiment, a geometric representation of a blend space is generated from the set of animation sequences using locator nodes associated with each animation sequence. A subset of animation sequences is selected from the set of animation sequences by casting a ray from a reference bone to a target through the geometric representation and selecting animation sequences that are geometrically close to the intersection of the cast ray and the geometric representation. A blend weight is determined for each member animation sequence in the selected subset of animation sequences. A blended animation is generated using the selected subset of animation sequences and the blend weights, then rendered to create a final animation. | 08-06-2009 |
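A heavily simplified sketch of the weighting step from the entry above: each animation sequence contributes a locator direction in blend space, the aim direction from a reference bone to the target picks the geometrically closest sequences, and inverse-distance weights blend them. The ray/geometry intersection is reduced to a nearest-direction query here (an assumption):

```python
import numpy as np

def blend_weights(locator_dirs, bone_pos, target_pos, k=2, eps=1e-9):
    # Normalize the aim direction from the reference bone toward the target.
    aim = target_pos - bone_pos
    aim = aim / np.linalg.norm(aim)
    dists = np.linalg.norm(locator_dirs - aim, axis=1)
    nearest = np.argsort(dists)[:k]          # geometrically closest sequences
    w = 1.0 / (dists[nearest] + eps)         # inverse-distance blend weights
    return nearest, w / w.sum()

# Aim-right/up/left/down clips as locator directions.
dirs = np.array([[1.0, 0, 0], [0, 1.0, 0], [-1.0, 0, 0], [0, -1.0, 0]])
idx, w = blend_weights(dirs, bone_pos=np.zeros(3), target_pos=np.array([2.0, 1.0, 0.0]))
print(idx, np.round(w, 3))  # [0 1] [0.696 0.304] -- mostly the aim-right clip
```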
20090195545 | Facial Performance Synthesis Using Deformation Driven Polynomial Displacement Maps - Acquisition, modeling, compression, and synthesis of realistic facial deformations using polynomial displacement maps are described. An analysis phase can be included where the relationship between motion capture markers and detailed facial geometry is inferred. A synthesis phase can be included where detailed animated facial geometry is driven by a sparse set of motion capture markers. For analysis, an actor can be recorded wearing facial markers while performing a set of training expression clips. Real-time high-resolution facial deformations are captured, including dynamic wrinkle and pore detail, using interleaved structured light 3D scanning and photometric stereo. Next, displacements are calculated between a neutral mesh driven by the motion capture markers and the high-resolution captured expressions. These geometric displacements are stored in one or more polynomial displacement maps parameterized according to the local deformations of the motion capture dots. For synthesis, the polynomial displacement maps can be driven with new motion capture data. | 08-06-2009 |
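A minimal sketch of the regression behind the entry above: fit a polynomial that maps a local motion-capture deformation value to a geometric displacement, then drive it with new deformation values. A single 1D quadratic stands in for the patent's polynomial displacement maps (an assumption):

```python
import numpy as np

def fit_pdm(deformations, displacements, degree=2):
    # Least-squares polynomial from local deformation to displacement.
    return np.polyfit(deformations, displacements, degree)

def apply_pdm(coeffs, deformation):
    return np.polyval(coeffs, deformation)

# Training: wrinkle depth grows quadratically as the local region compresses.
deform  = np.linspace(0.0, 1.0, 20)
wrinkle = 0.5 * deform ** 2 + 0.1 * deform
coeffs  = fit_pdm(deform, wrinkle)
print(round(float(apply_pdm(coeffs, 0.8)), 4))  # 0.4 (= 0.5 * 0.64 + 0.1 * 0.8)
```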
20090195546 | IMAGE DISTRIBUTION APPARATUS, IMAGE DISTRIBUTION METHOD, AND IMAGE DISTRIBUTION PROGRAM - In order to prevent a duplicate of a still image from being generated, an MFP includes an image obtaining portion to obtain one or more still images, a moving image generating portion to generate a moving image in which the obtained still images are displayed sequentially, and a distribution portion to perform real-time streaming distribution of the moving image in response to a request from a PC connected to a network. | 08-06-2009 |
20090201297 | ELECTRONIC DEVICE WITH ANIMATED CHARACTER AND METHOD - An electronic device may display an animated character on a display and, when presence of a user is detected, the character may appear to react to the user. The character may be a representation of a person, an animal or other object. Ascertaining when the user is looking at the display may be accomplished by analyzing a video data stream generated by an imaging device, such as a camera used for video telephony. | 08-13-2009 |
20090201298 | System and method for creating computer animation with graphical user interface featuring storyboards - Systems, methods, and computer readable media for customizing a computer animation. A custom animation platform prepares a storyboard including at least one customizable storyboard item and one or more replacement storyboard items configured to replace the customizable storyboard item. Then, the custom animation platform sends the storyboard and the replacement storyboard items to an interactive device via a network to thereby cause a user of the device to select one of the replacement storyboard items. The custom animation platform receives user data including the user's selection from the device and generates a computer animation based on the user data. | 08-13-2009 |
20090207175 | Animation Using Animation Effect and Trigger Element - Among other disclosed subject matter, a computer-implemented method for animating an image element includes determining that a trigger event defined by a trigger element occurs. The method includes, in response to the trigger event, applying an animation effect to a group that comprises at least one image element. A first association between the animation effect and the group is configured for another animation effect to selectively be associated with the group, and a second association between the trigger element and the animation effect is configured for another trigger element to selectively be associated with the animation effect. | 08-20-2009 |
20090213123 | Method of using skeletal animation data to ascertain risk in a surveillance system - The present invention discloses a method of surveillance comprising the steps of matching skeletal animation data representative of recorded motion to a pre-defined animation. The pre-defined animation is associated with a risk value. An end-user is also provided with at least the recorded motion as well as a risk value. The method may be carried out in real time and the skeletal animation data may be three-dimensional. | 08-27-2009 |
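A minimal sketch of the matching step described above: compare recorded skeletal animation data against a library of pre-defined animations and return the risk value of the closest match. Fixed-length joint-angle arrays and an L2 distance are assumptions:

```python
import numpy as np

def assess_risk(recorded, library):
    # recorded: (frames, joints) array; library: list of (animation, risk_value).
    best = min(library, key=lambda entry: np.linalg.norm(recorded - entry[0]))
    return best[1]

frames, joints = 30, 15
rng = np.random.default_rng(0)
walking  = rng.normal(0.0, 0.1, (frames, joints))
fighting = rng.normal(1.0, 0.1, (frames, joints))
library  = [(walking, 0.1), (fighting, 0.9)]
observed = fighting + rng.normal(0.0, 0.05, (frames, joints))
print(assess_risk(observed, library))   # 0.9
```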
20090213124 | Method of Displaying Product and Service Performance Data - An entertaining and informative method of displaying competitive product performance data is disclosed. The various embodiments include a method for displaying product performance data by use of animated contests between animated representatives of competing products. The contest results are relative to selected product test results. The relationship between the test results and the contest results is a mathematical approximation. Thus, a gross disparity in the displayed animated contest is indicative of a gross disparity in the performance of the products on the test. Likewise, a closely fought contest in the displayed animated contest is indicative of close performance of the products on the test. | 08-27-2009 |
20090219292 | SYSTEMS AND METHODS FOR SPECIFYING ARBITRARY ANIMATION CONTROLS FOR MODEL OBJECTS - Systems and methods for defining or specifying an arbitrary set of one or more animation control elements or variables (i.e., “avars”), and for associating the set with a model object or part of a model object. Once a set of avars (“avarset”) is associated with an object model, a user is able to select that model or part of the model, and the avarset associated with that part of the model is made available to, or enabled for, any animation tool that affords avar editing capabilities or allows manipulation of the model using animation control elements. This enables users to create and save sets of avars to share between characters, or other objects, and shots. In certain embodiments, the user can associate multiple avarsets with a model part and can designate one of those sets as “primary” so that when that model part is selected, the designated primary avarset is broadcast to the available editing tools. Additionally, the user can override the primary designation set and select one of the other sets of avars, or the user can cycle through the various associated avarsets. | 09-03-2009 |
20090219293 | CELLULAR TELEPHONE SET AND CHARACTER DISPLAY PRESENTATION METHOD TO BE USED IN THE SAME - A cellular telephone set can increase the number of display patterns of animation display without occupying a large storage region in the memory and without performing a setting operation every time. The character presentation means determines the character to be displayed in each event screen upon depression of the call release button after telephone calling, upon depression of the call release button after telephone call reception, upon occurrence of at least one of an unanswered call and newly received mail, and upon variation of state between the open state and closed state of the first and second casings, depending upon calling history, time of calling, call arrival history, time of call arrival, and the timing of detection of the variation of state between the open and closed states of the first and second casings by the detecting means. | 09-03-2009 |
20090231346 | Diagnostic System for Visual Presentation, Animation and Sonification for Networks - A diagnostic system for visual representation, animation and sonification for networks that requires far less knowledge to use and can reduce the time for analysis even for experts, since it makes pattern analysis much more feasible. | 09-17-2009 |
20090231347 | Method and Apparatus for Providing Natural Facial Animation - Natural inter-viseme animation of 3D head model driven by speech recognition is calculated by applying limitations to the velocity and/or acceleration of a normalized parameter vector, each element of which may be mapped to animation node outputs of a 3D model based on mesh blending and weighted by a mix of key frames. | 09-17-2009 |
20090237411 | Lightweight Three-Dimensional Display - A computer-implemented imaging process method includes generating a progression of images of a three-dimensional model and saving the images at a determined location, generating mark-up code for displaying image manipulation controls and for permitting display of the progression of images in response to user interaction with the image manipulation controls, and providing the images and mark-up code for use by a third-party application. | 09-24-2009 |
20090244071 | Synthetic image automatic generation system and method thereof - Provided is a computer system and a computerized method to automatically generate synthetic images that simulate human activities in a particular environment. The program instructions are input in the form of natural language. Particular columns are provided in the user interface to allow the user to select desired instruction elements from sets of limited candidates. The instruction elements form the program instructions. The system analyzes the program instructions to obtain the standard predetermined time evaluation codes of the instructions. Parameters not included in the input program instructions are generated automatically. Synthetic images are generated by using the input program instructions and the parameters obtained. | 10-01-2009 |
20090251468 | ANIMATING OF AN INPUT-IMAGE TO CREATE PERSONAL WORLDS - The present invention discloses a system and a method for creating a personal animated world of a user by automatically animating an input-image such as a drawing of an animal inputted by the user. | 10-08-2009 |
20090251469 | METHOD FOR DETECTING COLLISIONS AMONG LARGE NUMBERS OF PARTICLES - A method for detecting object collisions in a simulation, which includes identifying a plurality of objects moving along a path within a simulation area, and defining a grid comprising defined regions which individually define a region within which any of the plurality of objects could potentially occupy. For each of the objects, the method further includes identifying which of the defined regions that each of the plurality of object occupies for at least a portion of a time step, and for each of the objects, determining an associated potential collision set by identifying objects of the plurality of objects which occupy common regions of the defined regions during any portion of the time step. In addition, for each of the objects, the method further includes determining an actual collision set comprising objects with which a given object will collide during the time step based upon location parameters of objects included in the potential collision set. | 10-08-2009 |
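The grid-based broad phase in 20090251469 can be made concrete: each object registers in every grid cell it might occupy during the time step, cell-mates form the potential collision set, and an exact overlap test yields the actual collision set. A minimal sketch, assuming circular 2-D objects and a uniform grid (the function names and circle test are illustrative, not the patent's implementation):

```python
# Broad phase: bucket objects into uniform grid cells; narrow phase: exact test.
from collections import defaultdict

def broad_phase(objects, cell_size):
    """objects: list of (id, x, y, radius). Returns candidate collision pairs."""
    grid = defaultdict(list)
    for oid, x, y, r in objects:
        # Register the object in every cell its bounding box overlaps.
        for cx in range(int((x - r) // cell_size), int((x + r) // cell_size) + 1):
            for cy in range(int((y - r) // cell_size), int((y + r) // cell_size) + 1):
                grid[(cx, cy)].append(oid)
    pairs = set()
    for cell in grid.values():  # objects sharing a cell are potential collisions
        for i in range(len(cell)):
            for j in range(i + 1, len(cell)):
                pairs.add((min(cell[i], cell[j]), max(cell[i], cell[j])))
    return pairs

def narrow_phase(objects, pairs):
    """Keep only pairs whose circles actually overlap (the actual collision set)."""
    by_id = {o[0]: o for o in objects}
    return [(a, b) for a, b in sorted(pairs)
            if (by_id[a][1] - by_id[b][1]) ** 2 + (by_id[a][2] - by_id[b][2]) ** 2
            <= (by_id[a][3] + by_id[b][3]) ** 2]

objs = [(0, 0.5, 0.5, 0.4), (1, 1.0, 0.6, 0.4), (2, 5.0, 5.0, 0.4)]
print(narrow_phase(objs, broad_phase(objs, cell_size=1.0)))  # [(0, 1)]
```

The grid keeps the pair test local: rather than O(n²) tests over all particles, each object is compared only against objects sharing one of its cells.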
20090251470 | System and method for compressing a picture - A system for compressing a picture includes: an information extraction unit for extracting information needed for encoding during a picture scene composition and animation process using a modeling object; and a rendering unit for generating an uncompressed picture sequence by rendering the object where the picture scene composition and animation process is performed. Further, the system for compressing a picture includes an encoding unit for generating a compressed bit stream by encoding the picture sequence from the rendering unit based on the information extracted by the information extraction unit. | 10-08-2009 |
20090262116 | MULTI-LAYERED SLIDE TRANSITIONS - Architecture that enhances the visual experience of a slide presentation by animating slide content as “actors” in the same background “scene”. This is provided by multi-layered transitions between slides, where a slide is first separated into “layers” (e.g., with a level of transparency). Each layer can then be transitioned independently. All layers are composited together to accomplish the end effect. The layers can comprise one or more content layers, and a background layer. The background layer can further be separated into a background graphics layer and a background fill layer. The transition phase can include a transition effect such as a fade, a wipe, a dissolve effect, and other desired effects. To provide the continuity and uniformity of presentation the content on the same background scene, a transition effect is not applied to the background layer. | 10-22-2009 |
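One way to picture the multi-layered transitions of 20090262116: content layers are transitioned independently (here, a simple alpha fade) and then composited over the untouched background layer. A toy sketch whose layer model, names, and fade effect are assumptions for illustration:

```python
# Per-layer transition followed by compositing over a static background layer.
import numpy as np

def fade(layer_rgba, t):
    """Transition effect: scale a content layer's alpha by progress t in [0, 1]."""
    out = layer_rgba.copy()
    out[..., 3] = layer_rgba[..., 3] * t
    return out

def composite(background_rgb, layers_rgba):
    """Alpha-composite the transitioned layers, in order, over the background."""
    frame = background_rgb.astype(float)
    for layer in layers_rgba:
        a = layer[..., 3:4]
        frame = layer[..., :3] * a + frame * (1.0 - a)
    return frame.astype(np.uint8)

background = np.full((2, 2, 3), 200, np.uint8)    # background layer: no transition
content = np.zeros((2, 2, 4)); content[..., 0] = 255; content[..., 3] = 1.0
for t in (0.0, 0.5, 1.0):                         # content fades in over the scene
    print(t, composite(background, [fade(content, t)])[0, 0])
```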
20090262117 | Displaying traffic flow data representing traffic conditions - An article of manufacture for displaying traffic flow data representing traffic conditions on a road system includes creating a graphical map of the road system which includes one or more segments. The status of each segment on the graphical map is determined such that the status of each segment corresponds to the traffic flow data associated with that segment. An animated traffic flow map of the road system is created by combining the graphical map and the status of each segment. | 10-22-2009 |
20090262118 | METHOD, SYSTEM AND STORAGE DEVICE FOR CREATING, MANIPULATING AND TRANSFORMING ANIMATION - An animation method, system, and storage device which takes animators' submissions of characters and animations and breaks the animations into segments where discontinuities will be minimized; allows users to assemble the segments into new animations; allows users to apply modifiers to the characters; provides a semantic restraint system for virtual objects; and provides automatic character animation retargeting. | 10-22-2009 |
20090262119 | OPTIMIZATION OF TIME-CRITICAL SOFTWARE COMPONENTS FOR REAL-TIME INTERACTIVE APPLICATIONS - A method for optimizing the performance of a time-critical computation component for a real-time interactive application includes the use of an algorithm having a precise logical thread and at least one fast estimation logical thread. The computation errors generated by the fast estimation thread are imperceptible by humans and are frame specific such that the errors are corrected within a graphical frame's time by the data from the precise logical thread. | 10-22-2009 |
20090267948 | OBJECT BASED AVATAR TRACKING - A computer implemented method, apparatus, and computer program product for object based avatar tracking. In one embodiment, a range for an object in a virtual universe is identified. The range comprises a viewable field of the object. Avatars in the viewable field of the object are capable of viewing the object. Avatars outside the viewable field of the object are incapable of viewing the object. In response to an avatar coming within the range of the object, an object avatar rendering table is queried for a session associated with the avatar unique identifier and the object unique identifier. The object avatar rendering table comprises a unique identifier of a set of selected objects and unique identifiers for each avatar in a range of a selected object in the set of selected objects. An object initiation process associated with the object is triggered. | 10-29-2009 |
20090267949 | Spline technique for 2D electronic game - A technique for generating splines in two dimensions for use in electronic game play is disclosed. The technique includes generating a computer graphic of a shape to be animated that is formed by one or more splines. The shape also includes at least one joint. When the position or orientation of the joint is changed, the orientation and/or position of the splines corresponding to the joint are changed resulting in changes to the shape. | 10-29-2009 |
20090278851 | METHOD AND SYSTEM FOR ANIMATING AN AVATAR IN REAL TIME USING THE VOICE OF A SPEAKER - This is a method and a system for animating on a screen ( | 11-12-2009 |
20090284533 | Method of rendering an image and a method of animating a graphics character - A computer implemented method of generating behavior of a graphics character within an environment including a selected graphics character and one or more graphics elements, the method comprising: generating an image of the environment from a perspective of the selected graphics character; processing the image using an artificial intelligence engine with one or more layers to determine an activation value for the graphics character wherein at least one of the layers is a fuzzy processing layer, and generating the behavior of the graphics character based on the activation value. | 11-19-2009 |
20090289944 | IMAGE PROCESSING APPARATUS, IMAGE OUTPUTTING METHOD, AND IMAGE OUTPUTTING PROGRAM EMBODIED ON COMPUTER READABLE MEDIUM - In order to enable a still image to be checked while preventing leakage of confidential information contained in the still image, an MFP includes: an image acquiring portion to acquire a still image; an encoding portion to generate encoded data by encoding the acquired still image using an encoding key stored in advance; a decoding portion to decode the encoded data using the encoding key or a decoding key corresponding to the encoding key; and a transmitting portion to externally output the decoded still image in an electronically non-recordable form. | 11-26-2009 |
20090295806 | Dynamic Scene Descriptor Method and Apparatus - A method for rendering a frame of animation includes retrieving scene descriptor data that specifies at least one object, wherein the object is associated with a first database query, wherein the first database query is associated with a first rendering option, receiving a selection of the first rendering option or a second rendering option, querying a database with the first database query and receiving a first representation of the object from a database when the selection is of the first rendering option, loading the first representation of the object into computer memory when the selection is of the first rendering option, and rendering the object for the frame of animation using the first representation of the object when the selection is of the first rendering option, wherein the first representation of the object is not loaded into computer memory when the selection is of the second rendering option. | 12-03-2009 |
20090309881 | COPYING OF ANIMATION EFFECTS FROM A SOURCE OBJECT TO AT LEAST ONE TARGET OBJECT - A method and a processing device may be provided for copying animation effects of a source object to one or more target objects of a presentation. The source object and the target objects may be included in presentation templates, or presentation slides of presentation files. The one or more target objects may be included in a same presentation slide as the source object, a different presentation slide as the source object, a same presentation file as the source object, a different presentation file as a source object, a same presentation template as a source object, or a different presentation template as the source object. Animation effects that are supported by a target object may be copied from the source object to the target object. When copying one or more animation effects from the source object to multiple target objects, timing of the animation effects may be serial or concurrent. | 12-17-2009 |
20090315893 | USER AVATAR AVAILABLE ACROSS COMPUTING APPLICATIONS AND DEVICES - An avatar along with its accessories, emotes, and animations may be system provided and omnipresent. In this manner, the avatar and its accessories, emotes, and animations may be available across multiple environments provided or exposed by multiple avatar computing applications, such as computer games, chats, forums, communities, or instant messaging services. An avatar system may change the avatar and its accessories, emotes, and animations, e.g. pursuant to a request from the user, instructions from an avatar computing application, or updates provided by software associated with a computing device. The avatar and its accessories, emotes, and animations may be changed by a system or computing application associated with a computing device outside of a computer game or computing environment in which the avatar may be rendered or used by the user. | 12-24-2009 |
20090315894 | BROWSER-INDEPENDENT ANIMATION ENGINES - Tools and techniques are described for browser-independent animation engines. These animation engines may include browser-independent animation objects that represent entities that may be animated within a browser. These animation objects may define animation attributes, with the animation attributes being associated with attribute values that describe aspects of the entity. The animation attributes may also be associated with animation evaluators that define how the attribute value changes over time. These animation engines may also include a browser-specific layer for interpreting the attribute values into instructions specific to the browser. | 12-24-2009 |
20090315895 | PARAMETRIC FONT ANIMATION - Font animation technique embodiments are presented which animate alpha-numeric characters of a message or document. In one general embodiment this is accomplished by the sender transmitting parametric information and animation instructions pertaining to the display of characters found in the message or document to a recipient. The parametric information identifies where to split the characters and where to rotate the resulting sections. The sections of each character affected are then translated and/or rotated and/or scaled as dictated by the animation instructions to create an animation over time. Additionally, if a gap in a stroke of an animated character exists between the sections of the character, a connecting section is displayed to close the stroke gap, making the character appear contiguous. | 12-24-2009 |
20090315896 | ANIMATION PLATFORM - An animation platform for managing the interpolation of values of one or more animation variables from one or more applications. The animation platform uses animation transitions to interpolate the values of the animation variables. When conflicts arise, the animation platform implements application-supplied logic to determine an execution priority of the conflicting animation transitions. | 12-24-2009 |
20090315897 | ANIMATION PLATFORM - An animation platform for managing the interpolation of values of one or more animation variables from one or more applications. The animation platform uses animation transitions to interpolate the values of the animation variables. The animation platform uses a continuity parameter to smoothly switch from one animation transition to the next. | 12-24-2009 |
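The two animation-platform entries above (20090315896 and 20090315897) both revolve around interpolating animation variable values through transitions and handing off cleanly between them. A minimal sketch of one simple continuity policy, in which a conflicting transition starts from the variable's current value so the switch produces no visual jump; the class shape and linear interpolator are assumptions, not the platform's API:

```python
# Transition-based interpolation with smooth (C0) hand-off between transitions.
class Variable:
    def __init__(self, value):
        self.value = value
        self.transition = None

    def begin(self, target, duration, now):
        # Start the new transition from the *current* value so replacing a
        # transition mid-flight does not jump.
        self.transition = (self.value, target, now, duration)

    def update(self, now):
        if self.transition:
            start, target, t0, dur = self.transition
            t = min((now - t0) / dur, 1.0)
            self.value = start + (target - start) * t
            if t >= 1.0:
                self.transition = None
        return self.value

v = Variable(0.0)
v.begin(target=10.0, duration=2.0, now=0.0)
v.update(now=1.0)                            # halfway: value == 5.0
v.begin(target=0.0, duration=1.0, now=1.0)   # conflicting transition takes over
print(v.update(now=1.5))                     # 2.5 — smooth hand-off from 5.0 toward 0.0
```

A richer continuity parameter could also match velocity at the switch point; this sketch only matches value.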
20090315898 | PARAMETER CODING PROCESS FOR AVATAR ANIMATION, AND THE DECODING PROCESS, SIGNAL, AND DEVICES THEREOF - The invention relates to a method for coding animation parameters for a character (A) with which are associated morphological values, said animation parameters comprising a translation parameter associated with at least one part of said character (A), characterized in that to code an intrinsic translation of said part of said character (A) by a translation vector, said translation parameter contains a value which is dependent on said vector and on one of said morphological values. | 12-24-2009 |
20090322760 | Dynamic animation scheduling - Dynamic animation scheduling techniques are described in which application callbacks are employed to permit dynamic scheduling of animations. An application may create a storyboard that defines an animation as transitions applied to a set of variables. The storyboard may be communicated to an animation component configured to schedule the storyboard. The animation component may then communicate one or more callbacks at various times to the application that describe a state of the variables. Based on the callbacks, the application may specify changes, additions, deletions, and/or other modifications to dynamically modify the storyboard. To draw the animation, the application may communicate a get variable values command to the animation component. The animation component performs calculations to update the variable values based on the storyboard and communicates the results to the application. The application may then cause output of the animation defined by the storyboard. | 12-31-2009 |
20090322761 | APPLICATIONS FOR MOBILE COMPUTING DEVICES - A sequence of images is displayed in response to user input, such as an answer to a question, a touch and drag operation, a tap operation or shaking of a mobile device. The images may be displayed in an order determined by a direction implied by the user input, and may be accompanied by music. The display of the sequence of images may continue for a time determined by the shaking of the device prior to commencement of the display of the sequence of images. The sequence of images may depict a common constituent in successively different poses or states. | 12-31-2009 |
20090322762 | Animated performance tool - A performance tool comprises a program that configures metrics into animated scenarios and at least one display that displays the animated scenarios. The animated scenarios illustrate measurable inputted data from multiple sets of data that are juxtaposed with one another. | 12-31-2009 |
20100007665 | Do-It-Yourself Photo Realistic Talking Head Creation System and Method - A do-it-yourself photo realistic talking head creation system comprising: a template; a handheld device comprising a display and a video camera having an image output signal of a subject; a computer having a mixer program for mixing the template and the image output signal of the subject into a composite image, and an output signal representational of the composite image; a computer adapted to communicate the composite image signal to the display for display to the subject as a composite image; the display and the video camera adapted to allow the video camera to collect the image of the subject, the subject to view the composite image, and the subject to align the image of the subject with the template; and storage means having an input for receiving the output signal of the video camera representational of the collected image of the subject and storing the image of the subject substantially aligned with the template. | 01-14-2010 |
20100013836 | Method and apparatus for producing animation - Provided are a method and apparatus for interactively producing an animation. User-level content for producing an animation may be queried for and received from a user, and a video script representing the animation may be created using the user-level content based on regulation information and animation direction knowledge. An image of the animation may then be output by playing the video script. | 01-21-2010 |
20100013837 | Method And System For Controlling Character Animation - Embodiments of the present invention provide a method for controlling character animation, in which the character animation includes at least two bones and skins corresponding to the bones, the method includes: (a) dividing the character animation into at least two parts, and setting an identification number for each part; (b) establishing a mapping table comprising a corresponding relationship between the identification number and skin data of each part; (c) picking skin data of an operation focus location in the character animation; (d) querying the mapping table according to the skin data, obtaining a corresponding identification number, and controlling the part in the character animation corresponding to the identification number. Embodiments of the present invention also provide a system for controlling character animation. Different parts of the character animation may be picked respectively by dividing the character animation into multiple parts. | 01-21-2010 |
20100026688 | GRAPHICAL WIND GAUGE - A wind gauge display apparatus comprising a control device and a reconfigurable display for displaying a first visual representation of a wind gauge if a wind angle is within a first range and displaying a second visual representation of the wind gauge if the wind angle is within a second range. The angles displayed on the reconfigurable display may be determined by input from a user. On the reconfigurable display, a location of a visual indicator of wind speed may be different in the first visual representation of the wind gauge than in the second visual representation of the wind gauge. The wind gauge display apparatus may also comprise a sensor for determining wind angle and wind speed. | 02-04-2010 |
20100033488 | Example-Based Motion Detail Enrichment in Real-Time - An approach to enrich skeleton-driven animations with physically-based secondary deformation in real time is described. To achieve this goal, the technique described employs a surface-based deformable model that can interactively emulate the dynamics of both low- and high-frequency volumetric effects. Given a surface mesh and a few sample sequences of its physical behavior, a set of motion parameters of the material are learned during an off-line preprocessing step. The deformable model is then applicable to any given skeleton-driven animation of the surface mesh. Additionally, the described dynamic skinning technique can be entirely implemented on GPUs and executed with great efficiency. Thus, with minimal changes to the conventional graphics pipeline, the technique can drastically enhance the visual experience of skeleton-driven animations by adding secondary deformation in real time. | 02-11-2010 |
20100039433 | VISUALIZATION EMPLOYING HEAT MAPS TO CONVEY QUALITY, PROGNOSTICS, OR DIAGNOSTICS INFORMATION - A visualization system for creating, displaying and animating overview and detail heat map displays for industrial automation. The visualization system connects the heat map displays to an interface component providing manual or automatic input data from an industrial process or an archive of historical industrial process input data. The animated heat map displays provide quality, prognostic or diagnostic information. | 02-18-2010 |
20100039434 | Data Visualization Using Computer-Animated Figure Movement - Methods, systems, and apparatus, including medium-encoded computer program products, can provide data visualization using computer animated figure movement. A computer animated figure is associated with a data stream. A set of movements to be performed by the computer animated figure in response to one or more data characteristics of the data stream is assigned. The data stream is received and processed to determine the one or more data characteristics. The computer animated figure is animated according to the assigned set of movements in response to determining the one or more data characteristics. | 02-18-2010 |
20100045680 | PERFORMANCE DRIVEN FACIAL ANIMATION - A method of animating a digital facial model, the method including: defining a plurality of action units; calibrating each action unit of the plurality of action units via an actor's performance; capturing first facial pose data; determining a plurality of weights, each weight of the plurality of weights uniquely corresponding to the each action unit, the plurality of weights characterizing a weighted combination of the plurality of action units, the weighted combination approximating the first facial pose data; generating a weighted activation by combining the results of applying the each weight to the each action unit; applying the weighted activation to the digital facial model; and recalibrating at least one action unit of the plurality of action units using input user adjustments to the weighted activation. | 02-25-2010 |
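The weighted-combination step of 20100045680 amounts to solving for per-action-unit weights whose blend best approximates the captured pose. A sketch using projected gradient descent as a non-negative least-squares stand-in (the solver, matrix layout, and names are illustrative assumptions, not the patented method):

```python
# Solve for action-unit weights w >= 0 minimizing ||A.T w - pose||^2.
import numpy as np

def solve_weights(action_units, pose, iters=200, lr=0.01):
    """action_units: (k, n) matrix, one flattened AU displacement per row.
    pose: (n,) captured pose displacement. Returns k non-negative weights."""
    A = action_units
    w = np.zeros(A.shape[0])
    for _ in range(iters):                 # projected gradient descent
        grad = A @ (A.T @ w - pose)        # gradient of the squared error
        w = np.maximum(w - lr * grad, 0.0) # project onto the constraint w >= 0
    return w

rng = np.random.default_rng(0)
aus = rng.normal(size=(4, 30))             # 4 calibrated action units
true_w = np.array([0.8, 0.0, 0.3, 0.0])
captured = true_w @ aus                    # synthetic captured pose
print(np.round(solve_weights(aus, captured), 2))  # ≈ [0.8, 0.0, 0.3, 0.0]
```

The recovered weights form the "weighted activation" that is then applied to the digital facial model.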
20100053172 | MESH TRANSFER USING UV-SPACE - Mesh data and other proximity information from the mesh of one model can be transferred to the mesh of another model, even with different topology and geometry. A correspondence can be created for transferring or sharing information between points of a source mesh and points of a destination mesh. Information can be “pushed through” the correspondence to share or otherwise transfer data from one mesh to its designated location at another mesh. Correspondences can be created based on parameterization information, such as UV sets, one or more maps, harmonic parameterization, or the like. A collection of “feature curves” may be inferred or user-placed to partition the source and destination meshes into a collection of “feature regions” resulting in partitions or “feature curve networks” for constructing correspondences between all points of one mesh and all points of another mesh. | 03-04-2010 |
20100060646 | ARBITRARY FRACTIONAL PIXEL MOVEMENT - A technique is provided for displaying pixels of an image at arbitrary subpixel positions. In accordance with aspects of this technique, interpolated intensity values for the pixels of the image are derived based on the arbitrary subpixel location and an intensity distribution or profile. Reference to the intensity distribution provides appropriate multipliers for the source image. Based on these multipliers, the image may be rendered at respective physical pixel locations such that the pixel intensities are summed with each rendering, resulting in a destination image having suitable interpolated pixel intensities for the arbitrary subpixel position. | 03-11-2010 |
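The technique in 20100060646 can be illustrated with the simplest intensity profile, bilinear weights: each source pixel's intensity is split across the four physical pixels around its fractional destination, and the contributions are summed. The bilinear profile below stands in for the more general intensity distribution the abstract refers to; all names are assumptions:

```python
# Render an image at an arbitrary fractional offset by distributing each
# source pixel's intensity over the surrounding physical pixels.
import numpy as np

def shift_subpixel(img, dx, dy):
    """Shift a 2-D intensity image by fractional offsets 0 <= dx, dy < 1."""
    h, w = img.shape
    out = np.zeros((h + 1, w + 1))
    # The bilinear weights act as the multipliers drawn from the profile;
    # each source pixel contributes to four destination pixels, summed.
    for wy, oy in ((1 - dy, 0), (dy, 1)):
        for wx, ox in ((1 - dx, 0), (dx, 1)):
            out[oy:oy + h, ox:ox + w] += wy * wx * img
    return out

img = np.zeros((3, 3)); img[1, 1] = 100.0
print(shift_subpixel(img, 0.25, 0.5))  # intensity 100 split 37.5/12.5/37.5/12.5
```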
20100060647 | Animating Speech Of An Avatar Representing A Participant In A Mobile Communication - Animating speech of an avatar representing a participant in a mobile communication including selecting one or more images; selecting a generic animation template; fitting the one or more images with the generic animation template; texture wrapping the one or more images over the generic animation template; and displaying the one or more images texture wrapped over the generic animation template. Receiving an audio speech signal; identifying a series of phonemes; and for each phoneme: identifying a new mouth position for the mouth of the generic animation template; altering the mouth position to the new mouth position; texture wrapping a portion of the one or more images corresponding to the altered mouth position; displaying the texture wrapped portion of the one or more images corresponding to the altered mouth position of the mouth of the generic animation template; and playing the portion of the audio speech signal represented by the phoneme. | 03-11-2010 |
20100066745 | Face Image Display, Face Image Display Method, and Face Image Display Program - The present invention provides a facial image display apparatus that can display moving images concentrated on the face when images of people's faces are displayed. A facial image display apparatus is provided wherein a facial area detecting unit ( | 03-18-2010 |
20100066746 | Widgetized avatar and a method and system of creating and using same - A widgetized avatar and a method and system of creating and using same is disclosed. The avatar includes computing code that provides for addition of the avatar as non-static content to at least two unique at least partially static web pages, and secondary computing code resident within the computing code, wherein the secondary computing code provides for association with at least one other portion of the computing code of ones selected from a plurality of physical characteristics, a plurality of personal information, and a plurality of actions. | 03-18-2010 |
20100073379 | METHOD AND SYSTEM FOR RENDERING REAL-TIME SPRITES - A method and system for improving rendering performance at a client. The method includes, responsive to an initial request for a first animation sequence at a client, downloading a first 3D object from a server. The method includes rendering the first 3D object into the first animation sequence. The method includes displaying the first animation sequence to a user. The method includes caching the first animation sequence in an accessible memory. The method includes, responsive to a repeat request for the first animation sequence, retrieving the cached first animation sequence from the accessible memory. | 03-25-2010 |
20100073380 | Method of operating a design generator for personalization of electronic devices - A method of generating a customized image includes forming a first design including a first pattern having a first color and a second color. The method also includes receiving input from a user using a design modification element. The method further includes forming a second design including a second pattern including a third color and a fourth color. A change from the first design to the second design is proportional to the input received from the user using the design modification element. | 03-25-2010 |
20100073381 | Methods for generating one or more composite image maps and systems thereof - A method, computer readable medium, and system for generating a composite image map includes obtaining a plurality of sprites for an application page and determining coordinates of each of the obtained plurality of sprites. A composite image map is generated based on the obtained plurality of sprites and the determined coordinates. | 03-25-2010 |
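For 20100073381, a composite image map is the obtained sprites pasted into one sheet plus a record of where each landed. A naive vertical-stacking sketch using Pillow (real packers use smarter layouts; the file paths and names are hypothetical):

```python
# Build one composite image map from individual sprites and record coordinates.
from PIL import Image

def build_sprite_map(paths):
    sprites = [Image.open(p).convert("RGBA") for p in paths]
    width = max(s.width for s in sprites)
    height = sum(s.height for s in sprites)
    sheet = Image.new("RGBA", (width, height))
    coords, y = {}, 0
    for path, s in zip(paths, sprites):
        sheet.paste(s, (0, y))
        coords[path] = (0, y, s.width, s.height)  # x, y, w, h for later cropping
        y += s.height
    return sheet, coords

# sheet, coords = build_sprite_map(["icon_a.png", "icon_b.png"])  # hypothetical files
# sheet.save("sprites.png"); print(coords)
```

The recorded coordinates are what the application page would use to crop each sprite back out of the single downloaded image.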
20100073382 | SYSTEM AND METHOD FOR SEQUENCING MEDIA OBJECTS - A method of displaying a long animation is provided. The animation is defined in an animation file, which identifies a set of images that form the animation when sequentially displayed. A batch processor segments the set of images into sequential subsets, with each subset sized smaller than a maximum size. In this way, all of the images identified in a particular subset may be loaded into memory. Each subset of images is associated with a respective segment identifier, and an instruction is provided along with the images to order the subsets. In this way, a first subset of images provides for the loading of a second subset of images, thereby enabling the display of long animations. | 03-25-2010 |
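The segmentation of 20100073382 can be sketched as a greedy split of the frame list under a size cap, with each subset carrying a segment identifier and an instruction pointing at its successor, so that loading one subset triggers the next. The field names and the greedy policy are assumptions:

```python
# Split a long ordered frame list into loadable segments under a size cap.
def segment_animation(frames, sizes, max_bytes):
    """frames: ordered image names; sizes: bytes per image."""
    segments, current, used = [], [], 0
    for frame, size in zip(frames, sizes):
        if current and used + size > max_bytes:
            segments.append(current)
            current, used = [], 0
        current.append(frame)
        used += size
    if current:
        segments.append(current)
    # Attach a segment identifier and an instruction: segment i loads i + 1.
    return [{"id": i, "frames": seg, "next": i + 1 if i + 1 < len(segments) else None}
            for i, seg in enumerate(segments)]

frames = [f"frame{i:03d}.png" for i in range(6)]
print(segment_animation(frames, sizes=[40] * 6, max_bytes=100))  # 3 chained subsets
```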
20100079466 | ASYNCHRONOUS STREAMING OF DATA FOR VALIDATION - The present invention relates to computer capture of object motion. More specifically, embodiments of the present invention relate to capturing of facial movement or performance of an actor. Embodiments of the present invention provide a head-mounted camera system that allows the movements of an actor's face to be captured separately from, but simultaneously with, the movements of the actor's body. In some embodiments of the present invention, a method of motion capture of an actor's performance is provided. A self-contained system is provided for recording the data, which is free of tethers or other hard-wiring, is remotely operated by a motion-capture team, without any intervention by the actor wearing the device. Embodiments of the present invention also provide a method of validating that usable data is being acquired and recorded by the remote system. | 04-01-2010 |
20100085363 | Photo Realistic Talking Head Creation, Content Creation, and Distribution System and Method - A system and method for creating, distributing, and viewing photo-realistic talking head based multimedia content over a network, comprising a server and a variety of communication devices, including cell phones and other portable wireless devices, and a software suite, that enables users to communicate with each other through creation, use, and sharing of multimedia content, including photo-realistic talking head animations combined with text, audio, photo, and video content. Content is uploaded to at least one remote server, and accessed via a broad range of devices, such as cell phones, desktop computers, laptop computers, personal digital assistants, and cellular smartphones. Shows comprising the content may be viewed with a media player in various environments, such as internet social networking sites and chat rooms via a web browser application, or applications integrated into the operating systems of the digital devices, and distributed via the internet, cellular wireless networks, and other suitable networks. | 04-08-2010 |
20100097384 | PROGRAM DESIGNED MASTER ANIMATION AND METHOD FOR PRODUCING THEREOF - Disclosed is a PDMA animation production method including the steps of storing animation materials constituting an animation and information separately when a PDMA is produced, the animation materials including texts, graphics, movies, and audios; partitioning frame information as desired, the frame information being construction units of the animation; separating the partitioned frame information into respective information; storing animation information together with information regarding texts, graphics, movies, and audios constituting the animation while interworking with a DB program, the animation information including the frame information; interpreting information stored in the DB program by the PDMA; and retrieving animation sources matching with the interpreted information and combining corresponding data by the PDMA to play the animation. | 04-22-2010 |
20100103178 | SPATIALLY-AWARE PROJECTION PEN - One embodiment of the present invention sets forth a technique for providing an end user with a digital pen embedded with a spatially-aware miniature projector for use in a design environment. Paper documents are augmented to allow a user to access additional information and computational tools through projected interfaces. Virtual ink may be managed in single and multi-user environments to enhance collaboration and data management. The spatially-aware projector pen provides end-users with dynamic visual feedback and improved interaction capabilities. | 04-29-2010 |
20100110081 | SOFTWARE-AIDED CREATION OF ANIMATED STORIES - Software assistance that allows a child or other author to generate a story. The author may generate their own content and add that author-generated content to the story. For instance, the author could draw their own background, background items, and/or characters. These drawn items could even be added to a library so that they could be reused in other stories. The author can define their own animations associated with characters and background items, rather than selecting predefined animations. The story timeline may also keep track of events that are caused by the author interacting with the story in particular ways, and that represent significant story changes. The author may then jump to these navigation points to delete the event, thereby removing the effects of the story change. | 05-06-2010 |
20100110082 | Web-Based Real-Time Animation Visualization, Creation, And Distribution - The subject matter disclosed herein provides methods and apparatus, including computer program products, for generating animations in real-time. In one aspect there is provided a method. The method may include generating an animation by selecting one or more clips, the clips configured to include a first state representing an introduction, a second state representing an action, and a third state representing an exit, the first state and the third state including substantially the same frame, such that the character appears in the same position in the frame, and providing the generated animation for presentation at a user interface. Related systems, apparatus, methods, and/or articles are also described. | 05-06-2010 |
20100118033 | SYNCHRONIZING ANIMATION TO A REPETITIVE BEAT SOURCE - An animated dance is made up of a plurality of frames. The dance includes a plurality of different moves delineated by a set of synchronization points. A total number of frames for the video track is determined and a corresponding video track is generated such that the resulting video track is synchronized at the synchronization points to beats of the audio track. | 05-13-2010 |
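The synchronization in 20100118033 reduces to simple arithmetic: a tempo and frame rate fix the number of frames per beat, so each move's span in beats determines the absolute frame index of its synchronization point. A back-of-the-envelope sketch (all names and numbers are illustrative):

```python
# Fit dance-move boundaries to audio beats at a given tempo and frame rate.
def frames_for_moves(move_beats, bpm, fps):
    """move_beats: beats each dance move should span. Returns sync points
    (absolute frame indices) and the frame count allotted to each move."""
    frames_per_beat = fps * 60.0 / bpm
    total, plan = 0.0, []
    for beats in move_beats:
        total += beats * frames_per_beat
        plan.append(round(total))        # synchronization point in frames
    counts = [b - a for a, b in zip([0] + plan, plan)]
    return plan, counts

sync_points, frame_counts = frames_for_moves([4, 4, 8], bpm=120, fps=30)
print(sync_points, frame_counts)  # [60, 120, 240] [60, 60, 120]
```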
20100118034 | APPARATUS AND METHOD OF AUTHORING ANIMATION THROUGH STORYBOARD - An animation authoring apparatus and method of authoring an animation including a storyboard editor to provide a storyboard editing screen, to interact with a user to edit a storyboard, and to store the edited storyboard, a parser to parse syntax of the edited storyboard, and a rendering engine to convert the edited storyboard into a graphic animation based on the parsed syntax of the edited storyboard. | 05-13-2010 |
20100118035 | MOVING IMAGE GENERATION METHOD, MOVING IMAGE GENERATION PROGRAM, AND MOVING IMAGE GENERATION DEVICE - A moving image generation method includes: a content designation step of designating a plurality of contents used for a moving image; a content collecting step of collecting each designated content; a content image generation step of generating content images based on the collected contents; a display mode setting step of setting a display mode of each generated content image; and a moving image generation step of generating a moving image where each content image alters with respect to time in accordance with the display mode which has been set. | 05-13-2010 |
20100118036 | APPARATUS AND METHOD OF AUTHORING ANIMATION THROUGH STORYBOARD - Described herein is an animation authoring apparatus and method thereof for authoring an animation. The apparatus includes a storyboard editor that provides a storyboard editing display that a user may interact with to edit a storyboard, and to store the edited storyboard. The apparatus further includes a parser to parse syntax of the edited storyboard, and a rendering engine to convert the edited storyboard into a graphic animation based on the parsed syntax of the edited storyboard. | 05-13-2010 |
20100118037 | OBJECT-AWARE TRANSITIONS - Techniques for accomplishing slide transitions in a presentation are disclosed. In accordance with these techniques, each object on a slide is individually manipulable during slide transitions. In certain embodiments, the presence of an object on both the outgoing and incoming slides may be taken into account during slide transition. Likewise, in certain embodiments, derivative objects, such as shadows or reflections, may be handled as distinct objects in generating a transition between slides. | 05-13-2010 |
20100123722 | SYSTEMS AND METHODS INVOLVING GRAPHICALLY DISPLAYING CONTROL SYSTEMS - A method for displaying a control system comprising: receiving a function block diagram file including a function block having an associated logic function, receiving an animation instruction associated with the function block, receiving system data from a system controller, receiving a first graphic associated with the logic function from a function block library, processing the first graphic and the system data according to the animation instruction to render an updated first graphic reflecting the system data, and displaying the function block and the rendered updated first graphic associated with the logic function. | 05-20-2010 |
20100123723 | SYSTEM AND METHOD FOR DEPENDENCY GRAPH EVALUATION FOR ANIMATION - Aspects include systems, devices, and methods for evaluating a source dependency graph and animation curve with a game system. The dependency graph may be evaluated at an interactive rate during execution of a game. The animation curve may describe change in a state of a control element over time. Subnetworks of the dependency graph may be identified and evaluated using a plurality of processors. | 05-20-2010 |
20100123724 | Portable Touch Screen Device, Method, and Graphical User Interface for Using Emoji Characters - In some embodiments, a computer-implemented method performed at a portable electronic device with a touch screen display includes simultaneously displaying a character input area operable to display text character input and emoji character input selected by a user, a keyboard display area, and a plurality of emoji category icons. In response to detecting a gesture on a respective emoji category icon, the method also includes simultaneously displaying: a first subset of emoji character keys for the respective emoji category in the keyboard display area and a plurality of subset-sequence-indicia icons for the respective emoji category. The method also includes detecting a gesture in the keyboard display area and, in response: replacing display of the first subset of emoji character keys with display of a second subset of emoji character keys for the respective emoji category, and updating the information provided by the subset-sequence-indicia icons. | 05-20-2010 |
20100128042 | SYSTEM AND METHOD FOR CREATING AND DISPLAYING AN ANIMATED FLOW OF TEXT AND OTHER MEDIA FROM AN INPUT OF CONVENTIONAL TEXT - A system and method for generating and displaying text on a screen as an animated flow from a digital input of conventional text. The invention divides text into short-scan lines of coherent semantic value that progressively animate from invisible to visible and back to invisible. Multiple line displays are frequent. The effect is aesthetically engaging, perceptually focusing, and cognitively immersive. The reader watches the text like watching a movie. The invention may exist in whole or in part as a standalone application on a specific screen device. The invention includes a manual authoring tool that allows the insertion of non-text media such as sound, images, and advertisements. | 05-27-2010 |
20100134499 | STROKE-BASED ANIMATION CREATION - A method, apparatus, and computer-readable medium are provided that allow a user to easily generate and play back animation on a computing device. A user can use a mouse, stylus, or finger to draw a stroke indicating a path and speed with which a graphical object should be moved during animation playback. The graphical object may comprise a cartoon character, drawing, or other type of image. In a sequential mode, separate tracks are provided for each graphical object, and the objects move along tracks sequentially (one at a time). In a synchronous mode, graphical objects move along tracks concurrently. Different gestures can be automatically selected for the graphical object at each point along the track, allowing motion to be simulated visually. | 06-03-2010 |
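Because 20100134499 captures both the path and the speed of a drawn stroke, playback needs timestamped samples and time-based interpolation rather than a bare polyline. A minimal sketch, with the sample format and helper names assumed:

```python
# Record a stroke as (time, x, y) samples; replay the object along it at
# the speed with which it was drawn.
import bisect

def record(samples):
    """samples: list of (t, x, y) captured while the stroke is drawn."""
    return sorted(samples)

def position_at(stroke, t):
    """Interpolate the object's position on the stroke at playback time t."""
    times = [s[0] for s in stroke]
    i = bisect.bisect_right(times, t)
    if i == 0:
        return stroke[0][1:]
    if i == len(stroke):
        return stroke[-1][1:]
    (t0, x0, y0), (t1, x1, y1) = stroke[i - 1], stroke[i]
    a = (t - t0) / (t1 - t0)
    return (x0 + a * (x1 - x0), y0 + a * (y1 - y0))

stroke = record([(0.0, 0, 0), (0.5, 10, 0), (2.0, 10, 20)])  # fast, then slow
print(position_at(stroke, 0.25), position_at(stroke, 1.25))  # (5.0, 0.0) (10.0, 10.0)
```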
20100141661 | CONTENT GENERATION SYSTEM, CONTENT GENERATION DEVICE, AND CONTENT GENERATION PROGRAM - A content generation system includes a host terminal and an encode terminal. The host terminal has: a lecture material display unit for displaying a lecture material on a desk top; and a desk top image transmission unit for transmitting a desk top image. The encode terminal has: a lecturer imaging data generation unit which generates lecturer imaging data by capturing a lecture performed by the lecturer; an animation data generation unit which generates animation data from the image on the desk top received from the host terminal in synchronization with the lecturer imaging data; and a content data transmission unit which transmits content data containing the lecturer imaging data and the animation data. | 06-10-2010 |
20100141662 | COMMUNICATION NETWORK AND DEVICES FOR TEXT TO SPEECH AND TEXT TO FACIAL ANIMATION CONVERSION - A communication system comprises a sending device, a receiving device and a network which connects the sending device to the receiving device. The sending device comprises at least one user operable input for entering a sequence of textual characters as a message and transmission means for sending the message across the network. The receiving device comprises a memory which stores a plurality of head images, each one being associated with a different sending device and comprising an image of a head viewed from the front, receiver means for receiving the message comprising the sequence of textual characters, text to speech converting means for converting the text characters of the message into an audio message corresponding to the sequence of text characters and animating means for generating an animated partial 3D image of a head from the head image stored in the memory which is associated with the sender of the message. The animating means animates at least one facial feature of the head, the animation corresponding to the movements made by the head when reading the message. A display displays the animated partial 3D head; and a loudspeaker outputs the audio message in synchronisation with the displayed head. | 06-10-2010 |
20100141663 | SYSTEM AND METHODS FOR DYNAMICALLY INJECTING EXPRESSION INFORMATION INTO AN ANIMATED FACIAL MESH - A system and method for modifying facial animations to include expression and microexpression information is disclosed. Particularly, a system and method for applying actor-generated expression data to a facial animation, either in real time or in storage, is disclosed. Present embodiments may also be incorporated into a larger training program designed to train users to recognize various expressions and microexpressions. | 06-10-2010 |
20100149191 | SYSTEM FOR VIRTUALLY DRAWING ON A PHYSICAL SURFACE - The system ( | 06-17-2010 |
20100156910 | SYSTEM AND METHOD FOR MESH STABILIZATION OF FACIAL MOTION CAPTURE DATA - A method and system for removing head motion from facial motion capture data. The method includes receiving a set of measured points of a target model, wherein each point is associated with coordinates in a 3D space. The method includes computing an optimal affine transformation function. The computing includes selecting an unprocessed point from the set of measured points. The computing includes selecting two nearby neighboring points of the unprocessed point. The computing includes computing an affine transformation function that minimizes an L2-norm error. The computing includes identifying the optimal affine transformation function from a set of computed affine transformation functions. The method includes displaying an aligned target model and reference model utilizing the optimal affine transformation function. The method includes outputting the optimal affine function to a computer-readable storage medium. | 06-24-2010 |
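At the core of 20100156910 is computing an affine transformation that minimizes an L2-norm error between point sets. The sketch below solves that fit with ordinary least squares in homogeneous coordinates; the patent's neighbor selection and search over candidate transforms are omitted, and all names are illustrative:

```python
# Fit (A, t) minimizing sum ||src_i @ A.T + t - dst_i||^2 via least squares.
import numpy as np

def fit_affine(src, dst):
    """src, dst: (n, 3) point arrays. Returns (A, t) with dst ≈ src @ A.T + t."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # homogeneous coordinates
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (4, 3) stacked solution
    return M[:3].T, M[3]

rng = np.random.default_rng(1)
src = rng.normal(size=(10, 3))
A_true = np.diag([1.0, 2.0, 0.5]); t_true = np.array([1.0, -2.0, 0.0])
dst = src @ A_true.T + t_true                    # synthetic measured points
A, t = fit_affine(src, dst)
print(np.allclose(A, A_true), np.allclose(t, t_true))  # True True
```

Applying the inverse of the fitted transform to the measured points is one way to factor rigid head motion out of the facial capture.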
20100164959 | RENDERING A VIRTUAL INPUT DEVICE UPON DETECTION OF A FINGER MOVEMENT ACROSS A TOUCH-SENSITIVE DISPLAY - A method comprises a processor detecting a person's finger moving across an unrendered portion of a touch-sensitive display. As a result of detecting the finger moving, the method further comprises the processor causing data to be rendered as a virtual keyboard image on the display. | 07-01-2010 |
20100164960 | Character Display, Character Displaying Method, Information Recording Medium, and Program - A character display for attracting user interest by increasing the variety of on-screen display while reducing data processing by making the time variation of posture common among a plurality of characters. The character display ( | 07-01-2010 |
20100182324 | DISPLAY APPARATUS AND DISPLAY METHOD FOR PERFORMING ANIMATION OPERATIONS - A display apparatus and a displaying method of the same are provided. The display apparatus includes: a display unit; a detector which detects a user's motion; a signal processor; and a controller which controls the signal processor to display on the display unit an animation operation related to a still image if the still image is being displayed on the display unit and the user's motion is detected by the detector. | 07-22-2010 |
20100182325 | APPARATUS AND METHOD FOR EFFICIENT ANIMATION OF BELIEVABLE SPEAKING 3D CHARACTERS IN REAL TIME - An apparatus for animating a moving and speaking enhanced-believability character in real time, comprising a plurality of behavior generators, each for defining a respective aspect of facial behavior, a unifying scripter, associated with the behavior generators, the scripter operable to combine the behaviors into a unified animation script, and a renderer, associated with the unifying scripter, the renderer operable to render the character in accordance with the script, thereby to enhance believability of the character. | 07-22-2010 |
20100182326 | RIGGING FOR AN ANIMATED CHARACTER MOVING ALONG A PATH - In computer enabled key frame animation, a method and associated system for rigging a character so as to provide a large range of motion with great fluidity of motion. The rigging uses a character body that moves along a path or freely as needed. The nodes in the body and path are not physically connected but are linked for performing a particular task. This task-driven behavior of the nodes, which may allow them to re-organize themselves into different configurations in order to perform a common duty, implies a variable geometry for the entire dynamic structure. In some regard the nodes can be said to be intelligent. | 07-22-2010 |
20100182327 | Method and System for Processing Picture - Embodiments of the present invention provide a method for processing the pictures, including: decomposing a dynamic picture frame into multiple static picture frames; bonding each of the static picture frames with a static original picture to generate multiple static pictures; and forming a dynamic picture with the multiple static pictures. Embodiments of the present invention further provide a system for processing the pictures, including a decomposing unit, a bonding unit and a composing unit. The decomposing unit is configured to decompose a dynamic picture frame into multiple static picture frames; the bonding unit is configured to bond each of the static picture frames with a static original picture to generate multiple static pictures; and the composing unit is configured to form a dynamic picture with the multiple static pictures. By processing the pictures with the technical solution provided by embodiments of the present invention, pictures may possess a sense of action and good expressive force, and may better display the personality of the user. | 07-22-2010 |
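The decompose-bond-recompose pipeline of 20100182327 maps naturally onto an image library. A hedged sketch with Pillow (the library choice is ours, the file names are hypothetical, and the overlay is assumed to be no larger than the base image):

```python
# 1. decompose a dynamic picture into stills; 2. bond each still with a
# static original picture; 3. recompose the results into a dynamic picture.
from PIL import Image, ImageSequence

def bond(dynamic_path, static_path, out_path):
    anim = Image.open(dynamic_path)
    base = Image.open(static_path).convert("RGBA")
    bonded = []
    for frame in ImageSequence.Iterator(anim):     # decompose into still frames
        still = base.copy()
        overlay = frame.convert("RGBA")
        still.alpha_composite(overlay)             # bond still frame onto original
        bonded.append(still.convert("P"))
    bonded[0].save(out_path, save_all=True,        # recompose a dynamic picture
                   append_images=bonded[1:],
                   duration=anim.info.get("duration", 100), loop=0)

# bond("sticker.gif", "avatar.png", "avatar_animated.gif")  # hypothetical files
```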
20100188409 | INFORMATION PROCESSING APPARATUS, ANIMATION METHOD, AND PROGRAM - An information processing apparatus is provided which includes an input information recording unit for recording, when a movement stroke for an object is input, information on moving speed and movement stroke of an input tool used for inputting the movement stroke, and an object behaviour control unit for moving the object, based on the information on moving speed and movement stroke recorded by the input information recording unit, in such a way that the movement stroke of the input tool and a moving speed at each point in the movement stroke are replicated. | 07-29-2010 |
20100188410 | GRAPHIC ELEMENT WITH MULTIPLE VISUALIZATIONS IN A PROCESS ENVIRONMENT - Smart graphic elements are provided for use as portions or components of one or more graphic displays, which may be executed in a process plant to display information to users about the process plant environment, such as the current state of devices within the process plant. Each of the graphic elements is an executable object that includes a property or a variable that may be bound to an associated process entity, like a field device, and that includes multiple visualizations, each of which may be used to graphically depict the associated process entity on a user interface when the graphic element is executed as part of the graphic display. Any of the graphic element visualizations may be used in any particular graphic display and the same graphic display may use different ones of the visualizations at different times. The different visualizations associated with a graphic element make the graphic element more versatile, as they allow the same graphic element to be used in different displays using different graphical styles or norms. These visualizations also enable the same graphic element to be used in displays designed for different types of display devices, such as display devices having large display screens, standard computer screens and very small display screens, such as PDA and telephone display screens. | 07-29-2010 |
20100194761 | CONVERTING CHILDREN'S DRAWINGS INTO ANIMATED MOVIES - The present invention comprises a business method and music- and text-derived speech animation software for producing simple, effective animations of digital media content that educate and entertain children and viewers through the presentation of speaking digital characters. The invention makes the creation of digital talking characters both easy and effective. The completed animation is then provided to the children who made the drawings and optionally posted on a website accessible through the Internet or used for the creation of online greeting cards and story books. | 08-05-2010 |
20100194762 | Standard Gestures - Systems, methods and computer readable media are disclosed for grouping complementary sets of standard gestures into gesture libraries. The gestures may be complementary in that they are frequently used together in a context or in that their parameters are interrelated. Where a parameter of a gesture is set with a first value, all other parameters of the gesture and of other gestures in the gesture package that depend on the first value may be set with their own value which is determined using the first value. | 08-05-2010 |
20100201691 | SHADER-BASED FINITE STATE MACHINE FRAME DETECTION - Embodiments for shader-based finite state machine frame detection for implementing alternative graphical processing on an animation scenario are disclosed. In accordance with one embodiment, the embodiment includes assigning an identifier to each shader used to render animation scenarios. The embodiment also includes defining a finite state machine for a key frame in each of the animation scenarios, with each finite state machine representing a plurality of shaders that renders the key frame in each animation scenario. The embodiment further includes deriving a shader ID sequence for each finite state machine based on the identifier assigned to each shader. The embodiment additionally includes comparing an input shader ID sequence of a new frame of a new animation scenario to each of the derived shader ID sequences. Finally, the embodiment includes executing alternative graphics processing on the new animation scenario when the input shader ID sequence matches one of the derived shader ID sequences. | 08-12-2010 |
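The detection step of 20100201691 boils down to reducing each key frame to the ordered IDs of the shaders that rendered it, then matching incoming frames against those sequences. A minimal sketch with the finite state machine collapsed to a plain tuple of shader IDs (the shader names and scenarios are invented for illustration):

```python
# Derive per-key-frame shader ID sequences and match new frames against them.
def derive_sequence(shaders, shader_ids):
    """Map the shaders used for a key frame to an ordered ID sequence."""
    return tuple(shader_ids[s] for s in shaders)

shader_ids = {"sky": 0, "terrain": 1, "water": 2, "hud": 3}   # assigned identifiers
known = {
    derive_sequence(["sky", "terrain", "water"], shader_ids): "lake_scene",
    derive_sequence(["sky", "terrain", "hud"], shader_ids): "menu_scene",
}

def on_frame(frame_shaders):
    seq = derive_sequence(frame_shaders, shader_ids)
    scenario = known.get(seq)
    if scenario:
        print(f"match: {scenario} -> run alternative graphics processing")
    else:
        print("no match: default pipeline")

on_frame(["sky", "terrain", "water"])   # match: lake_scene
on_frame(["sky", "water"])              # no match
```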
20100201692 | User Interface for Controlling Animation of an Object - A user can control the animation of an object via an interface that includes a control area and a user-manipulable control element. In one embodiment, the control area includes an ellipse, and the user-manipulable control element includes an arrow. In another embodiment, the control area includes an ellipse, and the user-manipulable control element includes two points on the circumference of the ellipse. In yet another embodiment, the control area includes a first rectangle, and the user-manipulable control element includes a second rectangle. In a further embodiment, the user-manipulable control element includes two triangular regions, and the control area includes an area separating the two regions. | 08-12-2010 |
20100207949 | ANIMATION EVENTS - A method of detecting an occurrence of an event of an event type during an animation, in which the animation comprises, for each of a plurality of object parts of an object, data defining the respective movement of that object part at each of a sequence of time-points for the animation, the method comprising: indicating the event type, wherein the event type specifies: one or more of the object parts; and a sequence of two or more event phases that occur during an event of that event type such that, for each event phase, the respective movements of the one or more specified object parts during that event phase are each constrained according to a constraint type associated with that event phase; and detecting an occurrence of an event of the event type by detecting a section of the animation during which the respective movements defined by the animation for the specified one or more object parts are constrained in accordance with the sequence of two or more event phases. | 08-19-2010 |
20100207950 | DEFINING SIMPLE AND COMPLEX ANIMATIONS - A unified user interface (“UI”) is provided that includes functionality for defining both simple and complex animations for an object. The unified UI includes a UI for defining a single animation for an object and a UI for defining a more complex animation. The UI for defining a single animation for an object includes a style gallery and an effects options gallery. The UI for defining two or more animations for a single object includes a style gallery for selecting two or more animation classes to be applied to an object, one or more user interface controls for specifying the timing and order of the two or more animations, and an on-object user interface (“OOUI”) displayed adjacent to each object for providing a visual indication of the two or more animations and for providing an indication when an animation includes two or more build steps. | 08-19-2010 |
20100207951 | METHOD AND DEVICE FOR MONITORING OPERATION OF A SOLAR THERMAL SYSTEM - A novel method for monitoring the operation of a solar thermal system such as the healthy home system or the like. The present device includes a hardware housing with a processor device coupled to a bus and one or more memory devices. The processor device can be coupled to one or more input devices wherein the one or more input devices are coupled to at least the solar array. The input devices can be coupled to the electric panel, the space heater, the water heater, as well as other components of the healthy home. The method includes a variety of steps such as establishing connection to associated hardware in the healthy home system, running diagnostic checks to determine system health, validating acquired data, and displaying the data through text display and graphical illustrations. The method also includes updating the system information according to a schedule scheme such as a polling scheme, interrupt scheme, or others. These and possibly other steps can provide an easy and cost-effective means of monitoring a healthy home system's operation. | 08-19-2010 |
20100220100 | SIGNAGE DISPLAY SYSTEM AND PROCESS - The invention is a novel display system for signage and the like. An apparatus and process are provided for displaying a static source image in such a manner that it is perceived as an animated sequence of images when viewed by an observer in relative motion to the apparatus. The source image is sliced or fractured to provide a plurality of image fractions of predetermined dimension. The fractions are redistributed in a predetermined sequence to provide an output image, which is placed in a preferably illuminated display apparatus provided with a mask. An observer in relative motion to the display apparatus sequentially views a predetermined selection of image fractions through the mask, which are perceived by the observer as a changing sequence of images. Applying the concepts of persistence of vision, the observer perceives the reconstructed imagery as live-action animation, a traveling singular image, a series of static images, or changing image sequences, from a plurality of lines of sight. | 09-02-2010 |
20100238179 | Presentation of Personalized Weather Information by an Animated Presenter - A computer-implemented personalized weather presentation method. The method includes generating personalized weather information … | 09-23-2010 |
20100238180 | APPARATUS AND METHOD FOR CREATING ANIMATION FROM WEB TEXT - An apparatus and method for creating animation from a web text are provided. The apparatus includes a script formatter for generating a domain format script from the web text using a domain format that corresponds to a type of the web text, an adaptation engine for generating animation contents using the generated domain format script, and a graphics engine for reproducing the generated animation contents in the form of an animation. | 09-23-2010 |
20100238181 | Method And System For Animating Graphical User Interface Elements Via A Manufacturing/Process Control Portal Server - A method and system are disclosed for rendering animated graphics on a browser client based upon a stream of runtime data from a manufacturing/process control system. The graphics animation is based upon an animated graphic display object specification and runtime data from a portal server affecting an appearance trait of the animated graphic display object. The client browser receives an animated graphics description from the portal server specifying an animation behavior for an identified graphical display object. The client creates a data exchange connection between an animated display object, corresponding to the animated graphics description, and a source of runtime data from the portal server affecting display of the animated display object. Thereafter, the client applies runtime data received from the source of runtime data to the animated display object to render an animated graphic display object. | 09-23-2010 |
20100245365 | IMAGE GENERATION SYSTEM, IMAGE GENERATION METHOD, AND COMPUTER PROGRAM PRODUCT - An image generation system includes an operation information acquisition section that acquires operation information based on sensor information from a controller that includes a sensor, the operation information acquisition section acquiring rotation angle information about the controller around a given coordinate axis as the operation information, a hit calculation section that performs a hit calculation process, the hit calculation process setting at least one of a moving state and an action state of a hit target that has been hit by a hit object based on the rotation angle information that has been acquired by the operation information acquisition section, and an image generation section that generates an image based on the operation information. | 09-30-2010 |
20100259545 | SYSTEM AND METHOD FOR SIMPLIFYING THE CREATION AND STORAGE OF COMPLEX ANIMATION OBJECTS AND MOVIES - An animation generation system for online recording and editing of elaborated animation objects and movies, which comprises: a plurality of elaborated animated objects with hinges for controlling each object's limb movement; a collection of associated actions with parameters which can be programmed by authorized animation developers, who may be registered web site members that communicate through messages comprising the created animation movies and objects using the Internet infrastructure; a collection of associated generic-feature actions and a collection of associated complex-move actions comprising ready-made small actions; a database of elaborated animated objects and movies containing the accumulated collection of animation movies and objects created; and a user interface module for presenting objects' features, for allowing a user to choose animation objects or related actions, for inserting actions by dragging and dropping, for playing an edited scene, and for recognizing the action pattern of a developer's input, identifying the animation object, and suggesting a variety of possible actions. | 10-14-2010 |
20100259546 | MODELIZATION OF OBJECTS IN IMAGES - A system includes an aligner to align an initial position of an at least partially kinematically parameterized model with an object in an image, and a modelizer to adjust parameters of the model to match the model to contours of the object, given the initial alignment. An animation system includes a modelizer to hierarchically match a hierarchically rigid model to an object in an image, and a cutter to cut said object from said image and to associate it with said model. A method for animation includes hierarchically matching a hierarchically rigid model to an object in an image, and cutting said object from said image to associate it with said model. | 10-14-2010 |
20100265258 | Flame Image Sequencing Apparatus and Method - An imaging apparatus is disclosed that is designed to display a set of images of a fire on a screen. The apparatus includes a display screen, a controller capable of displaying a set of images on the display screen, and a masking member to prevent a viewer from viewing a part of the display screen. | 10-21-2010 |
20100283787 | CREATION AND RENDERING OF HIERARCHICAL DIGITAL MULTIMEDIA DATA - The present invention relates to a method for the creation of large hierarchical computer graphics datasets. The method comprises combination … | 11-11-2010 |
20100283788 | VISUALIZATION SYSTEM FOR A DOWNHOLE TOOL - Apparatus for visualizing a downhole tool in a subsurface environment. The apparatus comprises: an input for receiving data on at least one of the downhole tool and the subsurface environment, a physical model processing said input to generate a representation of the downhole tool moving through said subsurface environment, and an output for displaying said downhole tool movement in real time. | 11-11-2010 |
20100302252 | MULTIPLE PERSONALITY ARTICULATION FOR ANIMATED CHARACTERS - A method for a computer system includes determining a model for a first personality of a component of an object, wherein the model for the first personality of the component is associated with a component name and a first personality indicia, determining a model for a second personality of the component of the object, wherein the model for the second personality of the component is associated with the component name and the second personality indicia, determining a multiple personality model of the object, wherein the model of the object includes the model for the first personality of the component, the model of the second personality of the component, the first personality indicia, and the second personality indicia, and storing the multiple personality model of the object in a single file. | 12-02-2010 |
20100302253 | REAL TIME RETARGETING OF SKELETAL DATA TO GAME AVATAR - Techniques for generating an avatar model during the runtime of an application are herein disclosed. The avatar model can be generated from an image captured by a capture device. End-effectors can be positioned, and inverse kinematics can be used to determine positions of other nodes in the avatar model. | 12-02-2010 |
20100302254 | ANIMATION SYSTEM AND METHODS FOR GENERATING ANIMATION BASED ON TEXT-BASED DATA AND USER INFORMATION - Animation devices and a method that may output text-based data as an animation are provided. The device may be a terminal, such as a mobile phone, a computer, and the like. The animation device may extract one or more emotions corresponding to a result obtained by analyzing text-based data. The emotion may be based on user relationship information managed by a user of the device. The device may select an action corresponding to the emotion from a reference database, and combine the text-based data with the emotion and action to generate an animation script. The device may generate a graphic in which a character is moved based on the action information, the emotion information, and the text-based data. | 12-02-2010 |
20100302255 | METHOD AND SYSTEM FOR GENERATING A CONTEXTUAL SEGMENTATION CHALLENGE FOR AN AUTOMATED AGENT - Provided is a system and method for generating a contextual segmentation challenge that poses an identification challenge. The method includes obtaining at least one ad element and obtaining a test element. The ad element and the test element are then combined to provide a composite image. At least one noise characteristic is then applied to the composite image. The composite image is then animated as a plurality of views to form a contextual segmentation challenge. A system for performing the method is also provided. | 12-02-2010 |
20100302256 | System and Method for Video Choreography - An electronic entertainment system for creating a video sequence by executing video game camera behavior based upon a video game sound file includes a memory configured to store an action event/camera behavior (AE/CB) database, game software such as an action generator module, and one or more sound files. In addition, the system includes a sound processing unit coupled to the memory for processing a selected sound file, and a processor coupled to the memory and the sound processing unit. The processor randomly selects an AE pointer and a CB pointer from the AE/CB database. Upon selection of the CB pointer and the AE pointer, the action generator executes camera behavior corresponding to the selected CB pointer to view an action event corresponding to the selected AE pointer. | 12-02-2010 |
20100309208 | Remote Control Electronic Display System - A remotely controlled electronic display sign which operates with a plasma display and which provides for humidity and heat control and the like allowing the sign to be used in various environments. The sign is essentially self-contained and includes those components necessary for enabling a display of desired material from a remote control source or one located at the sign. A controller in or associated with the sign is accessible either electrically, or through satellite transmission or other wireless transmission from the remote source which allows the display of the sign to be changed at will. Thus, an operator at a remote source may, with the aid of a pre-prepared graphic design, transmit that design to the controller at or associated with the sign for display of that graphic information and potentially with sound. | 12-09-2010 |
20100309209 | System and method for database driven action capture - There is provided a system and method for database driven action capture. By utilizing low-cost, lightweight MEMS devices such as accelerometers, a user-friendly, wearable, and cost-effective system for motion capture is provided, which relies on a motion database of previously recorded motions to reconstruct the actions of a user. By relying on the motion database, calculation errors such as integration drift are avoided, as is the need for complex and expensive positional compensation hardware. The accelerometers may be implemented in an E-textile embodiment using inexpensive off-the-shelf components. In some embodiments, compression techniques may be used to accelerate linear best-match searching against the motion database. Adjacent selected motions may also be blended together for improved reconstruction results and visual rendering quality. Various perceivable effects may be triggered in response to the reconstructed motion, such as animating a 3D avatar, playing sounds, or operating a motor. | 12-09-2010 |
20100315426 | SYSTEMS AND METHODS FOR INTEGRATING GRAPHIC ANIMATION TECHNOLOGIES IN FANTASY SPORTS CONTEST APPLICATIONS - Systems and methods for integrating graphic animation technologies with fantasy sports contest applications are provided. This invention enables a fantasy sports contest application to depict plays in various sporting events using graphic animation. The fantasy sports contest application may combine graphical representation of real-life elements such as, for example, player facial features, with default elements such as, for example, a generic player body, to create realistic graphic video. The fantasy sports contest application may provide links to animated videos for depicting plays on contest screens in which information associated with the plays may be displayed. The fantasy sports contest application may play the animated video for a user in response to the user selecting such a link. In some embodiments of the present invention, the fantasy sports contest application may also customize animated video based on user-supplied setup information. For example, the fantasy sports contest application may provide play information and other related data to allow a user to generate animated videos using the user's own graphics processing equipment and graphics animation program. | 12-16-2010 |
20100328318 | IMAGE DISPLAY DEVICE - An image display device is constructed by a display memory, a sprite attribute table, a sprite rendering processor and an animation execution engine. The display memory stores image data to be displayed on a display. The sprite attribute table stores attribute data representing a display attribute of a sprite which is a component of the image data. The sprite rendering processor executes a drawing process for reflecting image data of the sprite to the image data stored in the display memory according to the attribute data stored in the sprite attribute table. The animation execution engine reads an animation execution program including both attribute data to be transferred and a table write command of the attribute data from an external memory, and executes the animation execution program to transfer the attribute data to the sprite attribute table according to the table write command. | 12-30-2010 |
20110007077 | ANIMATED MESSAGING - A method performed by one or more devices includes receiving a user selection of a picture that contains an object of a character to be animated for an animated message and receiving one or more designations of areas within the picture to correspond to one or more human facial features for the character associated with the object. The method further includes receiving a textual message; receiving one or more user selections of one or more animation codes that identify animations to be performed by the one or more human facial features designated within the picture, and receiving an encoding of the textual message and the one or more animation codes. The method further includes generating the animated message based on the picture, the one or more designations of the one or more human facial features, and the one or more animation codes, and sending the animated message to a recipient. | 01-13-2011 |
20110007078 | Creating Animations - Animation creation is described, for example, to enable children to create, record and play back stories. In an embodiment, one or more children are able to create animation components such as characters and backgrounds using a multi-touch panel display together with an image capture device. For example, a graphical user interface is provided at the multi-touch panel display to enable the animation components to be edited. In an example, children narrate a story whilst manipulating animation components using the multi-touch display panel and the sound and visual display is recorded. In embodiments image analysis is carried out automatically and used to autonomously modify story components during a narration. In examples, various types of handheld view-finding frames are provided for use with the image capture device. In embodiments saved stories can be restored from memory and retold from any point with different manipulations and narration. | 01-13-2011 |
20110007079 | BRINGING A VISUAL REPRESENTATION TO LIFE VIA LEARNED INPUT FROM THE USER - Data captured with respect to a human may be analyzed and applied to a visual representation of a user such that the visual representation begins to reflect the behavioral characteristics of the user. For example, a system may have a capture device that captures data about the user in the physical space. The system may identify the user's characteristics, tendencies, voice patterns, behaviors, gestures, etc. Over time, the system may learn a user's tendencies and intelligently apply animations to the user's avatar such that the avatar behaves and responds in accordance with the identified behaviors of the user. The animations applied to the avatar may be animations selected from a library of pre-packaged animations, or the animations may be entered and recorded by the user into the avatar's avatar library. | 01-13-2011 |
20110007080 | SYSTEM AND METHOD FOR CONFORMING AN ANIMATED CAMERA TO AN EDITORIAL CUT - A method for conforming an animated camera to an editorial cut within a software application executing on a computer system. The method includes providing a shot that includes three-dimensional animation captured by a virtual camera associated with a pre-defined camera style; receiving an editorial action that has been performed to the shot; and updating a camera move associated with the virtual camera based on the camera style and the editorial action. | 01-13-2011 |
20110018880 | TIGHT INBETWEENING - A tool for inbetweening is provided, wherein inbetween frames are at least partly computer generated by analyzing elements of key frames to identify strokes; determining corresponding stroke pairs; computing a continuous stroke motion for each stroke pair, defined by a carrier defined by the endpoints of the two strokes; for mutual endpoints, adjusting the continuous stroke motions of the meeting strokes so that they coincide at the mutual endpoint and the mutual endpoint follows a single path; and deforming each stroke as it is moved by the stroke motion, wherein the deformation is a weighted combination of deformations, each reconstructed using shape descriptors that are interpolated from the shape descriptors of the corresponding samples on the key frames, the shape descriptors being computed from neighboring sample points in the cyclic order of samples along the stroke. | 01-27-2011 |
20110018881 | VARIABLE FRAME RATE RENDERING AND PROJECTION - In rendering a computer-generated animation sequence, pieces of animation corresponding to shots of the computer-generated animation sequence are obtained. Measurements of action in the shots are obtained. Frame rates, which can be different, for the shots are determined based on the determined measurements of action in the shots. The shots are rendered based on the determined frame rates for the shots. The rendered shots with frame rate information indicating the frame rates used in rendering the shots are stored. | 01-27-2011 |
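The rate-selection step above admits a simple sketch; the thresholds and candidate rates below are invented for illustration, since the abstract does not say how measurements map to rates:

```python
# Sketch: pick a per-shot frame rate from a measured amount of action.
# The thresholds and candidate rates are illustrative assumptions.

def frame_rate_for_shot(action_measure, rates=(24, 48, 60)):
    """Map a normalized action measurement in [0, 1] to one of the
    candidate frame rates: more action -> higher rate."""
    if action_measure < 0.3:
        return rates[0]
    if action_measure < 0.7:
        return rates[1]
    return rates[2]

shots = [("establishing", 0.1), ("chase", 0.9), ("dialogue", 0.4)]
for name, action in shots:
    # The chosen rate would be stored with the rendered shot so that
    # playback can use each shot's own frame rate.
    print(name, "->", frame_rate_for_shot(action), "fps")
```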
20110037767 | VIDEO IN E-MAIL - To allow a video clip to be rendered within an e-mail, the video stream is converted into an animated image object (e.g. a GIF (Graphics Interchange Format) object) and stored on a server system. An HTML image element (an <img> tag) is created that references the animated image object at the server, for embedding in a conventional HTML-encoded e-mail message. When the receiving e-mail application processes the HTML encoding, the processing of the HTML image element causes the referenced animated image object to be downloaded and displayed, thereby automatically presenting a recreation of the video stream. To facilitate efficient transmission to the receiving device, the size of the animated image object is preferably optimized before transmission, the optimization including general optimization techniques, as well as optimizations based on the particular characteristics associated with the receiving device and/or the communications link to the receiving device. | 02-17-2011 |
20110080410 | SYSTEM AND METHOD FOR MAKING EMOTION BASED DIGITAL STORYBOARD - A system and a method for generating a digital storyboard in which characters with various emotions are produced. The digital storyboard generating system includes an emotion-expressing character producing unit to produce an emotion-based emotion-expressing character, and a storyboard generating unit to generate storyboard data using the emotion-expressing character. Optionally, cartoon-rendering is performed on the storyboard data to generate an image, where the image is output to the user. | 04-07-2011 |
20110080411 | SIMULATED RESOLUTION OF STOPWATCH - There is described a mobile device comprising a display screen for displaying an image of a clock having a resolution of at least a first digit representing a tenth of a second and a second digit representing a hundredth of a second; and a processor having an internal clock, the processor adapted to update at least the first digit of the image of the clock on the display screen with true elapsed time, and to update the second digit with a non-true number. | 04-07-2011 |
20110080412 | DEVICE FOR DISPLAYING CUTTING SIMULATION, METHOD FOR DISPLAYING CUTTING SIMULATION, AND PROGRAM FOR DISPLAYING CUTTING SIMULATION - In order to reduce the amount of computation required for ray tracing and facilitate simulating of changes in workpiece shape even on an inexpensive, low-performance computer, a device for displaying a cutting simulation includes: a rendered workpiece image update section for updating by ray tracing a portion of a rendered workpiece image buffer and a rendered workpiece depth buffer, the portion being associated with a rendering region corresponding to a change in the shape of the workpiece; a rendered tool image creation section for rendering a tool image by ray tracing for the current tool rendering region; and an image transfer section for transferring a partial image of the previous tool rendering region and the current workpiece rendering region to be updated from the rendered workpiece image buffer to a display frame buffer as well as transferring the current tool rendering image to the display frame buffer. | 04-07-2011 |
20110084970 | SYSTEM AND METHOD FOR PREVENTING PINCHES AND TANGLES IN ANIMATED CLOTH - Systems and methods are disclosed for altering character body animations to improve subsequent cloth animations. In particular, based on a character body animation, an extra level of processing is performed prior to the actual cloth simulation. The extra level of processing removes potential areas of pinching or tangling in the input character body simulation data, ensuring that the output of the cloth simulation will have reduced pinches and tangles. | 04-14-2011 |
20110090231 | ON-LINE ANIMATION METHOD AND ARRANGEMENT - The arrangement has a first computer arranged to be in data communication with a second computer. The arrangement has a device for receiving from the second computer an editable version of animation data sufficient for rendering visually simplified animation in the second computer. The editable version of animation data has at least one reference to additional data for the purpose of forming animation in the second computer; a renderable or rendered version of the animation data is formed in the first computer by combining the editable version of animation data with the referenced additional data. | 04-21-2011 |
20110096076 | APPLICATION PROGRAM INTERFACE FOR ANIMATION - Many computer applications incorporate and support animation. Application performance may be enhanced by delegating animation management to an application program interface (animation API) for animation. Accordingly, an animation API for managing animation is disclosed herein. The animation API may be configured to sequentially interpolate values of animation variables defining animation movement of animation objects. The animation API may interpolate the values of the animation variables using animation transitions within animation storyboards. The animation API may be configured to determine durations of animation transitions based upon animation characteristics parameters (e.g., starting position, desired ending position, starting velocity of an animation variable). Durations and start times of animation transitions may be determined based upon key frames. The animation API may be configured to resolve scheduling conflicts among one or more animation transitions. Also, the animation API may be configured to facilitate smooth animation while switching between animation transitions for an animation variable. | 04-28-2011 |
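The duration-from-characteristics idea can be sketched with a constant-acceleration model; that model is an assumption, since the abstract only lists the inputs (starting position, desired ending position, starting velocity):

```python
# Sketch: derive a transition duration from animation characteristics
# (start value, desired end value, start velocity), then interpolate.
# The constant-acceleration model is an illustrative assumption, not
# the actual API's scheme.

def transition_duration(x0, x1, v0, accel=4.0):
    """Time for a constant-acceleration move across the remaining
    distance, seeded with the current velocity: solves
    d = |v0|*t + 0.5*a*t^2 for t."""
    d = abs(x1 - x0)
    disc = v0 * v0 + 2.0 * accel * d
    return (disc ** 0.5 - abs(v0)) / accel

def interpolate(x0, x1, t, duration):
    """Linear interpolation of an animation variable at time t."""
    u = min(max(t / duration, 0.0), 1.0)
    return x0 + (x1 - x0) * u

dur = transition_duration(x0=0.0, x1=100.0, v0=10.0)
for t in (0.0, dur / 2, dur):
    print(round(interpolate(0.0, 100.0, t, dur), 1))
```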
20110096077 | CONTROLLING ANIMATION FRAME RATE OF APPLICATIONS - Many computer applications incorporate and support animation (e.g., interactive user interfaces). Unfortunately, it may be challenging for computer applications and rendering systems to render animation frames at a smooth and consistent rate while conserving system resources. Accordingly, a technique for controlling animation rendering frame rate of an application is disclosed herein. An animation rendering update interval of an animation timer may be adjusted based upon a rendering system state (e.g., a rate of compositing visual layouts from animation frames) of a rendering system and/or an application state (e.g., a rate at which an application renders frames) of an application. Adjusting the animation rendering update interval allows the animation timer to adjust the frequency of performing rendering callback notifications (work requests to an application to render animation frames) to an application based upon rendering system performance and application performance. | 04-28-2011 |
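A sketch of the interval adjustment described above, with an invented back-off rule; the abstract specifies only that the interval responds to rendering system state and application state:

```python
# Sketch: adapt an animation timer's update interval to how fast the
# application and compositor are actually keeping up. The specific
# adjustment rule below is an illustrative assumption.

def next_update_interval(current_ms, app_frame_ms, compose_ms,
                         min_ms=16, max_ms=100):
    """Lengthen the interval when rendering or compositing is slower
    than the timer allows, shorten it when there is headroom."""
    budget = max(app_frame_ms, compose_ms)
    if budget > current_ms:          # falling behind: back off
        proposed = current_ms * 1.25
    else:                            # headroom: speed up gently
        proposed = current_ms * 0.9
    return min(max(proposed, min_ms), max_ms)

interval = 33.0  # roughly 30 rendering callbacks per second
for app_ms, comp_ms in [(40, 20), (40, 20), (10, 8), (10, 8)]:
    interval = next_update_interval(interval, app_ms, comp_ms)
    print(round(interval, 1), "ms between rendering callbacks")
```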
20110096078 | SYSTEMS AND METHODS FOR PORTABLE ANIMATION RIGS - One embodiment of the present invention sets forth a technique for transporting both behavior and related geometric information for an animation asset between different animation environments. A common virtual machine specification with a specific instruction set architecture is defined for executing behavioral traits of the animation asset. Each target animation environment implements the instruction set architecture. Because each virtual machine runtime engine implements an identical instruction set architecture, animation behavior can be identically reproduced on any platform implementing the virtual machine runtime engine. Embodiments of the present invention beneficially enable reuse of animation assets without compatibility restrictions related to platform or application differences. | 04-28-2011 |
20110109634 | PORTABLE ELECTRONIC DEVICE AND METHOD OF INFORMATION RENDERING ON PORTABLE ELECTRONIC DEVICE - A portable electronic device-implemented method includes rendering information on a display of the portable electronic device, detecting receipt of an initiating input, and rendering a band, including at least one field, along an edge of the display. | 05-12-2011 |
20110109635 | Animations - At least certain embodiments of the present disclosure include a method for animating a display region, windows, or views displayed on a display of a device. The method includes starting at least two animations. The method further includes determining the progress of each animation. The method further includes completing each animation based on a single timer. | 05-12-2011 |
20110115798 | METHODS AND SYSTEMS FOR CREATING SPEECH-ENABLED AVATARS - Methods and systems for creating speech-enabled avatars are provided. In accordance with some embodiments, a method for creating speech-enabled avatars comprises: receiving a single image that includes a face with distinct facial geometry; comparing points on the distinct facial geometry with corresponding points on a prototype facial surface, wherein the prototype facial surface is modeled by a Hidden Markov Model that has facial motion parameters; deforming the prototype facial surface based at least in part on the comparison; in response to receiving a text input or an audio input, calculating the facial motion parameters based on a phone set corresponding to the received input; generating a plurality of facial animations based on the calculated facial motion parameters and the Hidden Markov Model; and generating an avatar from the single image that includes the deformed facial surface, the plurality of facial animations, and the audio input or an audio waveform corresponding to the text input. | 05-19-2011 |
20110115799 | METHOD AND SYSTEM FOR ASSEMBLING ANIMATED MEDIA BASED ON KEYWORD AND STRING INPUT - One aspect of the invention is a method for automatically assembling an animation. According to this embodiment, the method includes accepting at least one input keyword relating to a subject for the animation and accessing a set of templates. In this embodiment, each template generates a different type of output, and each template includes components for display time, screen location, and animation parameters. The method also includes retrieving data from a plurality of websites or data collections using an electronic search based on the at least one input keyword and the templates, determining which retrieved data to assemble into the set of templates, coordinating assembly of data-populated templates to form the animation, and returning the animation for playback by a user. | 05-19-2011 |
20110141120 | APPLICATION PROGRAMMING INTERFACES FOR SYNCHRONIZATION - The application programming interface operates in an environment with user interface software interacting with multiple software applications or processes in order to synchronize animations associated with multiple views or windows of a display of a device. The method for synchronizing the animations includes setting attributes of views independently with each view being associated with a process. The method further includes transferring a synchronization call to synchronize animations for the multiple views of the display. In one embodiment, the synchronization call includes the identification and the number of processes that are requesting animation. The method further includes transferring a synchronization confirmation message when a synchronization flag is enabled. The method further includes updating the attributes of the views from a first state to a second state independently. The method further includes transferring a start animation call to draw the requested animations when both processes have updated attributes. | 06-16-2011 |
20110148885 | APPARATUS AND METHOD FOR EDITING ANIMATION DATA OF VIRTUAL OBJECT UTILIZING REAL MODEL - Disclosed are an apparatus and a method for editing animation data of a virtual object using a real model. The animation data editing apparatus according to the embodiment of the present invention allows motion information acquired by measuring a real model to be used by computer graphic software for animation or modeling, so that a computer graphic model corresponding to the real model can be produced into an animation after being adjusted and modified by a designer on the basis of the measured motion information. | 06-23-2011 |
20110157188 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - An apparatus comprises: a storage unit adapted to store information that associates information representing the key frame with information representing the display form of the object of the key frame; an assignment unit adapted to assign, to the time-points corresponding to the key frames, indicators with which a difference between the display forms of the objects is identifiable; and a control unit adapted to, when a new key frame is set at a time-point of interest on the time axis where no key frame is preset, and the same indicator as one of the indicators is assigned to the time-point of interest, cause the storage unit to store information representing the newly set key frame in association with information representing the display form of the object of each preset key frame, which is information representing the display form of the object at the time-point to which the same indicator is assigned. | 06-30-2011 |
20110164042 | Device, Method, and Graphical User Interface for Providing Digital Content Products - A multifunction device having a touch-sensitive surface displays graphical objects that represent digital content products, each graphic object having a front side image and a back side image. An initial display shows front side images of objects representing digital content products. A user input selects a graphical object, resulting in an animation that simultaneously flips the graphical object over and enlarges it. At the end of the animation, the back side is displayed, and is larger than the initial front side image. A second user input on a front side image of a second graphical object results in a second animation that simultaneously flips the first graphical object over and reduces its size, and also flips the second graphical object over and enlarges it. The front side image of the first graphical object and the back side image of the second graphical object are thereby concurrently displayed. | 07-07-2011 |
20110164043 | METHOD OF REMOTELY CONTROLLING A PRESENTATION TO FREEZE AN IMAGE USING A PORTABLE ELECTRONIC DEVICE - A system and method are set forth for remotely controlling a presentation from a portable electronic device so as to freeze a slide on a remote projector to permit searching for a desired slide on the portable electronic device and then continuing the presentation when searching is complete. In one embodiment, a switch is provided in a communication layer of a presentation application such that when the switch is turned off, communication is suspended between the portable electronic device and the projector, thereby permitting browsing on the portable electronic device without interrupting the presentation. When the switch is turned on the current slide information is transmitted from the portable electronic device to the projector. | 07-07-2011 |
20110164044 | Preparation method for the virtual reality of high fidelity sports and fitness equipment and interactive system and method based on the virtual reality - This invention provides a preparation method for high-fidelity virtual reality for sports and fitness equipment, and an interactive system and method based on that virtual reality. Image content is shot in a real scene while the control parameters of the sports and fitness equipment corresponding to that scene are captured synchronously. The control parameters let the equipment, as the real scene changes, automatically adjust in real time its forward and backward lean, left and right lean, and swinging angle, or automatically adjust its load; alternatively, the playing speed of the virtual reality follows the user's exercise speed, the load follows environmental parameters, or the visible portion of the real-scene image follows the gaze direction of the user's eyes, the direction of the user's face, or the swinging angle of the equipment. The method can prepare diversified virtual-reality digital content and reach a high level of fidelity for different exercise characteristics; the user is not limited by site or weather, and the field of application becomes wider. | 07-07-2011 |
20110175918 | CHARACTER ANIMATION CONTROL INTERFACE USING MOTION CAPTURE - A processor-readable medium stores code representing instructions to cause a processor to define a virtual feature. The virtual feature can be associated with at least one engaging condition. The code further represents instructions to cause the processor to receive an end-effector coordinate associated with an actor and calculate an actor intention based at least in part on a comparison between the at least one engaging condition and the end-effector coordinate. | 07-21-2011 |
20110175919 | SIGNAGE DISPLAY SYSTEM AND PROCESS - The invention is a novel display system for signage and the like. An apparatus and process are provided for displaying a static source image in such a manner that it is perceived as an animated sequence of images when viewed by an observer in relative motion to the apparatus. The source image is sliced or fractured to provide a plurality of image fractions of predetermined dimension. The fractions are redistributed in a predetermined sequence to provide an output image, which is placed in a preferably illuminated display apparatus provided with a mask. An observer in relative motion to the display apparatus sequentially views a predetermined selection of image fractions through the mask, which are perceived by the observer as a changing sequence of images. Applying the concepts of persistence of vision, the observer perceives the reconstructed imagery as live-action animation, a traveling singular image, a series of static images, or changing image sequences, from a plurality of lines of sight. | 07-21-2011 |
20110175920 | METHOD FOR HANDLING AND TRANSFERRING DATA IN AN INTERACTIVE INPUT SYSTEM, AND INTERACTIVE INPUT SYSTEM EXECUTING THE METHOD - A method in a computing device of transferring data to another computing device includes establishing wireless communication with the other computing device, designating data for transfer to the other computing device; and in the event that the computing device assumes a predetermined orientation, automatically initiating wireless transfer of the data to the other computing device. A system implementing the method is provided. A method of handling a graphic object in an interactive input system having a first display device includes defining a graphic object placement region for the first display device that comprises at least a visible display region of the first display device and an invisible auxiliary region between the visible display region and an outside edge of the first display device; and in the event that the graphic object enters the invisible auxiliary region, automatically moving the graphic object through the invisible auxiliary region until at least a portion of the graphic object enters a visible display region of a second display device of a second interactive input system. A system implementing the method, and other related systems and methods, are provided. | 07-21-2011 |
20110175921 | PERFORMANCE DRIVEN FACIAL ANIMATION - A method of animating a digital facial model, the method including: defining a plurality of action units; calibrating each action unit of the plurality of action units via an actor's performance; capturing first facial pose data; determining a plurality of weights, each weight of the plurality of weights uniquely corresponding to the each action unit, the plurality of weights characterizing a weighted combination of the plurality of action units, the weighted combination approximating the first facial pose data; generating a weighted activation by combining the results of applying the each weight to the each action unit; applying the weighted activation to the digital facial model; and recalibrating at least one action unit of the plurality of action units using input user adjustments to the weighted activation. | 07-21-2011 |
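As a sketch of the weight-solving step above, assuming action units are stored as per-vertex displacement fields and fitting is ordinary least squares (the patent does not specify the solver), one might write:

```python
# Sketch: approximate a captured facial pose as a weighted combination
# of calibrated action units, then apply the weighted activation to a
# digital face model. Least-squares fitting stands in for whatever
# solver the method actually uses; all data here is synthetic.

import numpy as np

rng = np.random.default_rng(0)
n_vertices, n_units = 300, 8

neutral = rng.normal(size=(n_vertices, 3))
# Each action unit is a calibrated displacement field over the face.
action_units = rng.normal(scale=0.1, size=(n_units, n_vertices, 3))

# Captured pose data (synthesized here from known weights for demo).
true_w = np.array([0.8, 0.0, 0.3, 0.0, 0.5, 0.0, 0.0, 0.1])
captured = neutral + np.tensordot(true_w, action_units, axes=1)

# Solve for weights minimizing ||neutral + A @ w - captured||.
A = action_units.reshape(n_units, -1).T          # shape (3V, n_units)
b = (captured - neutral).reshape(-1)             # shape (3V,)
weights, *_ = np.linalg.lstsq(A, b, rcond=None)

posed = neutral + np.tensordot(weights, action_units, axes=1)
print("recovered weights:", np.round(weights, 2))
print("residual:", float(np.linalg.norm(posed - captured)))
```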
20110175922 | METHOD FOR DEFINING ANIMATION PARAMETERS FOR AN ANIMATION DEFINITION INTERFACE - A system and a computer-readable medium are provided for controlling a computing device to define a set of computer animation parameters for an object to be animated electronically. An electronic reference model of the object to be animated is obtained. The reference model is altered to form a modified model corresponding to a first animation parameter. Physical differences between the electronic reference model and the modified model are determined and a representation of the physical differences is stored as the first animation parameter. Altering of the reference model and determining of the physical differences are repeated. The stored parameters are provided to a rendering device for generation of the animation in accordance with the stored parameters. Determining physical differences between the electronic reference model and the modified model and storing a representation of the physical differences as the first animation parameter include comparing vertex positions of the reference model and the modified model. | 07-21-2011 |
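The storage scheme lends itself to a tiny sketch: the parameter is the vertex-wise delta between the reference and modified models, re-applied at render time scaled by an animation amount. The data layout and linear re-application are assumptions:

```python
# Sketch: store an animation parameter as the per-vertex difference
# between a reference model and a modified model, then let a renderer
# re-apply it with a weight. All names are illustrative.

import numpy as np

reference = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])
# Reference altered by an artist to embody "smile" at full strength.
modified = np.array([[0.0, 0.1, 0.0],
                     [1.2, 0.0, 0.0],
                     [0.0, 1.0, 0.1]])

# The stored parameter is just the physical difference.
parameters = {"smile": modified - reference}

def render_pose(ref, params, amounts):
    """Apply each stored parameter scaled by its animation amount."""
    out = ref.copy()
    for name, amount in amounts.items():
        out += amount * params[name]
    return out

print(render_pose(reference, parameters, {"smile": 0.5}))
```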
20110181601 | CAPTURING VIEWS AND MOVEMENTS OF ACTORS PERFORMING WITHIN GENERATED SCENES - Generating scenes for a virtual environment of a visual entertainment program, comprising: capturing views and movements of an actor performing within the generated scenes, comprising: tracking movements of a headset camera and a plurality of motion capture markers worn by the actor within a physical volume of space; translating the movements of the headset camera into head movements of a virtual character operating within the virtual environment; translating the movements of the plurality of motion capture markers into body movements of the virtual character; generating first person point-of-view shots using the head and body movements of the virtual character; and providing the generated first person point-of-view shots to the headset camera worn by the actor. | 07-28-2011 |
20110181602 | USER INTERFACE FOR AN APPLICATION - A user interface is provided for interacting with slides and objects provided on slides. In certain embodiments, the user interface includes features that are displayed attached to or proximate to selected slides or objects. In embodiments, aspects of the user interface may be used to preview, review, add, or modify transitions associated with animation from one slide to the next (or previous) and builds associated with animation of objects on slides. | 07-28-2011 |
20110181603 | ELECTRONIC READER DEVICE AND GRAPHICAL USER INTERFACE CONTROL METHOD THEREOF - An electronic reader device with a physical control disposed on a surface of the device housing. The physical control is operable to initiate a first function. A display disposed on the surface of the housing is operable to show a virtual control that initiates a second function. A sensor detects a drag operation moving the virtual control to a position on a border of the display adjacent to the physical control. A processor associates the second function with the physical control in response to the drag operation and performs the second function upon activation of the physical control. | 07-28-2011 |
20110181604 | METHOD AND APPARATUS FOR CREATING ANIMATION MESSAGE - A method for creating an animation message includes generating input information containing information regarding input time and input coordinates according to input order of drawing information input through a touch screen; dividing an image containing the drawing information and background information into a plurality of blocks; creating an animation message by mapping the input information to the plurality of blocks so that the drawing information can be sequentially reproduced according to the input order; allocating a parity bit per pre-set block range of the animation message in order to detect an error occurring in the animation message; and transmitting the created animation message. | 07-28-2011 |
20110181605 | SYSTEM AND METHOD OF CUSTOMIZING ANIMATED ENTITIES FOR USE IN A MULTIMEDIA COMMUNICATION APPLICATION - In an embodiment, a method is provided for creating a personal animated entity for delivering a multi-media message from a sender to a recipient. An image file from the sender may be received by a server. The image file may include an image of an entity. The sender may be requested to provide input with respect to facial features of the image of the entity in preparation for animating the image of the entity. After the sender provides the input with respect to the facial features of the image of the entity, the image of the entity may be presented as a personal animated entity to the sender to preview. Upon approval of the preview from the sender, the image of the entity may be presented as a sender-selectable personal animated entity for delivering the multi-media message to the recipient. | 07-28-2011 |
20110187723 | TRANSITIONING BETWEEN TOP-DOWN MAPS AND LOCAL NAVIGATION OF RECONSTRUCTED 3-D SCENES - Technologies are described herein for transitioning between a top-down map of a reconstructed structure within a 3-D scene and an associated local-navigation display. An application transitions between the top-down map and the local-navigation display by animating a view in a display window over a period of time while interpolating camera parameters from values representing a starting camera view to values representing an ending camera view. In one embodiment, the starting camera view is the top-down map view and the ending camera view is the camera view associated with a target photograph. In another embodiment, the starting camera view is the camera view associated with a currently-viewed photograph in the local-navigation display and the ending camera view is the top-down map. | 08-04-2011 |
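A minimal sketch of the transition animation above, assuming the camera is described by a handful of scalar parameters and using smoothstep easing (an assumption; the abstract only says parameters are interpolated over a period of time):

```python
# Sketch: animate a transition by interpolating camera parameters from
# a starting view to an ending view. Parameter names, values, and the
# easing function are illustrative assumptions.

def ease(u):
    """Smoothstep easing so the camera starts and stops gently."""
    return u * u * (3.0 - 2.0 * u)

def interpolate_camera(start, end, u):
    """Blend each scalar camera parameter at progress u in [0, 1]."""
    t = ease(u)
    return {k: start[k] + (end[k] - start[k]) * t for k in start}

top_down = {"x": 0.0, "y": 500.0, "z": 0.0, "pitch": -90.0, "fov": 45.0}
photo_view = {"x": 12.0, "y": 1.7, "z": -4.0, "pitch": 0.0, "fov": 60.0}

frames = 5
for i in range(frames + 1):
    cam = interpolate_camera(top_down, photo_view, i / frames)
    print({k: round(v, 1) for k, v in cam.items()})
```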
20110187724 | MOBILE TERMINAL AND INFORMATION DISPLAY METHOD - A mobile terminal includes a display unit to display information processed by the mobile terminal, the display unit comprising a touch panel; and a control unit to control the display to display first information in a first direction if a first drag is detected and to display second information in a second direction different from the first direction while the first information is displayed if a second drag is detected. An information display method for a mobile terminal includes displaying first information in a first direction in response to a first drag; and displaying second information in a second direction different from the first direction in response to a second drag while the first information is displayed. | 08-04-2011 |
20110187725 | COMMUNICATION CONTROL DEVICE, COMMUNICATION CONTROL METHOD, AND PROGRAM - There is provided a communication control device including: a data storage unit storing feature data representing features of appearances of one or more communication devices; an environment map building unit for building an environment map representing positions of communication devices present in a real space based on an input image obtained by imaging the real space and the feature data stored in the data storage unit; a detecting unit for detecting a user input toward a first communication device designating any data provided in the first communication device and a direction; a selecting unit for selecting a second communication device serving as a transmission destination of the designated data from the environment map based on the direction designated by the user input; and a communication control unit for transmitting the data provided in the first communication device from the first communication device to the second communication device. | 08-04-2011 |
20110187726 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM - There is provided an image processing device including: a data storage unit having feature data stored therein, the feature data indicating a feature of appearance of one or more physical objects; an environment map building unit for building an environment map based on an input image obtained by imaging a real space using an imaging device and the feature data stored in the data storage unit, the environment map representing a position of a physical object present in the real space; an information generating unit for generating animation data for displaying a status of communication via a communication interface on a screen, using the environment map built by the environment map building unit; and an image superimposing unit for generating an output image by superimposing an animation according to the animation data generated by the information generating unit on the input image. | 08-04-2011 |
20110187727 | APPARATUS AND METHOD FOR DISPLAYING A LOCK SCREEN OF A TERMINAL EQUIPPED WITH A TOUCH SCREEN - An apparatus and method for displaying a lock screen including a character object having a motion effect in a terminal equipped with a touch screen. The method includes locking the touch screen and displaying the lock screen including the character object having the motion effect on a preset background image. Upon generation of a touch input, determining whether the touch input is for unlocking the touch screen, and if the touch input is for unlocking the touch screen, unlocking the touch screen and controlling the character object to perform a preset action indicating the unlocking of the touch screen. | 08-04-2011 |
20110205233 | Constraint-Based Ordering for Temporal Coherence of Stroke Based Animation - A renderer allows for a flexible and temporally coherent ordering of strokes in the context of stroke-based animation. The relative order of the strokes is specified by the artist or inferred from geometric properties of the scene, such as occlusion, for each frame of a sequence, as a set of stroke pair-wise constraints. Using the received constraints, the strokes are partially ordered for each of the frames. Based on these partial orderings, for each frame, a permutation of the strokes is selected amongst the ones consistent with the frame's partial order, so as to globally improve the perceived temporal coherence of the animation. The sequence of frames can then, for instance, be rendered by ordering the strokes according to the selected set of permutations for the sequence of frames. | 08-25-2011 |
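The pair-wise constraints plus per-frame total order can be sketched with a standard topological sort; biasing ties toward the previous frame's order stands in, loosely, for the abstract's global coherence optimization, which it does not spell out:

```python
# Sketch: turn pair-wise "stroke A draws under stroke B" constraints
# into a per-frame drawing order via topological sorting, preferring
# the previous frame's order among valid permutations.

from graphlib import TopologicalSorter

def order_strokes(strokes, constraints, previous_order=None):
    """constraints: iterable of (under, over) pairs. Returns a total
    order consistent with them, biased toward previous_order."""
    ts = TopologicalSorter({s: set() for s in strokes})
    for under, over in constraints:
        ts.add(over, under)          # 'over' depends on 'under'
    ts.prepare()
    rank = {s: i for i, s in enumerate(previous_order or [])}
    result = []
    while ts.is_active():
        # Among strokes that are ready, keep last frame's relative order.
        for s in sorted(ts.get_ready(), key=lambda s: rank.get(s, 0)):
            result.append(s)
            ts.done(s)
    return result

frame1 = order_strokes("ABCD", [("A", "B"), ("C", "B")])
frame2 = order_strokes("ABCD", [("A", "B")], previous_order=frame1)
print(frame1, frame2)
```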
20110216074 | REORIENTING PROPERTIES IN HAIR DYNAMICS - Techniques are disclosed for orienting (or reorienting) properties of computer-generated models, such as those associated with dynamic models or simulation models. Properties (e.g., material or physical properties) that influence the behavior of a dynamic or simulation model (e.g., a complex curve model representing a curly hair) may be oriented or reoriented as desired using readily available reference frames. These reference frames may be obtained using a proxy model that corresponds to the dynamic or simulation model, in a manner that in some embodiments is less computationally expensive than techniques that determine reference frames directly from the dynamic or simulation model. In some embodiments, the proxy model may include a smoothed version of the dynamic or simulation model. In other embodiments, the proxy model may include a filtered or simplified version of the dynamic or simulation model. | 09-08-2011 |
20110216075 | INFORMATION PROCESSING APPARATUS AND METHOD, AND PROGRAM - An information processing apparatus includes a detection unit configured to detect a gesture made by a user, a recognition unit configured to recognize a type of the gesture detected by the detection unit, a control unit configured to control operation of a first application and a second application, and an output unit configured to output information of the first application or the second application. If the gesture is recognized by the recognition unit while the control unit is controlling the operation of the first application in the foreground, the control unit controls the operation of the second application operating in the background of the first application on the basis of the type of the gesture recognized by the recognition unit. | 09-08-2011 |
20110216076 | APPARATUS AND METHOD FOR PROVIDING ANIMATION EFFECT IN PORTABLE TERMINAL - An apparatus and method for providing a highly-realistic animation effect in a portable terminal. An apparatus for providing an animation effect in a portable terminal includes an animation processing unit increasing the realism of an animation by performing a composite animation scheme that continuously processes a key frame animation, which is represented in a fixed pattern, and a physical animation that is realistically represented according to peripheral environments. The apparatus also includes a display unit displaying an animation played by the animation processing unit. | 09-08-2011 |
20110227929 | STATELESS ANIMATION, SUCH AS BOUNCE EASING - An animation system is described herein that uses a transfer function on the progress of an animation that realistically simulates a bounce behavior. The transfer function maps normalized time and allows a user to specify both a number of bounces and a bounciness factor. Given a normalized time input, the animation system maps the time input onto a unit space where a single unit is the duration of the first bounce. In this coordinate space, the system can find the corresponding bounce and compute the start unit and end unit of this bounce. The system projects the start and end units back onto a normalized time scale and fits these points to a quadratic curve. The quadratic curve can be directly evaluated at the normalized time input to produce a particular output. | 09-22-2011 |
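The transfer function described above is concrete enough to sketch. Below is a minimal, hypothetical Python reading of it: bounce widths shrink geometrically with the bounciness factor, the input time is located within its bounce, and a quadratic arc is evaluated there. The decay rule and peak heights are assumptions, not the patent's exact curve fit:

```python
# Sketch of a stateless bounce transfer function: given a normalized
# time input, find which bounce it falls in, then evaluate a quadratic
# (free-fall) arc for that bounce.

def bounce(t, bounces=3, bounciness=2.0):
    """Height of a bouncing ball at normalized time t in [0, 1].
    Each bounce lasts 1/bounciness as long as the previous one and
    peaks at the square of its relative duration."""
    widths = [bounciness ** -k for k in range(bounces)]
    total = sum(widths)
    start = 0.0
    for w in widths:
        end = start + w / total          # project unit space -> [0, 1]
        if t <= end or w is widths[-1]:
            u = (t - start) / (end - start)
            peak = (w / widths[0]) ** 2  # energy lost per bounce
            return max(0.0, peak * 4.0 * u * (1.0 - u))  # quadratic arc
        start = end
    return 0.0

for i in range(11):
    t = i / 10
    print(f"{t:.1f} {bounce(t):.3f}")
```

A production easing would typically return one minus this height, so the eased value settles at its target rather than at zero.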
20110227930 | FARMINIZER SOFTWARE - Method for the transfer of project data of a multidimensional animation from a first computer to a second computer, which are connected via a network and at least one of which has a rendering device (render farm), the required information of which is determined by a set of digital rules, comprising the following steps: | 09-22-2011 |
20110227931 | METHOD AND APPARATUS FOR CHANGING LIP SHAPE AND OBTAINING LIP ANIMATION IN VOICE-DRIVEN ANIMATION - The present invention discloses a method and apparatus for changing lip shape and obtaining a lip animation in a voice-driven animation, which relate to computer technologies. The method for changing lip shape includes: obtaining audio signals and obtaining a motion extent proportion of lip shape according to characteristics of the audio signals; obtaining an original lip shape model inputted by a user and generating a motion extent value of the lip shape according to the original lip shape model and the obtained motion extent proportion of the lip shape; and generating a lip shape grid model set according to the obtained motion extent value of the lip shape and a preconfigured lip pronunciation model library. The apparatus for changing lip shape in a voice-driven animation includes an obtaining module, a first generating module and a second generating module. The solutions provided by the present invention have a simple algorithm and low cost. | 09-22-2011 |
20110227932 | Method and Apparatus for Generating Video Animation - Examples of the present invention provide a method and apparatus, in the animation field, for generating a video animation. The method includes: receiving a command sent by a user, determining an action corresponding to the command, and determining the total number of frames corresponding to the action and a motion coefficient of each frame; calculating an offset of each control point in each frame according to the motion coefficient of that frame; and generating a video animation according to the offset of each control point in each frame and the total number of frames. An apparatus for generating a video animation is also provided. | 09-22-2011 |
20110261060 | DRAWING METHOD AND COMPUTER PROGRAM - The present invention provides an easy method for creating animations from drawings. A user utilizes a user interface of an electronic medium to draw a first line, then goes back in the recording's timeline, and then draws a second line, such that a playback of the recording shows at least some portion of the first and second lines being drawn simultaneously. This allows a user to easily create animations from drawings for the purpose of visualization, art, entertainment, or encoding of synchronized motion. The invention allows for various ways in which the computer can receive a user's drawing events, in which drawing events are associated with timelines to create a recording, in which drawing events are displayed, and in which the playback of drawing events is saved. | 10-27-2011 |
20110267356 | ANIMATING A VIRTUAL OBJECT WITHIN A VIRTUAL WORLD - A method of animating a virtual object within a virtual world, wherein the virtual object comprises a plurality of object parts, the method comprising: at an animation update step: specifying a target frame in the virtual world; and applying control to a first object part, wherein the control is arranged such that the application of the control in isolation to the first object part would cause a movement of the first object part in the virtual world that (a) reduces a difference between a control frame and the target frame, the control frame being a frame at a specified position and orientation in the virtual world relative to the first object part and (b) has a substantially non-zero component along at most one or more degrees of freedom identified for the first object part. | 11-03-2011 |
20110267357 | ANIMATING A VIRTUAL OBJECT WITHIN A VIRTUAL WORLD - A method of animating a virtual object within a virtual world, the method comprising applying a weighted combination of task-space inverse-dynamics control and joint-space inverse-dynamics control to the virtual object. | 11-03-2011 |
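This abstract states the method in a single sentence; a minimal sketch of such a weighted blend, assuming per-joint torque lists and a scalar weight (representations not given in the abstract), might look like this:

```python
def blended_torques(task_space, joint_space, weight):
    """Weighted combination of task-space and joint-space inverse-dynamics
    torques, one value per joint. weight = 1.0 gives pure task-space
    control; weight = 0.0 gives pure joint-space control."""
    return [weight * ts + (1.0 - weight) * js
            for ts, js in zip(task_space, joint_space)]
```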
20110273455 | Systems and Methods of Rendering a Textual Animation - Systems and methods of rendering a textual animation are provided. The methods include receiving an audio sample of an audio signal that is being rendered by a media rendering source. The methods also include receiving one or more descriptors for the audio signal based on at least one of a semantic vector, an audio vector, and an emotion vector. Based on the one or more descriptors, a client device may render the textual transcriptions of vocal elements of the audio signal in an animated manner. The client device may further render the textual transcriptions of the vocal elements of the audio signal to be substantially in synchrony to the audio signal being rendered by the media rendering source. In addition, the client device may further receive an identification of a song corresponding to the audio sample, and may render lyrics of the song in an animated manner. | 11-10-2011 |
20110273456 | SYSTEM AND METHOD FOR PARTIAL SIMULATION AND DYNAMIC CONTROL OF SIZES OF ANIMATED OBJECTS - Systems and methods are provided for altering a portion of a simulation without deleteriously altering adjoining portions, thereby increasing the pace at which simulations may be made by decreasing the overall number and size of simulations required. In other implementations, the systems and methods provide convenient ways to dynamically control the size of animated objects, such as hair or cloth, using animated rest poses. | 11-10-2011 |
20110279461 | SPAWNING PROJECTED AVATARS IN A VIRTUAL UNIVERSE - The present invention provides a computer-implemented method and apparatus to project a projected avatar associated with an avatar in a virtual universe. A computer receives a command to project the avatar, the command having a projection point. The computer transmits a request to place a projected avatar at the projection point to a virtual universe host. The computer renders a tab associated with the projected avatar. | 11-17-2011 |
20110285727 | ANIMATION TRANSITION ENGINE - A method that facilitates smoothly animating content of a graphical user interface includes acts of receiving a description of a first virtual scene and receiving a description of a second virtual scene. The method also includes an act of causing an animated transition to be displayed on a display screen of a computing device between the first virtual scene and the second virtual scene at a graphical object level based at least in part upon the description of the first virtual scene and the description of the second virtual scene, wherein the animated transition at the graphical object level is an animated change of a graphical object between the first virtual scene and the second virtual scene. | 11-24-2011 |
20110292053 | PLACEMENT OF ANIMATED ELEMENTS USING VECTOR FIELDS - The placement of one animated element in a virtualized three-dimensional environment can be accomplished with reference to a second animated element and a vector field derived from the relationship between them. If the first animated element is “inside” the second animated element after the second one was moved to a new animation frame, a vector field can be calculated for the region where it is “inside”. The vector field can comprise vectors whose direction and magnitude are commensurate with the initial velocity and direction required to move the first animated element back outside of the second one. Movement of the first animated element can then be simulated in accordance with the vector field, and afterwards a determination can be made whether any portion still remains inside. Such an iterative process can move and place the first animated element prior to the next move of the second animated element. | 12-01-2011 |
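A minimal sketch of the iterative placement loop described above, assuming `is_inside` and `field_vector` callbacks that stand in for the geometric tests the abstract leaves unspecified:

```python
def settle_outside(position, is_inside, field_vector, max_steps=8):
    """Iteratively simulate movement of a penetrating element along the
    vector field until no portion remains inside the other element.
    is_inside(p) -> bool and field_vector(p) -> (vx, vy) are assumed
    callbacks; position is a 2-tuple for brevity."""
    for _ in range(max_steps):
        if not is_inside(position):
            break                          # placement resolved
        vx, vy = field_vector(position)    # push-out velocity and direction
        position = (position[0] + vx, position[1] + vy)
    return position
```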
20110292054 | System and Method for Low Bandwidth Image Transmission - An image transmission method (and related system) for obtaining data of a local subject and processing the data of the local subject to fit a local model of at least a region of the local subject and extract parameters of the local model to capture features of the region of the local subject. The method (and related system) may also include obtaining data of at least one remote subject and processing the data of the remote subject to fit a remote model of at least one region of the remote subject and extract parameters of the remote model to capture features of the region of the remote subject. The method (and related system) may also include transmitting the extracted parameters of the local region to a remote processor and reconstructing the local image based on the extracted parameters of the local region and the extracted parameters of the remote region. | 12-01-2011 |
20110292055 | SYSTEMS AND METHODS FOR ANIMATING NON-HUMANOID CHARACTERS WITH HUMAN MOTION DATA - Systems, methods and products for animating non-humanoid characters with human motion are described. One aspect includes selecting key poses included in initial motion data at a computing system; obtaining non-humanoid character key poses which provide a one to one correspondence to selected key poses in said initial motion data; and statically mapping poses of said initial motion data to non-humanoid character poses using a model built based on said one to one correspondence from said key poses of said initial motion data to said non-humanoid character key poses. Other embodiments are described. | 12-01-2011 |
20110298808 | Animated Vehicle Attendance Systems - In one embodiment, an animated vehicle attendant system may include: a communication path positioned within a vehicle; an avatar creation interface positioned within a passenger compartment of the vehicle and communicatively coupled to the communication path; a first display positioned within the passenger compartment of the vehicle and communicatively coupled to the communication path, wherein the first display includes a first processor and a first memory; a second display positioned within the passenger compartment of the vehicle and communicatively coupled to the communication path, wherein the second display includes a second processor and a second memory; and an animated avatar including one or more alterable visual characteristics. The animated avatar is stored in the first memory and/or the second memory. The first processor and/or the second processor executes machine readable instructions to: receive input from the avatar creation interface; update the one or more alterable visual characteristics based upon the input from the avatar creation interface; and present the animated avatar on the first display and/or the second display. | 12-08-2011 |
20110298809 | ANIMATION EDITING DEVICE, ANIMATION PLAYBACK DEVICE AND ANIMATION EDITING METHOD - An animation editing device includes animation data comprising time line data, which defines frames on the basis of a time line showing the temporal display order of the frames, and space line data, which defines frames on the basis of a space line that maps onto a one-dimensional straight line the relative positional relationship between the display position of each animation part and a reference position shown by a tag. The device displays the time line, the space line, and the contents of the frames based on them, and accepts an editing command to perform an editing process according to the inputted command. | 12-08-2011 |
20110304629 | REAL-TIME ANIMATION OF FACIAL EXPRESSIONS - Animation of a character, such as a video game avatar, to reflect facial expressions of an individual in real-time is described herein. An image sensor is configured to generate a video stream, wherein frames of the video stream include a face of an individual. Facial recognition software is utilized to extract data from the video stream that is indicative of facial expressions of the individual. A three-dimensional rig is driven based at least in part upon the data that is indicative of facial expressions of the individual, and an avatar is animated to reflect the facial expressions of the user in real-time based at least in part upon the three-dimensional rig. | 12-15-2011 |
20110304630 | REAL-TIME TERRAIN ANIMATION SELECTION - In-game characters select the proper animation to use depending on the state of the terrain on which they are currently moving. In this specific case the character chooses an animation depending on the angle of the ground on which it is walking. The method involves real-time determination of the ground angle, which is then used to choose the most desirable animation from a closed set of pre-created animations. The animation set consists of animations rendered with the character moving on flat terrain, as well as animations rendered of the character moving uphill and downhill (separately) at pre-determined angles. In this game, an animation set consisted of the following animations: 0 degrees; 15, 30, and 45 degrees uphill; and 15, 30, and 45 degrees downhill. Drawing of the animation is offset to give the best appearance relative to the ground angle. | 12-15-2011 |
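Selecting from the closed set of pre-created animations reduces to a nearest-angle lookup. A sketch, with downhill encoded as negative angles (an assumed convention):

```python
def select_terrain_animation(ground_angle_deg):
    """Choose the closest match from the closed set of pre-created clips
    listed in the abstract. Negative angles denote downhill."""
    clip_angles = [-45, -30, -15, 0, 15, 30, 45]
    return min(clip_angles, key=lambda a: abs(a - ground_angle_deg))
```

For example, a measured ground angle of 22 degrees uphill selects the 15-degree uphill clip.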
20110304631 | METHODS AND APPARATUSES FOR PROVIDING A HARDWARE ACCELERATED WEB ENGINE - Methods of expressing animation in a data stream are disclosed. In one embodiment, a method of expressing animation in a data stream includes defining animation states in the data stream with each state having at least one property such that properties are animated as a group. The animation states that are defined in the data stream may be expressed as an extension of a styling sheet language. The data stream may include web content and the defined animation states. | 12-15-2011 |
20110310104 | DIGITAL COMIC BOOK FRAME TRANSITION METHOD - A method is provided in which, during a first period of time, first data relating to a first frame of a digital comic book are displayed via an electronic display device. The displayed first data includes a first content element. During a second period of time, second data relating to a second frame of the digital comic book is displayed via the electronic display device. The displayed second data also includes the first content element. A frame transition effect is displayed during a third period of time intermediate the first period of time and the second period of time. During the third period of time, an animation sequence is displayed depicting translation of the first content element between a location of the first content element in the displayed first data and a location of the first content element in the displayed second data. In particular, the animation sequence is displayed superimposed on the frame transition effect. | 12-22-2011 |
20110316858 | Apparatuses and Methods for Real Time Widget Interactions - An electronic interaction apparatus is provided with a touch screen and a processing unit. The processing unit executes a first widget and a second widget, wherein the first widget generates an animation on the touch screen and modifies the animation in response to an operating status change of the second widget. | 12-29-2011 |
20110316859 | APPARATUS AND METHOD FOR DISPLAYING IMAGES - An apparatus and method for displaying images are provided. The apparatus is configured to cause a display of an image; detect one or more inputs by one or more input objects; determine coordinates of the one or more inputs with respect to the image; determine one or more properties of the inputs; and cause production of an animation with the image, the animation relating to the determined coordinates and being configured on the basis of the one or more detected properties of the one or more inputs. | 12-29-2011 |
20110316860 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM - An appropriate motion expression is carried out in which the processing load of image processing for a character's motion is reduced and a predetermined site of the character appropriately contacts a contact-allowed object. | 12-29-2011 |
20120001923 | SOUND-ENHANCED EBOOK WITH SOUND EVENTS TRIGGERED BY READER PROGRESS - A sound-enhanced ebook is disclosed, the sound being presented to a reader of the ebook in accordance with the reader's progress through the ebook. The sound-enhanced ebook includes text information, and a plurality of sound events, each sound event being played in response to a reader's progress through particular text information associated with the sound event. Also disclosed is an ebook presenter for presenting text and coordinated sound events of a sound-enhanced ebook to a reader, the sound events being presented as the reader progresses through particular text of the ebook. The ebook presenter includes a text presentation module, a reader progress module, and a sound event presentation module, each sound event being associated with particular text information of the ebook, and each sound event being presentable in response to the reader's progress through the text information of the ebook as estimated by the reader progress module. | 01-05-2012 |
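A sketch of the progress-triggered playback described in this entry, assuming reader progress is estimated as a fraction of the ebook and each event is a (trigger_fraction, sound) pair; the patent associates events with particular text spans, so this fractional representation is a simplification.

```python
def newly_due_events(sound_events, previous_progress, current_progress):
    """Return the sounds whose trigger points were crossed since the last
    progress estimate. sound_events is a list of (trigger_fraction, sound)
    pairs; both progress values are fractions in [0, 1] (assumed)."""
    return [sound for trigger, sound in sound_events
            if previous_progress < trigger <= current_progress]
```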
20120001924 | Embedding Animation in Electronic Mail, Text Messages and Websites - Provided are techniques for providing animation in electronic communications. An image is generated by capturing multiple photographs from a camera or video camera. The first photograph is called the “naked photo.” Using a graphics program, photos subsequent to the naked photo are edited to cut an element common to the subsequent photos. The cut images are pasted into the naked photo as layers. The modified naked photo, including the layers, is stored as a web-enabled graphics file, which is then transmitted in conjunction with electronic communication. When the electronic communication is received, the naked photo is displayed and each of the layers is displayed and removed in the order in which each was taken, with a short delay between photos. In this manner, a movie is generated with much smaller files than are currently possible. | 01-05-2012 |
20120007869 | Distributed physics based training system and methods - A distributed simulation system is composed of simulator stations linked over a network, each of which renders real-time video imagery for its user from scene data stored in its data storage. The simulation stations are each connected with a physics farm that manages the virtual objects in the shared virtual environment based on their physical attribute data using physics engines, including an engine at each simulation station. The physics engines of the physics farm are assigned virtual objects so as to reduce the effects of latency, to ensure fair-fight requirements of the system, and, where the simulation is of a vehicle, to accurately model the ownship of the user at the station. A synchronizer system is also provided that allows actions of simulated entities relying on localized closed-loop controls to cause the entities to meet specific goal points at specified system time points. | 01-12-2012 |
20120007870 | METHOD OF CHANGING PROCESSOR-GENERATED INDIVIDUAL CHARACTERIZATIONS PRESENTED ON MULTIPLE INTERACTING PROCESSOR-CONTROLLED OBJECTS - Processor-controlled objects, such as inter-communicating processor-controlled blocks, are adapted to present changeable individual characterizations to a user. A user manipulating the objects can cause, over time, a designated object to inherit characterizations and properties from other interacting objects to permit scalability in a set of such objects. The communication of individual characterization between interacting objects allows generation of sensory responses (in a response generator of a specific object or otherwise in a response generator associated with at least one other similar objects) based on proximity, relative position and the individual characterization presented on and by those interacting objects at the time of interaction. In this way, a set of objects has vastly extended interactive capabilities since each object is capable of dynamically taking on different characterizations arising from a meaningful combination of properties from different conjoined objects. | 01-12-2012 |
20120013620 | Animating Speech Of An Avatar Representing A Participant In A Mobile Communications With Background Media - Animating speech of an avatar representing a participant in a mobile communication, including preparing the avatar for display by: selecting images to represent the participant, selecting a generic animation template having a mouth, fitting the images to the generic animation template, and texture wrapping the one or more images representing the participant over the generic animation template; selecting background media; displaying the images texture wrapped over the generic animation template with the background media; and animating the images by: receiving an audio speech signal, identifying a series of phonemes, and, for each phoneme: identifying a next mouth position, altering the mouth position, texture wrapping a portion of the images corresponding to the altered mouth position, displaying the texture wrapped portion and playing, synchronously with the displayed texture wrapped portion, the portion of the audio speech signal represented by the phoneme. | 01-19-2012 |
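The per-phoneme loop in this entry might be sketched as follows. The Phoneme record, the mouth-shape lookup table (with an assumed "rest" entry), and the platform callbacks are all illustrative assumptions.

```python
from collections import namedtuple

# Hypothetical phoneme record: its symbol plus the matching audio slice.
Phoneme = namedtuple("Phoneme", ["symbol", "audio_slice"])

def animate_speech(phonemes, mouth_positions, show_frame, play_audio):
    """For each phoneme: identify the next mouth position, display the
    texture-wrapped mouth region, and play the matching audio slice in
    sync. mouth_positions maps phoneme symbol -> mouth shape and must
    contain a 'rest' fallback; show_frame and play_audio are assumed
    platform callbacks."""
    for ph in phonemes:
        mouth = mouth_positions.get(ph.symbol, mouth_positions["rest"])
        show_frame(mouth)            # re-wrapped portion for this mouth shape
        play_audio(ph.audio_slice)   # blocks for the phoneme's duration
```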
20120013621 | System and Method for Facilitating the Creation of Animated Presentations - The system and method of creating animated presentations of the present invention focuses largely on the ability for web users with little training to easily create and share animated presentations with other users on the web in addition to allowing experienced artists to share and gain recognition for their works. The system according to the present invention further makes use of manipulable puppets that permit adjustment at several joints in order to facilitate the illusion of movement. The user can very simply adjust the puppet in each frame to their liking and then the system combines the frames into an animated presentation. The user is further able to use other tools available in the animation creator to, for example, adjust the background of the animation, edit the facial expression of the puppet, add text, and/or other shapes to the animation in order to create a unique animated presentation. | 01-19-2012 |
20120019540 | Sliding Motion To Change Computer Keys - The subject matter of this specification can be implemented in, among other things, a computer-implemented touch screen user interface method that includes displaying a plurality of keys of a virtual keyboard on a touch screen computer interface, wherein the keys each include initial labels and a first key has multi-modal input capability that includes a first mode in which the key is tapped and a second mode in which the key is slid across. The method further includes identifying an occurrence of sliding motion in a first direction by a user on the touch screen and over the first key. The method further includes determining modified key labels for at least some of the plurality of keys. The method further includes displaying the plurality of keys with the modified labels in response to identifying the occurrence of sliding motion on the touch screen and over the first key. | 01-26-2012 |
20120026172 | COLLISION FREE CONSTRUCTION OF ANIMATED FEATHERS - To generate a skin-attached element on a skin surface of an animated character, a region of the skin surface within a predetermined distance from a skin-attached element root position is deformed to form a lofted skin according to one of a plurality of constraint surfaces, where the constraint surfaces do not intersect one another. A sublamina mesh surface constrained to the lofted skin is created. A two-dimensional version of the skin-attached element is projected onto the sublamina mesh surface. The lofted skin is then reverted back to the state of the skin surface prior to the deformation. | 02-02-2012 |
20120026173 | Transitioning Between Different Views of a Diagram of a System - Presenting different views of a system based on input from a user. A first view of a first portion of the system may be displayed. For example, the first portion may be a device of the system. User input specifying a first gesture may be received. In response to the first gesture, a second view of the first portion of the system may be displayed. For example, the first view may represent a first level of abstraction of the portion of the system and the second view may represent a second level of abstraction of the portion of the system. A second gesture may be used to view a view of a different portion of the system. Additionally, when changing from a first view to a second view, the first view may “morph” into the second view. | 02-02-2012 |
20120026174 | Method and Apparatus for Character Animation - The present invention provides various means for the animation of character expression in coordination with an audio sound track. The animator selects or creates characters and expressive characteristics from a menu, and then enters the characteristics, including lip and mouth morphology, in coordination with a running sound track. | 02-02-2012 |
20120044250 | Systems, Methods, and Machine-Readable Storage Media for Presenting Animations Overlying Multimedia Files - Provided are systems, methods, and machine-readable storage media for presenting animations overlying multimedia files in accordance with the present disclosure. Embodiments are described for linking an animation to a multimedia file and presenting the animation overlying a concurrent playback of the multimedia file (e.g., its content). Embodiments are described for including additional elements to the presentation of the animation outside of the playback of the animation, including residual elements that relate to the content of the animation and/or allow a user to receive further information about the content of the animation. Embodiments are described for linking an animation to more than one multimedia file. | 02-23-2012 |
20120056889 | ALTERNATE SOURCE FOR CONTROLLING AN ANIMATION - Techniques and tools described herein provide effective ways to program a property of a target object to vary depending on a source. For example, for a key frame animation for a property of a target UI element, an alternate time source is set to a property of a source UI element. When the target UI element is rendered at runtime, the animation changes the target value depending on the value of the property of the source UI element. Features of UI elements and animations can be specified in markup language. The alternate time source can be specified through a call to a programming interface. Animations for multiple target UI elements can have the same source, in which case different parameters for the respective animations can be used to adjust source values in different ways. | 03-08-2012 |
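A sketch of an animation clocked by an alternate source, as this entry describes; the linear keyframe interpolation and all names are illustrative assumptions.

```python
class SourcedAnimation:
    """Key-frame animation whose clock is an alternate source: a property
    of a source UI element rather than wall time."""
    def __init__(self, keyframes, read_source, adjust=lambda v: v):
        self.keyframes = sorted(keyframes)  # (progress in [0, 1], value)
        self.read_source = read_source      # returns the source property
        self.adjust = adjust                # per-target source adjustment

    def current_value(self):
        """Evaluate the target value from the source property right now."""
        p = max(0.0, min(1.0, self.adjust(self.read_source())))
        for (p0, v0), (p1, v1) in zip(self.keyframes, self.keyframes[1:]):
            if p0 <= p <= p1:
                f = (p - p0) / (p1 - p0) if p1 > p0 else 0.0
                return v0 + f * (v1 - v0)
        return self.keyframes[-1][1]
```

Two target elements can share the same `read_source` while supplying different `adjust` functions, matching the abstract's note that different parameters can adjust source values in different ways.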
20120069028 | REAL-TIME ANIMATIONS OF EMOTICONS USING FACIAL RECOGNITION DURING A VIDEO CHAT - Embodiments are directed towards displaying an animated video emoticon by augmenting features identified in a video stream. Augmenting features identified in the video stream may include modifying, in whole or in part, some aspects of the identified features but not other aspects. For example, a user may select an animated video emoticon indicating surprise. Surprise may be conveyed by detecting the location of the user's eyes in the video stream, enlarging a size aspect of the eyes so as to appear ‘wide-eyed’, but leaving other aspects such as color and shape unchanged. Then, the location and/or orientation of the eyes in the video stream are tracked, and the augmentation is applied to the eyes at each tracked location and/or orientation. In another embodiment, identified features may be removed from the video stream and replaced with images, graphics, video, and the like. | 03-22-2012 |
20120075311 | IMAGE FORMING APPARATUS FOR DISPLAYING INFORMATION ON SCREEN - An image forming apparatus includes an operation panel serving as a display apparatus and an input apparatus for accepting a request for performing processing and a control unit for controlling display on the operation panel. The control unit performs determination processing for determining whether a time period required for the processing requested to be performed is predictable or not, provides animation display by continuously displaying two or more windows relating to the processing when it is determined that the time period required for the processing is predictable, and provides pop-up display by displaying one window relating to the processing when it is determined that the time period required for the processing is not predictable. | 03-29-2012 |
20120075312 | Avatars in Social Interactive Television - Virtual environments are presented on displays along with multimedia programs to permit viewers to participate in a social interactive television environment. The virtual environments include avatars that are created and maintained in part using continually updated animation data that may be captured from cameras that monitor viewing areas in a plurality of sites. User input from the viewers may be processed in determining which viewers are presented in instances of the virtual environment. Continually updating the animation data results in avatars accurately depicting a viewer's facial expressions and other characteristics. Presence data may be collected and used to determine when to capture background images from a viewing area that may later be subtracted during the capture of animation data. Speech recognition technology may be employed to provide callouts within a virtual environment. | 03-29-2012 |
20120092346 | GROUPING ITEMS IN A FOLDER - User interface changes and file system operations related to grouping items in a destination folder are disclosed. A user can group multiple items displayed on a user interface into a destination folder using an input command. An animation can be presented in the user interface illustrating the creation of the destination folder and the movement of each selected item into the newly created folder. The movement of each selected item can be along a respective path starting from an initial location on the user interface and terminating at the destination folder, and initiation of the movement of each selected item can be asynchronous with respect to the other selected items. Implementations showing the animations in various types of user interfaces are also disclosed. | 04-19-2012 |
20120092347 | ELECTRONIC DEVICE AND METHOD FOR DISPLAYING WEATHER INFORMATION THEREON - An electronic device and method display weather information using location images processed with image effects. A location of the electronic device is detected, and the electronic device receives weather information for that location from a server. Upon receiving the weather information, the electronic device reads the corresponding image effects from a storage unit and reads the images from the server according to the location information. The images, processed using the image effects, are then displayed on a display unit of the electronic device. | 04-19-2012 |
20120098836 | METHOD AND APPARATUS FOR TURNING PAGES IN E-BOOK READER - A method turns pages in an electronic book (e-book) reader. The method includes displaying an e-book as left and right pages and, when a page turn signal is generated, turning a left page to the right or a right page to the left while displaying the turning action as if turning the pages of a paper book. | 04-26-2012 |
20120098837 | APPARATUS FOR AUGMENTING A HANDHELD DEVICE - Apparatus and system that enables a handheld multimedia device (e.g. an mp3 player, such as the Apple™ iPod™ Touch, or a mobile telephone, such as the Apple™ iPhone™, or any other device that includes one or more multimedia technologies such as a display screen, touch input, video, audio and networking capabilities) to be adapted, both in software and physically, so as to be used for a new or enhanced purpose. | 04-26-2012 |
20120105455 | UTILIZING DOCUMENT STRUCTURE FOR ANIMATED PAGINATION - In general, this disclosure describes techniques for visually emphasizing information displayed on a computing device. In one example, a method that includes receiving a first portion of a document for display by the computing device, the first portion of the document including multiple elements separated by one or more delimiters. The method further includes dividing the multiple elements into a first set of one or more elements, each of which is displayable in its entirety at a time of display of the first portion of the document, and a second set of at least one element, the at least one element not displayable in its entirety at the time of display of the first portion of the document. The method further includes generating for display the first portion of the document, including visually emphasizing the first set of elements with respect to the second set of elements. | 05-03-2012 |
20120105456 | Interactive, multi-environment application for rich social profiles and generalized personal expression - A system and method provide users the ability to create fully extensible, visually-dominated tokens and associated tackboards; to create fully extensible, visually-dominated collections of tokens; to create an adaptable, interactive, animated, visual collage that supports visualization of the collection; and to communicate, share, explore, and interact with other users through social capabilities. | 05-03-2012 |
20120105457 | TIME-DEPENDENT CLIENT INACTIVITY INDICIA IN A MULTI-USER ANIMATION ENVIRONMENT - A method for managing a multi-user animation platform is disclosed. A three-dimensional space within a computer memory is modeled. An avatar of a client is located within the three-dimensional space, the avatar being graphically represented by a three-dimensional figure within the three-dimensional space. The avatar is responsive to client input commands, and the three-dimensional figure includes a graphical representation of client activity. The client input commands are monitored to determine client activity. The graphical representation of client activity is then altered according to an inactivity scheme when client input commands are not detected. Following a predetermined period of client inactivity, the inactivity scheme varies non-repetitively with time. | 05-03-2012 |
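One way to make an inactivity scheme vary non-repetitively with time, as the entry above requires, is to sum sinusoids with incommensurate frequencies, which never exactly repeat. The specific signal below is an assumption, not the patent's:

```python
import math

def idle_sway(idle_seconds):
    """A figure offset that never exactly repeats: the frequencies 1 and
    sqrt(2) are incommensurate, so their sum is non-periodic."""
    return math.sin(idle_seconds) + 0.5 * math.sin(math.sqrt(2.0) * idle_seconds)
```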
20120105458 | MULTIMEDIA INTERFACE PROGRESSION BAR - A method comprising displaying a display area adapted to display a multimedia element; displaying a progression bar adapted to represent at least a portion of a duration of the multimedia element; and providing an indicator associated with the progression bar, the indicator being displayable at a timely position along the at least a portion of the duration of the multimedia element and further adapted to timely enable an action when the multimedia element is played. The progression bar includes a movable play position indicator, and the action is enabled on the basis of proximity between the indicator and the play position indicator, the action encompassing timely displaying an image. | 05-03-2012 |
20120113125 | CONSTRAINT SYSTEMS AND METHODS FOR MANIPULATING NON-HIERARCHICAL OBJECTS - Methods and apparatus for animating images using bidirectional constraints are described. | 05-10-2012 |
20120113126 | Device, Method, and Graphical User Interface for Manipulating Soft Keyboards - A method includes, at an electronic device with a display and a touch-sensitive surface: concurrently displaying a first text entry area and an unsplit keyboard on the display; detecting a gesture on the touch-sensitive surface; and, in response to detecting the gesture on the touch-sensitive surface, replacing the unsplit keyboard with an integrated input area. The integrated input area includes a left portion with a left side of a split keyboard, a right portion with a right side of the split keyboard, and a center portion in between the left portion and the right portion. | 05-10-2012 |
20120127181 | AV DEVICE - Conventionally, the OSD of a channel sign is displayed in a uniform pattern regardless of the switching operation performed, so it may be difficult for a user who is watching the TV screen rather than the remote controller to intuitively grasp, from the information on the screen alone, which switching operation caused a given screen display. To solve this problem, an AV device is provided that, for example, animates the channel sign from bottom to top when an up key of a broadcast reception channel is operated, and animates the channel sign from right to left when an external input (input channel) switching operation is performed, so that the user can intuitively grasp the content of the operation. | 05-24-2012 |
20120133658 | DISPLAY CONTROL APPARATUS AND METHOD FOR CONTROLLING THE SAME - A display control apparatus includes a moving image display unit configured to control a display apparatus to display a moving image thereon, a reading unit configured to read animation information including a plurality of frame images, a detection unit configured to detect touch on the display apparatus, and a display control unit configured to control the display apparatus to display a first frame image of the animation information thereon if touch on a specific position of the display apparatus is detected by the detection unit while the moving image is being displayed on the display apparatus, and to start a transition display of frame images of the animation information when the touch on the display apparatus is no longer detected by the detection unit. | 05-31-2012 |
20120139923 | WRAPPER FOR PORTING A MEDIA FRAMEWORK AND COMPONENTS TO OPERATE WITH ANOTHER MEDIA FRAMEWORK - A system comprises a media framework component graph, a first media framework, a second media framework, and a media framework translator. The media framework component graph comprises one or more components. The one or more components are coupled with the first media framework. The first media framework is designed to run the media framework component graph. The media framework translator enables the first media framework and the media framework component graph to both function as a component for the second media framework. | 06-07-2012 |
20120139924 | DYNAMIC ADAPTION OF ANIMATION TIMEFRAMES WITHIN A COMPLEX GRAPHICAL USER INTERFACE - The dynamic adaption of animation timeframes includes selecting animations to be displayed on a graphical user interface (GUI) and aligning the selected animations in a queue. An overall duration of time needed to display the selected animations in the queue is determined based on timeframes associated with the selected animations in the queue. The overall duration of time is compared with a predefined time value. If the overall duration of time is greater than the predefined time, a timeframe associated with at least one of the selected animations in the queue is reduced until the overall duration of time is less than or equal to the predefined time value. Each of the selected animations in the queue are sequentially displayed on the GUI for an amount of time that is based on the timeframes associated with the selected animations in the queue. | 06-07-2012 |
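A sketch of the duration-fitting step described above; proportional scaling is one plausible reduction rule, since the abstract requires only that some timeframe shrink until the queue's overall duration fits the predefined value.

```python
def fit_animation_queue(timeframes, budget):
    """Reduce queued animation timeframes (seconds) so the queue's overall
    duration does not exceed the budget. Returns the adjusted timeframes."""
    total = sum(timeframes)
    if total <= budget:
        return list(timeframes)        # already fits; display as-is
    scale = budget / total             # shrink all timeframes proportionally
    return [tf * scale for tf in timeframes]
```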
20120147012 | COORDINATION OF ANIMATIONS ACROSS MULTIPLE APPLICATIONS OR PROCESSES - Animation coordination system and methods are provided that manage animation context transitions between and/or among multiple applications. A global coordinator can obtain initial information, such as initial graphical representations and object types, initial positions, etc., from initiator applications and final information, such as final graphical representations and object types, final positions, etc. from destination applications. The global coordination creates an animation context transition between initiator applications and destination applications based upon the initial information and the final information. | 06-14-2012 |
20120147013 | ANIMATION CONTROL APPARATUS, ANIMATION CONTROL METHOD, AND NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM - An animation control apparatus has: an interpolation component information creating unit ( | 06-14-2012 |
20120154407 | APPARATUS AND METHOD FOR PROVIDING SIMULATION RESULT AS IMAGE - Provided are an apparatus and method for providing a simulation result as an image. The method includes performing a simulation of a predetermined system and generating a result log of the simulation, and converting the result log of the simulation into an image on the basis of a database storing a state and operation of a model of the system as image data. Accordingly, a simulation result of a system can be provided without detailed information about the system, an additional application, or a separate storage. | 06-21-2012 |
20120154408 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - An information processing apparatus includes a display panel, a frame, a touch sensor, and a controller. The display panel includes a display surface of a predetermined display area. The frame includes a frame surface that surrounds the display panel and determines the display area. The touch sensor is configured to detect touches to the display surface and the frame surface. The controller is configured to execute predetermined processing when a touch to a first area on the display surface is detected, and to execute the predetermined processing when a touch to a second area on the frame surface is detected, the second area being adjacent to the first area. | 06-21-2012 |
20120188253 | SIGNAGE DISPLAY SYSTEM AND PROCESS - A display apparatus and process are provided for displaying a static source image in a manner in which it is perceived as an animated sequence of images when viewed by an observer in relative motion to the apparatus. The source image is sliced or fractured to provide a plurality of image fractions of predetermined dimension. The fractions are redistributed in a predetermined sequence to provide an output image, which is placed in a preferably illuminated display apparatus provided with a mask. An observer in relative motion to the display apparatus sequentially views a predetermined selection of image fractions through the mask, which are perceived by the observer as a changing sequence of images. Applying the concepts of persistence of vision, the observer perceives the reconstructed imagery as live-action animation, a traveling singular image or series of static images, or changing image sequences, from a plurality of lines of sight. | 07-26-2012 |
20120188254 | Distinguishing requests for presentation time from requests for data time - Techniques are provided for managing Presentation Time in a digital rendering system for presentation of temporally-ordered data when the digital rendering system includes a Variable Rate Presentation capability. In one embodiment, Presentation Time is converted to Data Time, and Data Time is reported instead of Presentation Time when only one time can be reported. In another embodiment, a predetermined one of Presentation Time and Data Time is returned in response to a request for a Current Time. | 07-26-2012 |
20120188255 | Framework for Graphics Animation and Compositing Operations - A framework for performing graphics animation and compositing operations has a layer tree for interfacing with the application and a render tree for interfacing with a render engine. Layers in the layer tree can be content, windows, views, video, images, text, media, or any other type of object for a user interface of an application. The application commits change to the state of the layers of the layer tree. The application does not need to include explicit code for animating the changes to the layers. Instead, an animation is determined for animating the change in state. The determined animation is explicitly applied to the affected layers in the render tree. A render engine renders from the render tree into a frame buffer for display on the processing device. Those portions of the render tree that have changed relative to prior versions can be tracked to improve resource management. | 07-26-2012 |
20120188256 | VIRTUAL WORLD PROCESSING DEVICE AND METHOD - A method and apparatus for processing a virtual world. A data structure of a virtual object of the virtual world may be defined and a virtual world object of the virtual world controlled, so that an object in the real world is reflected in the virtual world. Additionally, the virtual world object may migrate between virtual worlds using the defined data structure. | 07-26-2012 |
20120188257 | LOOPING MOTION SPACE REGISTRATION FOR REAL-TIME CHARACTER ANIMATION - A method for generating a looping motion space for real-time character animation may include determining a plurality of motion clips to include in the looping motion space and determining a number of motion cycles performed by a character object depicted in each of the plurality of motion clips. A plurality of looping motion clips may be synthesized from the motion clips, where each of the looping motion clips depicts the character object performing an equal number of motion cycles. Additionally, a starting frame of each of the plurality of looping motion clips may be synchronized so that the motion cycles in each of the plurality of looping motion clips are in phase with one another. By rendering an animation sequence using multiple passes through the looping motion space, an animation of the character object performing the motion cycles may be extended for arbitrary length of time. | 07-26-2012 |
20120194523 | Animation of Audio Ink - In a pen-based computing system, a microphone on the smart pen device records audio to produce audio data and a gesture capture system on the smart pen device records writing gestures to produce writing gesture data. Both the audio data and the writing gesture data include a time component. The audio data and writing gesture data are combined or synchronized according to their time components to create audio ink data. The audio ink data can be uploaded to a computer system attached to the smart pen device and displayed to a user through a user interface. The user makes a selection in the user interface to play the audio ink data, and the audio ink data is played back by animating the captured writing gestures and playing the recorded audio in synchronization. | 08-02-2012 |
20120200574 | TRAINING FOR SUBSTITUTING TOUCH GESTURES FOR GUI OR HARDWARE KEYS TO CONTROL AUDIO VIDEO PLAY - A user can toggle between GUI input and touch screen input with the GUI hidden, using touch gestures correlated to respective hidden GUI elements and, thus, to respective commands for a TV and/or disk player sending AV data thereto. When in the GUI input mode, an animated hand can be presented on the display moving through the touch gesture corresponding to a selected GUI element, to train the user on which touch gestures correspond to which GUI elements and, thus, to which commands. | 08-09-2012 |
20120218274 | ELECTRONIC DEVICE, OPERATION CONTROL METHOD, AND STORAGE MEDIUM STORING OPERATION CONTROL PROGRAM - According to an aspect, an electronic device includes a display unit, a contact detecting unit, a housing, and a control unit. The display unit displays an image. The contact detecting unit detects a contact. The housing has a first face in which the display unit is provided and a second face in which the contact detecting unit is provided. When a contact operation is detected by the contact detecting unit while a first image is displayed on the display unit, the control unit causes the display unit to display a second image. | 08-30-2012 |
20120223952 | Information Processing Device Capable of Displaying A Character Representing A User, and Information Processing Method Thereof. - The basic image specifying unit specifies the basic image of a character representing a user of the information processing device. The facial expression parameter generating unit converts the degree of the facial expression of the user to a numerical value. The model control unit determines an output model of the character for respective points of time. The moving image parameter generating unit generates a moving image parameter for generating animated moving image frames of the character for respective points of time. The command specifying unit specifies a command corresponding to the pattern of the facial expression of the user. The playback unit outputs an image based on the moving image parameter and the voice data received from the information processing device of the other user. The command executing unit executes a command based on the identification information of the command. | 09-06-2012 |
20120223953 | Kinematic Engine for Adaptive Locomotive Control in Computer Simulations - An adaptive locomotion control system is used within the physics processing of a computer simulation engine. The control system is applied to one or more ragdoll models which represent entities in a computer simulation. The control system applies state-detection, equation-of-motion, and applied-force functions to maintain the model's balance while standing still and while executing simple or complex movements. In one embodiment, the functions manipulate the model in a manner similar to the muscles of the modeled organism, particularly a human. In another embodiment, the functions apply spot forces to keep the model upright and to perform movements. | 09-06-2012 |
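A spot force keeping the model upright, as this entry describes, could plausibly be a PD law on the center of mass, sketched below; the gains and the 2-D ground-plane representation are assumptions, not the patent's functions.

```python
def balance_force(com, com_velocity, support_center, kp=800.0, kd=60.0):
    """Spot force nudging a ragdoll's center of mass (ground-plane
    projection) back over the center of its base of support. kp/kd are
    assumed proportional and damping gains."""
    ex = support_center[0] - com[0]
    ey = support_center[1] - com[1]
    return (kp * ex - kd * com_velocity[0],
            kp * ey - kd * com_velocity[1])
```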
20120229473 | Dynamic Animation in a Mobile Device - Method and system for monitoring occurrence of an event using dynamic animation are disclosed. The method includes identifying an event to be dynamically animated, defining a set of trigger conditions of the event to be monitored, monitoring the event according to the set of trigger conditions, computing a display unit in accordance with a comparison of a status of the event to a corresponding trigger condition of the event, creating a dynamic animation for display using the display unit, and displaying the dynamic animation on a display. | 09-13-2012 |
20120229474 | FLYING EFFECTS CHOREOGRAPHY SYSTEM - A flying effects choreography system provides visualizations of flying effects within a virtual environment. The system allows choreographers to define a sequence of waypoints that identify a path of motion. A physics engine of the system may then calculate position data for a performer or other element attached to a free-swinging pendulum cable, as the performer and pendulum cable move along the path of motion. In this manner, the position data describes the motion of the performer, including the pendulum effect or swing of the performer on the pendulum cable. The position data may be used to generate one or more visualizations that show the performer's motion, including the pendulum effect. The choreographer may review the visualizations and make modifications any number of times, until a desired flying effect is produced, without having to physically implement the flying effect in the real world. | 09-13-2012 |
20120236005 | AUTOMATICALLY GENERATING AUDIOVISUAL WORKS - In one embodiment, a method comprises inferentially selecting one or more design animation modules based upon analysis of information obtained from digital visual media items and digital audio media items; and automatically creating an audiovisual work using the selected design animation modules. Audiovisual works can be automatically created based upon inferred and implicit metadata including music genre, image captions, song structure, image focal points, as well as user-supplied data such as text tags, emphasis flags, groupings, and preferred video style. | 09-20-2012 |
20120236006 | MUSCULO-SKELETAL SHAPE SKINNING - A method for use in animation includes establishing a model having a plurality of bones with muscles attached to the bones, binding skin to the muscles when the model is in a first pose with each vertex of the skin being attached at a first attachment point on a muscle, deforming the model into a second pose, and selecting a second attachment point for each vertex of the skin in the second pose. A storage medium stores a computer program for causing a processor based system to execute these steps, and a system for use in animation includes a processing system configured to execute these steps. | 09-20-2012 |
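The abstract above does not say how the second attachment point is chosen; a nearest-point rule on the muscle surface is one plausible sketch.

```python
import numpy as np

def second_attachment_point(vertex, muscle_surface_points):
    """After deforming the model into the second pose, pick a new attachment
    point for a skin vertex: the closest of N sampled points on the muscle
    surface. vertex is shape (3,), muscle_surface_points is shape (N, 3)."""
    dists = np.linalg.norm(muscle_surface_points - vertex, axis=1)
    return muscle_surface_points[np.argmin(dists)]
```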
20120236007 | ANIMATION RENDERING DEVICE, ANIMATION RENDERING PROGRAM, AND ANIMATION RENDERING METHOD - An interpreter | 09-20-2012 |
20120236008 | IMAGE GENERATING APPARATUS AND IMAGE GENERATING METHOD - Disclosed are an image generating apparatus and an image generating method with which the size of a storage region required to display animated images can be suppressed. In the image generating apparatus ( | 09-20-2012 |
20120249555 | VISUAL CONNECTIVITY OF WIDGETS USING EVENT PROPAGATION - A method, system and computer program product receive a set of objects for connection, create a moving object within the set of objects, display visual connection cues on objects in the set of objects, adjust the visual connection cues of the moving object and a target object in the set of objects, identify event propagation precedence, and connect the moving object with the target object. | 10-04-2012 |
20120249556 | Methods, systems, and computer readable media for fast geometric sound propagation using visibility computations - Methods, systems, and computer program products for simulating sound propagation can be operable to define a sound source position within a modeled scene having a given geometry and construct a visibility tree for modeling sound propagation paths within the scene. Using from-region visibility techniques to model sound diffraction and from-point visibility technique to model specular sound reflections within the scene, the size of the visibility tree can be reduced. Using the visibility tree, an impulse response can be generated for the scene, and the impulse response can be used to simulate sound propagation in the scene. | 10-04-2012 |
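The final step named in the entry above, using the impulse response to simulate propagation, is standard auralization by convolution; a minimal sketch:

```python
import numpy as np

def auralize(dry_signal, impulse_response):
    """Once an impulse response has been generated for the scene, propagation
    is simulated by convolving the dry source signal with it. Both inputs
    are 1-D sample arrays at the same sample rate (assumed)."""
    return np.convolve(dry_signal, impulse_response)
```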
20120249557 | SYSTEM FOR PARTICLE EDITING - A computer animation system including polygon mesh editing tools configured to edit a particle simulation by first converting a particle cache of the simulation into a polygon mesh, editing the polygon mesh, and then converting the edited polygon mesh back into an edited particle cache. | 10-04-2012 |
20120256928 | Methods and Systems for Representing Complex Animation Using Scripting Capabilities of Rendering Applications - A computerized device implements an animation coding engine to analyze timeline data defining an animation sequence and generate a code package. The code package can represent the animation sequence using markup code that defines a rendered appearance of a plurality of frames and a structured data object also comprised in the code package and defining a parameter used by a scripting language in transitioning between frames. The markup code can also comprise a reference to a visual asset included within a frame. The code package further comprises a cascading style sheet defining an animation primitive as a style to be applied to the asset to reproduce one or more portions of the animation sequence without transitioning between frames. | 10-11-2012 |
20120262462 | PORTABLE ELECTRONIC DEVICE FOR DISPLAYING IMAGES AND METHOD OF OPERATION THEREOF - An electronic device having a display screen, one or more processors, and memory, and a method of operation thereof for displaying images is disclosed. The method comprises displaying a first image in a default position in a display area of the display screen. The method further comprises replacing the first image with a second image in the display area of the display screen, wherein the replacing of the first image by the second image is animated in the display area by the second image moving in from the edge of the display area to the default position and the first image simultaneously moving away from the default position with translational speed slower than that of the movement of the second image. The display screen may be a touch-sensitive display screen. The animation to replace the first image by the second image is initiated in response to a swipe gesture on the touch-sensitive display screen, and the speed of the animation may be a function of the speed of the swipe gesture. | 10-18-2012 |
20120274644 | Framework for Graphics Animation and Compositing Operations - A graphics animation and compositing operations framework has a layer tree for interfacing with the application and a render tree for interfacing with a render engine. Layers in the layer tree can be content, windows, views, video, images, text, media, or other types of objects for an application's user interface. The application commits state changes to the layers of the layer tree. The application does not need to include explicit code for animating the changes to the layers. Instead, an animation is determined for animating the change in state by the framework which can define a set of predetermined animations based on motion, visibility, and transition. The determined animation is explicitly applied to the affected layers in the render tree. A render engine renders from the render tree into a frame buffer. Portions of the render tree changing relative to prior versions can be tracked to improve resource management. | 11-01-2012 |
20120274645 | ALIGNING ANIMATION STATE UPDATE AND FRAME COMPOSITION - An event, such as a vertical blank interrupt or signal, received from a display adapter in a system is identified. Activation of a timer-driven animation routine that updates a state of an animation and activation of a paint controller module that identifies updates to the state of the animation and composes a frame that includes the updates to the state of the animation are aligned, both being activated based on the identified event in the system. | 11-01-2012 |
20120281001 | METHOD FOR CONSTRUCTING BODIES THAT ROTATE IN THE SAME DIRECTION AND ARE IN CONTACT WITH ONE ANOTHER AND COMPUTER SYSTEM FOR CARRYING OUT SAID METHOD - The invention relates to a method for constructing bodies which, while rotating codirectionally about axes arranged in parallel, constantly touch one another at at least one point. | 11-08-2012 |
20120287137 | Management of Presentation Time in a Digital Media Presentation System with Variable Rate Presentation Capability - Techniques are provided for managing Presentation Time in a digital rendering system for presentation of temporally-ordered data when the digital rendering system includes a Variable Rate Presentation capability. In one embodiment, Presentation Time is converted to Data Time, and Data Time is reported instead of Presentation Time when only one time can be reported. In another embodiment, a predetermined one of Presentation Time and Data Time is returned in response to a request for a Current Time. | 11-15-2012 |
20120293517 | SYSTEM AND METHOD FOR VIDEO CHOREOGRAPHY - An electronic entertainment system for creating a video sequence by executing video game camera behavior based upon a video game sound file includes a memory configured to store an action event/camera behavior (AE/CB) database, game software such as an action generator module, and one or more sound files. In addition, the system includes a sound processing unit coupled to the memory for processing a selected sound file, and a processor coupled to the memory and the sound processing unit. The processor randomly selects an AE pointer and a CB pointer from the AE/CB database. Upon selection of the CB pointer and the AE pointer, the action generator executes camera behavior corresponding to the selected CB pointer to view an action event corresponding to the selected AE pointer. | 11-22-2012 |
20120299933 | Collection Rearrangement Animation - Collection rearrangement animation techniques are described herein, which can be employed to represent changes made by a rearrangement in a manner that reduces or eliminates visual confusion. A collection of items arranged at initial positions can be displayed. Various interactions can initiate a rearrangement of the collection of items, such as to sort the items, add or remove an item, or reposition an item. An animation of the rearrangement is depicted that omits at least a portion of the spatial travel along pathways from the initial positions to destination positions in the rearranged collection. In one approach, items can be animated to disappear from the initial positions and reappear at destination positions. This can occur by applying visual transitions that are bound to dimensional footprints of the items in the collection. Additionally or alternatively, intermediate and overlapping positions can be omitted by the animation. | 11-29-2012 |
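The disappear/reappear approach above lends itself to a small planning routine. The sketch below (illustrative names throughout) compares old and new item positions and emits fade transitions bound to each item's footprint, so no spatial travel is ever shown.

```python
# Minimal sketch of the disappear/reappear rearrangement described above.
# Instead of tweening positions (which would show spatial travel), each
# moved item gets a fade-out bound to its old footprint and a fade-in
# bound to its new footprint.

def rearrangement_transitions(old_positions, new_positions):
    """Yield (item, effect, rect) steps; moved items never travel on-screen."""
    steps = []
    for item, old_rect in old_positions.items():
        new_rect = new_positions.get(item)
        if new_rect is None:
            steps.append((item, "fade_out", old_rect))      # item removed
        elif new_rect != old_rect:
            steps.append((item, "fade_out", old_rect))      # vanish in place
            steps.append((item, "fade_in", new_rect))       # reappear at target
    for item, new_rect in new_positions.items():
        if item not in old_positions:
            steps.append((item, "fade_in", new_rect))       # item added
    return steps

old = {"a": (0, 0), "b": (1, 0), "c": (2, 0)}
new = {"a": (2, 0), "b": (1, 0), "d": (0, 0)}   # "a" moved, "c" removed, "d" added
for step in rearrangement_transitions(old, new):
    print(step)
```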
20120299934 | Method and Apparatus for Creating a Computer Simulation of an Actor - A method for creating a computer simulation of an actor having a first foot, a second foot and a body, including the steps of planting the first foot as a support foot along a space time-varying path. There is the step of stopping time regarding placement of the first foot. There is the step of changing posture of the first foot while the first foot is planted. There is the step of moving time into the future for the second foot as a lifted foot and changing posture for the lifted foot. An apparatus for creating a computer simulation of an actor having a first foot, a second foot and a body. A software program for creating a computer simulation of an actor having a first foot, a second foot and a body that performs the steps of planting the first foot as a support foot along a space time-varying path. | 11-29-2012 |
20120306889 | METHOD AND APPARATUS FOR OBJECT-BASED TRANSITION EFFECTS FOR A USER INTERFACE - A method and apparatus can provide object-based transition effects for a user interface. The method can include displaying at least one first element corresponding to a first activity on a screen of a user device. The method can include receiving a baton transition request and generating first activity baton information. The method can include displaying a first baton image corresponding to the first activity baton information and generating second activity baton information that provides visual transition information for a transition from the first activity to the second activity. The method can include transitioning the first baton image corresponding to the first activity baton information to a second image corresponding to the second activity baton information, displaying the second image corresponding to the second activity baton information, and displaying at least one second element corresponding to the second activity on the screen. | 12-06-2012 |
20120306890 | Device and Method for Dynamically Rendering an Animation - An electronic device includes a display, one or more processors, and memory storing programs for execution by the one or more processors. The programs include one or more applications and an application service module. The application service module includes instructions for, in response to receiving a triggering event from a respective application of the one or more applications, initializing an animation object with one or more respective initialization values corresponding to the triggering event. The animation object comprises an instance of a predefined animation software class. At each of a series of successive times, the device updates the animation object so as to produce a respective animation value in accordance with a predefined animation function, and renders on the display a user interface including one or more user interface objects in accordance with the respective animation value from the animation object. | 12-06-2012 |
20120306891 | Device and Method for Dynamically Rendering an Animation - A device includes one or more processors, and memory storing programs. The programs include a respective application and an application service module. The application service module includes instructions for, in response to a triggering event from the respective application, initializing an animation object with one or more respective initialization values corresponding to the triggering event. The animation object includes an instance of a predefined animation software class. At each of a series of successive times, the device updates the animation object so as to produce a respective animation value in accordance with a predefined animation function based on a primary function of an initial velocity and a deceleration rate and one or more secondary functions. The device updates a state of one or more user interface objects in accordance with the respective animation value, and renders on a display a user interface in accordance with the updated state. | 12-06-2012 |
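A primary function of an initial velocity and a deceleration rate, as named in the abstract above, is commonly a clamped kinematic decay; the sketch below assumes that form. The class name and constants are hypothetical.

```python
# Hedged sketch of an animation object whose value follows a primary
# function of initial velocity and deceleration rate, as the abstract
# outlines; the exact decay law here is an assumption.

class DecelerationAnimation:
    def __init__(self, initial_velocity, deceleration_rate):
        self.v0 = initial_velocity            # e.g. px/s from a fling gesture
        self.d = deceleration_rate            # px/s^2, d > 0
        self.duration = self.v0 / self.d      # time until velocity reaches 0

    def value(self, t):
        """Distance travelled at time t: v0*t - d*t^2/2, clamped at rest."""
        t = min(t, self.duration)
        return self.v0 * t - 0.5 * self.d * t * t

anim = DecelerationAnimation(initial_velocity=800.0, deceleration_rate=1600.0)
for t in (0.0, 0.25, 0.5, 1.0):
    print(f"t={t:.2f}s value={anim.value(t):.1f}px")
```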
20120313951 | TECHNIQUES FOR SYNCHRONIZING HARDWARE ACCELERATED GRAPHICS RENDERING AND SURFACE COMPOSITION - A method, a non-transitory computer readable medium having instructions recorded therein for performing the method, and a processing device for rendering an animation for a screen. The method includes rendering a frame of animation of a screen, attaching a Move Surfaces at BufferSwap (MSBS) command to at least one surface to be aligned with the frame of animation, swapping the buffer of the frame of animation, updating at least one of a size and a location of the at least one surface having an attached MSBS command, and composing a scene including the contents of the at least one surface of which the at least one of the size and the location has been updated. | 12-13-2012 |
20120320066 | Modifying an Animation Having a Constraint - A computer-implemented method for handling a modification of an animation having a constraint includes detecting a user modification of an animation that involves at least first and second objects, the first object constrained to the second object during a constrained period and non-constrained to the second object during a non-constrained period. The method includes, based on the user modification, selecting one of at least first and second compensation adjustments for the animation based on a compensation policy; and adjusting the animation according to the selected compensation adjustment. | 12-20-2012 |
20120327088 | Editable Character Action User Interfaces - A system includes a computing device that includes a memory configured to store instructions. The computing device also includes a processor configured to execute the instructions to perform a method that includes defining at least one of a location in a virtual scene and a time represented in a timeline as being associated with a performance of an animated character. The method also includes aggregating data that represents actions of the animated character for at least one of the defined location and the defined time. The method also includes presenting a user interface that includes a representation of the aggregated actions. The representation is editable to adjust at least one action included in the aggregation. | 12-27-2012 |
20120327089 | Fully Automatic Dynamic Articulated Model Calibration - A depth sensor obtains images of articulated portions of a user's body such as the hand. A predefined model of the articulated body portions is provided. The model is matched to corresponding depth pixels which are obtained from the depth sensor, to provide an initial match. The initial match is then refined using distance constraints, collision constraints, angle constraints and a pixel comparison using a rasterized model. Distance constraints include constraints on distances between the articulated portions of the hand. Collision constraints can be enforced when the model meets specified conditions, such as when at least two adjacent finger segments of the model are determined to be in a specified relative position, e.g., parallel. The rasterized model includes depth pixels of the model which are compared to identify overlapping pixels. Dimensions of the articulated portions of the model are individually adjusted. | 12-27-2012 |
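One ingredient named above, a distance constraint between articulated portions, can be illustrated with a generic position-based projection step; this is a sketch of the general technique, not the patent's specific solver.

```python
# Illustrative sketch of a distance constraint between adjacent articulated
# portions, enforced by projecting two joint positions back to their rest
# length. This is a generic position-based step, with invented values.

import math

def enforce_distance(p, q, rest_length):
    """Move p and q symmetrically so that |p - q| == rest_length."""
    dx, dy, dz = (q[0] - p[0], q[1] - p[1], q[2] - p[2])
    dist = math.sqrt(dx * dx + dy * dy + dz * dz) or 1e-9
    corr = 0.5 * (dist - rest_length) / dist
    p = (p[0] + dx * corr, p[1] + dy * corr, p[2] + dz * corr)
    q = (q[0] - dx * corr, q[1] - dy * corr, q[2] - dz * corr)
    return p, q

knuckle, tip = (0.0, 0.0, 0.0), (0.0, 0.0, 5.0)   # depth-matched guesses
knuckle, tip = enforce_distance(knuckle, tip, rest_length=4.0)
print(knuckle, tip)   # pulled together to the finger segment's length
```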
20120327090 | Methods and Apparatuses for Facilitating Skeletal Animation - Methods and apparatuses for facilitating skeletal animation are provided. A method may include determining a holistic motion path for a skeletal animation. The method may further include determining, independently of the determination of the holistic motion path, a limb animation for the skeletal animation based at least in part upon a plurality of skeletal key frames. The method may additionally include generating the skeletal animation by correlating the holistic motion path with the limb animation. Corresponding apparatuses are also provided. | 12-27-2012 |
20120327091 | Gestural Messages in Social Phonebook - A method and a system are offered to enable communicating a status of a user ( | 12-27-2012 |
20130002682 | ELECTRONIC DEVICE WITH FUNCTION OF MAPPING HUMAN MOVEMENT TO DIGITAL MODEL AND METHOD THEREOF - An electronic device includes a human movement data receiving module, a digital model providing module, a human movement mapping module, and an animated image displaying module. The human movement data receiving module receives human movement data produced by gyroscopes which are attached to different parts of a human. Each gyroscope is assigned an identifier. The digital model providing module provides a digital model whose shape is similar to the shape of the human. The digital model includes a number of portions, each associated with the human movement data of one of the gyroscopes. The human movement mapping module reads the identifiers to determine which gyroscope produced each set of human movement data, and maps the data onto the portions of the digital model associated with the determined gyroscopes to form an animated image. The animated image displaying module displays the animated image on the display unit. | 01-03-2013 |
20130002683 | METHOD, APPARATUS AND CLIENT DEVICE FOR DISPLAYING EXPRESSION INFORMATION - A method for displaying expression information is disclosed, which is applied to an instant messaging system and which includes following steps: a client receives or sends expression information through a chat window; the client transforms attributes of the chat window and an expression image corresponding to the expression information according to animation parameters corresponding to the expression information, and displays the animation effect obtained through the attribute transformation. An apparatus applying the above method and a client device are also provided. By using the solution, the interactivity with the expression information display can be improved. | 01-03-2013 |
20130002684 | METHODS AND APPARATUS TO DRAW ANIMATIONS - Methods and apparatus to draw animations are disclosed. An example method includes writing a first frame of information into a first buffer, writing a second frame of information into a second buffer, displaying the first frame of information, displaying the second frame of information, and updating the first buffer using a difference between the first frame of information and a third frame of information. | 01-03-2013 |
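A minimal sketch of the described two-buffer scheme follows: the first two frames fill the buffers, and a later frame reaches the first buffer as a difference against the frame that buffer still holds. Frames are flattened pixel lists purely for brevity.

```python
# Minimal sketch of the two-buffer scheme described above: frames are
# displayed alternately, and the back buffer is brought up to date by
# applying only the difference between the frame it holds and the next.

def diff(old_frame, new_frame):
    """Indices and values that changed between two frames."""
    return [(i, v) for i, (u, v) in enumerate(zip(old_frame, new_frame)) if u != v]

def apply_diff(buffer, delta):
    for i, v in delta:
        buffer[i] = v

frame1 = [0, 0, 0, 0]
frame2 = [0, 1, 0, 0]
frame3 = [0, 1, 2, 0]

buf_a, buf_b = list(frame1), list(frame2)   # write first two frames
# display buf_a, then buf_b ...
apply_diff(buf_a, diff(frame1, frame3))     # update first buffer via delta
print(buf_a)                                # -> [0, 1, 2, 0]
```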
20130009963 | GRAPHICAL DISPLAY OF DATA WITH ANIMATION - An animated graphic transition is displayed to represent a data difference between data sets. A plurality of data sets is provided, and a user is presented with a plurality of options that includes selection of one or more data sets. A user selection from the plurality of options is detected. Displayed are a first graphic element that represents data in one data set and a second graphic element that represents data in another data set. An animated graphic transition is displayed in conjunction with the first and second graphic elements to represent a data difference between the selected data sets. | 01-10-2013 |
20130009964 | METHODS AND APPARATUS TO PERFORM ANIMATION SMOOTHING - Methods and apparatus to perform animation smoothing are disclosed. An example method includes determining an estimated drawing time associated with each of a plurality of frames of an animation, calculating a metric based on the estimated drawing time associated with each of the plurality of frames, and updating an assumed frame time based on the metric. | 01-10-2013 |
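The loop above (estimate per-frame drawing times, reduce them to a metric, update the assumed frame time) might look like the following sketch, where the choice of a percentile metric and the blending factor are assumptions.

```python
# Hedged sketch of the smoothing loop described above: fold per-frame
# drawing-time estimates into a single metric (here a percentile, an
# assumption), then blend it into the assumed frame time that the
# animation is scheduled against.

def update_assumed_frame_time(draw_times_ms, current_assumed_ms,
                              percentile=0.75, blend=0.5):
    ordered = sorted(draw_times_ms)
    idx = min(int(len(ordered) * percentile), len(ordered) - 1)
    metric = ordered[idx]                  # robust "typical" draw time
    return blend * current_assumed_ms + (1.0 - blend) * metric

times = [14.0, 15.0, 16.0, 35.0, 15.5]     # one slow outlier frame
assumed = 16.7
for _ in range(3):
    assumed = update_assumed_frame_time(times, assumed)
    print(round(assumed, 2))               # converges toward ~16 ms
```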
20130021347 | SYSTEMS AND METHODS FOR FINANCIAL PLANNING USING ANIMATION - CiFiCo (Cinematic Financial Concepts) simplifies finance concepts by taking information and “cinematizing” it into fun, simple, engaging, moving visual representations (aka “movies”) accompanied by sound and touch control. Movies contain various assets, incomes, and insurance, as well as intergenerational timelines. CiFiCo can demonstrate the impact of asset accumulation, distribution, taxes, insurance, investments, intergenerational transfers, and other concepts. The tool allows individuals to gain a unique perspective on how the financial decisions they make (past, present and future) can impact their overall financial picture (movie). CiFiCo can illustrate and factor for contributions and distributions, as well as risks or attacks that may draw against one's financial stability (e.g., death, disabilities, long term care costs, lawsuits, natural disasters, market volatility, etc.). The application can illustrate a single financial concept, compare several financial strategies, or portray a fully integrated, multi-generational, financial plan. | 01-24-2013 |
20130021348 | SYSTEMS AND METHODS FOR ANIMATION RECOMMENDATIONS - Systems and methods for generating recommendations for animations to apply to animate 3D characters in accordance with embodiments of the invention are disclosed. One embodiment includes an animation server and a database containing metadata describing a plurality of animations and the compatibility of ordered pairs of the described animations. In addition, the animation server is configured to receive requests for animation recommendations identifying a first animation, generate a recommendation of at least one animation described in the database based upon the first animation, receive a selection of an animation described in the database, and concatenate at least the first animation and the selected animation. | 01-24-2013 |
20130027407 | FLUID DYNAMICS FRAMEWORK FOR ANIMATED SPECIAL EFFECTS - An animated special effect is modeled using a fluid dynamics framework system. The system accepts volumetric data as input. Input volumetric data may represent the initial state of an animated special effect. Input volumetric data may also represent sources, sinks, external forces, and/or other influences on the animated special effect. In addition, the system accepts input parameters related to fluid dynamics modeling. The input volumes and parameters are applied to the incompressible Navier-Stokes equations as modifications to the initial state of the animated special effect, as modifications to the forcing term of a pressure equation, or in the computations of other types of forces that influence the solution. The input volumetric data may be composited with other volumetric data using a scalar blending field. The solution of the incompressible Navier-Stokes equations models the motion of the animated special effect. | 01-31-2013 |
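Two of the ideas named above, compositing input volumes through a scalar blending field and modifying the forcing term of the pressure equation, can be sketched compactly with NumPy. Grid shapes, field names, and the sign convention are assumptions for illustration.

```python
# Compact sketch of two ideas named above: compositing an input volume
# with existing state through a scalar blending field, and folding a
# source volume into the forcing term of the pressure solve.

import numpy as np

def composite(state, input_volume, blend):
    """Per-voxel lerp: blend == 1 takes the input volume entirely."""
    return (1.0 - blend) * state + blend * input_volume

def pressure_forcing(divergence, source_volume, strength=1.0):
    """Modify the forcing term of the pressure equation with a source."""
    return divergence - strength * source_volume

shape = (8, 8, 8)
density = np.zeros(shape)
puff = np.random.rand(*shape)                        # incoming volumetric data
mask = np.zeros(shape)
mask[2:6, 2:6, 2:6] = 0.5                            # scalar blending field

density = composite(density, puff, mask)
rhs = pressure_forcing(np.zeros(shape), source_volume=mask)
print(density.mean(), rhs.min())
```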
20130027408 | Systems and Methods for Webpage Adaptive Rendering - This disclosure describes systems, methods, and apparatus for rendering animated images on a webpage. In particular, animated images that are visible are rendered as animations, whereas animated images that are not visible, those that can only be seen by scrolling the webpage, are rendered as a single static image until the webpage is scrolled such that these animated images are visible. At that point they can be rendered as animations. | 01-31-2013 |
20130027409 | SYSTEMS AND METHODS FOR RESOURCE PLANNING USING ANIMATION - A system and method of presenting resource information for an entity includes receiving input data associated with the resource information of the entity, generating, by a computer, an animated representation of the resource information along one or more determined timelines employing a plurality of graphical characters based on the input data and displaying the animated representation. The creation of one simple animated visual language may reduce the mass confusion typically associated with the relation of resources including financial and other concepts, saving time and money and better educating those seeking recommendations and advice regarding resource planning, including financial planning. | 01-31-2013 |
20130033499 | STATIONARY OR MOBILE TERMINAL CONTROLLED BY A POINTING OR INPUT PERIPHERAL - A stationary or mobile terminal controlled by a pointing or input peripheral device is presented. The invention pertains to the field of man-machine interfaces (MMI) applied to digital reading. There is provided a stationary or mobile terminal that is capable of reproducing, when used, the sensation of reading paper on a screen, of developing novel modes of reading, and of enabling press groups to render the publications thereof paperless while doing away with the material and technical limitations of various reading terminals. | 02-07-2013 |
20130038613 | METHOD AND APPARATUS FOR GENERATING AND PLAYING ANIMATED MESSAGE - Methods and apparatus are provided for generating an animated message. Input objects in an image of the animated message are recognized, and input information, including information about an input time and input coordinates for the input objects, is extracted. Playback information, including information about a playback order of the input objects, is set. The image is displayed in a predetermined handwriting region of the animated message. An encoding region, which is allocated in a predetermined portion of the animated message and in which the input information and the playback information are stored, is divided into blocks having a predetermined size. Display information of the encoding region is generated by mapping the input information and the playback information to the blocks in the encoding region. An animated message including the predetermined handwriting region and the encoding region is generated. The generated animated message is transmitted. | 02-14-2013 |
20130044115 | DISPLAY DEVICE, DISPLAY CONTROL METHOD, PROGRAM, AND COMPUTER READABLE RECORDING MEDIUM - To provide a display device capable of displaying a trajectory of a specific portion of a program-controlled control target device regardless of whether the control program is a simple sequential execution type or a situation adaptive type. A PC ( | 02-21-2013 |
20130044116 | SYSTEM AND METHOD FOR CONTROLLING ANIMATION BY TAGGING OBJECTS WITHIN A GAME ENVIRONMENT - A game developer can “tag” an item in the game environment. When an animated character walks near the “tagged” item, the animation engine can cause the character's head to turn toward the item, and mathematically computes what needs to be done in order to make the action look real and normal. The tag can also be modified to elicit an emotional response from the character. For example, a tagged enemy can cause fear, while a tagged inanimate object may cause only indifference or indifferent interest. | 02-21-2013 |
20130057555 | Transition Animation Methods and Systems - An exemplary method includes a transition animation system detecting a screen size of a display screen associated with a computing device executing an application, automatically generating, based on the detected screen size, a plurality of animation step values each corresponding to a different animation step included in a plurality of animation steps that are to be involved in an animation of a transition of a user interface associated with the application into the display screen, and directing the computing device to perform the plurality of animation steps in accordance with the generated animation step values. Corresponding methods and systems are also disclosed. | 03-07-2013 |
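A sketch of deriving step values from a detected screen size follows; the step list and the eased split of the travel distance are invented for illustration, the point being only that every value scales with the display.

```python
# Illustrative sketch of generating per-step animation values from a
# detected screen size. The step names and the eased distribution are
# assumptions, not the patent's scheme.

def animation_step_values(screen_width, steps=("slide", "settle", "bounce")):
    values = {}
    remaining = float(screen_width)
    for i, step in enumerate(steps):
        # Each successive step covers a smaller, eased share of the travel;
        # the final step consumes whatever distance remains.
        share = remaining * (0.7 if i < len(steps) - 1 else 1.0)
        values[step] = round(share, 1)
        remaining -= share
    return values

print(animation_step_values(320))    # phone-sized display
print(animation_step_values(1920))   # desktop display
```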
20130063443 | Tile Cache - Tile cache techniques are described. In at least some embodiments, a tile cache is maintained that stores tile content for a plurality of tiles. The tile content is ordered in the tile cache to match a visual order of tiles in a graphical user interface. When tiles are moved (e.g., panned and/or scrolled) in the graphical user interface, tile content can be retrieved from the tile cache and displayed. | 03-14-2013 |
20130063444 | Aligning Script Animations with Display Refresh - Various embodiments align callbacks to a scripting component (callbacks that enable the scripting component to update animation) with a system's refresh notifications. Specifically, an application program interface (API) is provided and implemented in a manner that generates and issues a callback to the scripting component when the system receives a refresh notification. This provides the scripting component with a desirable amount of time to run before the next refresh notification. | 03-14-2013 |
20130063445 | Composition System Thread - Composition system thread techniques are described. In one or more implementations, a composition system may be configured to compose visual elements received from applications on a thread that is executed separately from a user interface thread of the applications. As such, the composition system may execute asynchronously from a user interface thread of the application. Additionally, the composition system may be configured to expose one or more application programming interfaces (APIs) that are accessible to the applications. The APIs may be used for constructing a tree of objects representing the operations that are to be performed to compose one or more bitmaps. Further, these operations may be controlled by several API visual properties to allow applications to animate content within their windows and use disparate technologies to rasterize such content. | 03-14-2013 |
20130063446 | Scenario Based Animation Library - Various embodiments provide a library of animation descriptions based upon various common user interface scenarios. Application developers can query the animation library for animations based on a user's interaction with the user interface. The library defines usage of transformation primitives, storyboarding of the transformation primitives and associated timing functions that are used to create particular animations. These definitions can be provided to a calling application so that the application can implement an animation that utilizes the storyboarded transformation primitives. | 03-14-2013 |
20130063447 | INFORMATION PROCESSING DEVICE, IMAGE TRANSMISSION METHOD AND IMAGE TRANSMISSION PROGRAM - An information processing device includes a memory; and a processor coupled to the memory, wherein the processor executes a process comprising: drawing an image representing a processing result based on software into an image memory; identifying a high-frequency change area; animating an image of the high-frequency change area; adding time information to an image of a change area having a change or to the animated image of the high-frequency change area, and transmitting the image to a terminal device; receiving the time information from the terminal device; determining, based on a difference between the received time information and a reception time of the time information, whether image drawing delay occurs; and starting an animation of the change area when the image drawing delay occurs and the animation is not being executed, or changing an image transmission interval when the image drawing delay occurs and the animation is being executed. | 03-14-2013 |
20130063448 | Aligning Script Animations with Display Refresh - Various embodiments align callbacks to a scripting component (callbacks that enable the scripting component to update animation) with a system's refresh notifications. Specifically, an application program interface (API) is provided and implemented in a manner that generates and issues a callback to the scripting component when the system receives a refresh notification. This provides the scripting component with a desirable amount of time to run before the next refresh notification. | 03-14-2013 |
20130069954 | Method of Transforming Time-Based Drawings and Apparatus for Performing the Same - A method performed by data processing apparatus, the method including: rendering a first object on a display, the first object having first location coordinates and first temporal coordinates, in which the first location coordinates define a first drawing and each first location coordinate is associated with a respective temporal coordinate; receiving input defining a second object, the second object having second location coordinates and second temporal coordinates, in which the second location coordinates define a second drawing and each second location coordinate is associated with a respective second temporal coordinate; applying a transformation to the first location coordinate(s) responsive to receiving each second location coordinate, based on a most recently received second location coordinate and generating an animation by rendering the transformed first location coordinate(s) on the display according to the respective first temporal coordinates. | 03-21-2013 |
20130069955 | Hierarchical Representation of Time - A method performed by a data processing apparatus, in which the method includes selecting an object in which the object represents input defining a drawing, each drawing comprising location coordinates and temporal coordinates, in which each location coordinate is associated with a respective temporal coordinate; associating the object with a respective clock in a hierarchy of clocks, each clock in the hierarchy having a respective rate of progression that is coupled to the rate of progression of one or more parent clocks in the hierarchy; and generating an animation by drawing the location coordinates according to the rate of progression of the clock associated with the object. Other embodiments of this aspect include corresponding computing platforms and computer program products. | 03-21-2013 |
20130069956 | Transforming Time-Based Drawings - A method performed by a data processing apparatus, in which the method includes determining multiple first temporal coordinates, while receiving input defining a drawing from an input device, the drawing including multiple first object location coordinates received during a time period, in which each first temporal coordinate is based on a time when a respective one of the first object coordinates was received, receiving an input defining an animation period, applying a transformation to the first temporal coordinates to provide multiple transformed temporal coordinates respectively corresponding to the first image location coordinates, and periodically generating, based on the animation period, an animation by drawing the first object location coordinates according to the respective transformed temporal coordinates. Other embodiments of this aspect include corresponding computing platforms and computer program products. | 03-21-2013 |
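The temporal transformation described above can be illustrated by remapping captured timestamps onto a requested animation period, as in this sketch (the (x, y, t) tuple layout is an assumption):

```python
# Minimal sketch of remapping recorded temporal coordinates onto a chosen
# animation period, so a drawing captured over any stretch of time replays
# in exactly `period` seconds.

def transform_temporal(coords, period):
    """coords: list of (x, y, t) captured while drawing."""
    t0 = coords[0][2]
    span = coords[-1][2] - t0 or 1e-9
    return [(x, y, (t - t0) / span * period) for x, y, t in coords]

stroke = [(0, 0, 10.0), (5, 2, 10.8), (9, 7, 13.0)]   # drawn over 3 s
print(transform_temporal(stroke, period=1.0))          # replays in 1 s
```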
20130069957 | Method and Device for Playing Animation and Method and System for Displaying Animation Background - Embodiments of the present invention provide a method and device for playing an animation, belonging to a communication technology field. The method includes obtaining a first attribute value of an animation object at the current moment when an audio signal is detected, and determining a second attribute value and a first speed value corresponding to the audio signal; taking the first attribute value and second attribute value respectively as a starting point and end point, and playing the animation object according to the first speed value; and stopping, when the audio signal stops, playing the animation object if the playing of the animation object has not ended. The device includes an audio starting animation playing module and an audio ending animation playing module. In embodiments, the playing of the animation is achieved by detecting the audio signal and playing the animation object in combination with the audio signal, which achieves the animation effect and enriches the display effect. | 03-21-2013 |
20130076755 | GENERAL REPRESENTATIONS FOR DATA FRAME ANIMATIONS - Multiple data frames can be processed to produce a general animation representation that represents the data frames. The general animation representation may be in a general language that is suitable for being translated into any of multiple different specific languages. The general animation representation can be translated into a specific animation representation that is in a specific language suitable for processing by a rendering environment. The specific animation representation can be sent to the rendering environment, where the specific animation representation can be rendered on a display device. | 03-28-2013 |
20130076756 | DATA FRAME ANIMATION - Data can be received from a first data source that is a first type of data source, and data can be received from a second data source that is a second type of data source. Data frames can be processed to produce an animation representation that represents the data frames. The data frames can include the data from the first data source and the data from the second data source. The animation representation can include one or more key animation frames that each defines a full graphical representation of one of the data frames. The animation representation can also include one or more delta animation frames that each defines one or more graphical updates without defining a full graphical representation of one of the data frames. The animation representation may be sent to a rendering environment for rendering. | 03-28-2013 |
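A toy encoding of key and delta animation frames, as distinguished above, might look like this sketch; the dictionary-of-properties frame format and the key interval are assumptions.

```python
# Hedged sketch of the key/delta representation described above: a key
# animation frame carries a full graphical state, while delta frames carry
# only the updates relative to the previous frame.

def encode(frames, key_interval=3):
    encoded = []
    for i, frame in enumerate(frames):
        if i % key_interval == 0:
            encoded.append(("key", dict(frame)))        # full representation
        else:
            prev = frames[i - 1]
            updates = {k: v for k, v in frame.items() if prev.get(k) != v}
            encoded.append(("delta", updates))          # updates only
    return encoded

frames = [
    {"bar_a": 10, "bar_b": 20},
    {"bar_a": 12, "bar_b": 20},   # only bar_a changed
    {"bar_a": 12, "bar_b": 25},   # only bar_b changed
    {"bar_a": 14, "bar_b": 25},
]
for kind, payload in encode(frames):
    print(kind, payload)
```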
20130076757 | PORTIONING DATA FRAME ANIMATION REPRESENTATIONS - Multiple portions of a set of data frames can be processed to produce portions of an animation representation. Each of the portions of the set of data frames can be processed to produce a corresponding portion of the animation representation that represents one or more changes during a portion of an animation sequence in an animation of the set of data frames. The animation representation can be sent to a rendering environment. Sending the animation representation to the rendering environment can include sending each of the portions of the animation representation in a separate batch. Each portion of the animation representation can be formatted to be rendered before receiving all portions of the animation representation at the rendering environment. | 03-28-2013 |
20130076758 | Page Switching Method And Device - A page switching method and device are provided. The method includes: displaying a current message page; when a touch operation is detected, drawing a page-turning animation according to the touch operation and playing the page-turning animation; and when the touch operation stops, displaying an adjacent message page. | 03-28-2013 |
20130076759 | MULTI-LAYERED SLIDE TRANSITIONS - Architecture that enhances the visual experience of a slide presentation by animating slide content as “actors” in the same background “scene”. This is provided by multi-layered transitions between slides, where a slide is first separated into “layers” (e.g., with a level of transparency). Each layer can then be transitioned independently. All layers are composited together to accomplish the end effect. The layers can comprise one or more content layers, and a background layer. The background layer can further be separated into a background graphics layer and a background fill layer. The transition phase can include a transition effect such as a fade, a wipe, a dissolve effect, and other desired effects. To provide the continuity and uniformity of presentation the content on the same background scene, a transition effect is not applied to the background layer. | 03-28-2013 |
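The layer-wise transition planning described above can be sketched as follows: each content layer is assigned its own effect while the background layer passes through untouched, preserving the shared scene. Layer records and effect names are illustrative.

```python
# Sketch of the layered transition idea above: the slide is split into
# content and background layers, each content layer gets its own effect,
# and the background layer is carried over with no effect so the "scene"
# stays continuous across the transition.

def plan_transition(slide_layers, effects=("fade", "wipe", "dissolve")):
    plan = []
    content_index = 0
    for layer in slide_layers:
        if layer["kind"] == "background":
            plan.append((layer["name"], None))   # no effect on the scene
        else:
            effect = effects[content_index % len(effects)]
            plan.append((layer["name"], effect))
            content_index += 1
    return plan

slide = [
    {"name": "bg_fill", "kind": "background"},
    {"name": "title", "kind": "content"},
    {"name": "chart", "kind": "content"},
]
print(plan_transition(slide))
```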
20130083034 | ANIMATION ENGINE DECOUPLED FROM ANIMATION CATALOG - Embodiments provide animations with an animation engine decoupled from an animation catalog storing animation definitions. A computing device accesses at least one of the animation definitions corresponding to at least one markup language (ML) element to be animated. Final attribute values associated with the ML element are identified (e.g., provided by the caller or defined in the animation definition). The computing device animates the ML element using the accessed animation definition and the identified final attribute values. In some embodiments, the animation engine uses a single timer to animate a plurality of hypertext markup language (HTML) elements displayed by a browser. | 04-04-2013 |
20130083035 | GRAPHICAL SYMBOL ANIMATION WITH EVALUATIONS FOR BUILDING AUTOMATION GRAPHICS - Automation systems, methods, and mediums. A method includes identifying a value for a data point associated with a device in a building. The value is received from a management system operably connected to the device. The method includes mapping the value for the data point to a graphical representation of the value for the data point. The method includes generating a display comprising a graphic for the building and a symbol representing the device. The method includes displaying the graphical representation of the value for the data point in association with the symbol representing the device. Additionally, the method includes modifying the graphical representation of the value based on a change in the value in response to identifying the change in the value from the management system. | 04-04-2013 |
20130083036 | METHOD OF RENDERING A SET OF CORRELATED EVENTS AND COMPUTERIZED SYSTEM THEREOF - An automated rendering system for creating a screenplay or a transcript is provided that includes an audio/visual (A/V) content compositor and renderer for composing audio/visual (A/V) content made up of at least clips and animations, and at least one of: background music, still images, or commentary phrases. A transcript builder is provided to build a transcript. The transcript builder utilizes data in various forms including user situational inputs, predefined rules and scripts, game action text, logical determinations and intelligent assumptions to generate a transcript to produce the A/V content of the screenplay or the transcript. A method is also provided for rendering an event that includes receiving data with a request from a user to generate an audio/visual (A/V) presentation based on the event using the system. Ancillary data input is provided as a set of rules that influence or customize the outcome of the screenplay. | 04-04-2013 |
20130088497 | MULTIPOINT OFFSET SAMPLING DEFORMATION - A skin deformation system for use in computer animation is disclosed. The skin deformation system accesses the skeleton structure of a computer generated character, and accesses a user's identification of features of the skeleton structure that may affect a skin deformation. The system also accesses the user's identification of a weighting strategy. Using the identified weighting strategy and identified features of the skeleton structure, the skin deformation system determines the degree to which each feature identified by the user may influence the deformation of a skin of the computer generated character. The skin deformation system may incorporate secondary operations including bulge, slide, scale and twist into the deformation of a skin. Information relating to a deformed skin may be stored by the skin deformation system so that the information may be used to produce a visual image for a viewer. | 04-11-2013 |
20130093774 | CLOUD-BASED ANIMATION TOOL - A cloud-based animation tool may improve the graphics capabilities of low-cost devices. A web service may allow a user to submit a text string from the device for animation by the cloud-based tool. The string may be parsed by a natural language processor into components such as nouns and verbs. The parsed words may be cross-referenced to content through a reference database, including instructions for verbs and images for nouns. An animation may be created from the images corresponding to the nouns and instructions corresponding to the verbs. The animation may be rendered for display and may be transmitted to the user through the web service. The cloud-based animation tool may improve access to educational material for students accessing content through low-cost devices made available through the one-computer-per-child program. | 04-18-2013 |
20130100140 | HUMAN BODY AND FACIAL ANIMATION SYSTEMS WITH 3D CAMERA AND METHOD THEREOF - An animation system integrating face and body tracking for puppet and avatar animation by using a 3D camera is provided. The 3D camera human body and facial animation system includes a 3D camera having an image sensor and a depth sensor with the same fixed focal length and image resolution, equal FOV and an aligned image center. The system software of the animation system provides on-line tracking and off-line learning functions. An algorithm of object detection for the on-line tracking function includes detecting and assessing a distance of an object; depending upon the distance of the object, the object can be identified as a face, body, or face/hand so as to perform face tracking, body detection, or ‘face and hand gesture’ detection procedures. The animation system can also have a zoom lens, which includes an image sensor with an adjustable focal length f′ and a depth sensor with a fixed focal length f. | 04-25-2013 |
20130100141 | SYSTEM AND METHOD OF PRODUCING AN ANIMATED PERFORMANCE UTILIZING MULTIPLE CAMERAS - A real-time method for producing an animated performance is disclosed. The real-time method involves receiving animation data, the animation data used to animate a computer generated character. The animation data may comprise motion capture data, or puppetry data, or a combination thereof. A computer generated animated character is rendered in real-time with receiving the animation data. A body movement of the computer generated character may be based on the motion capture data, and a head and a facial movement are based on the puppetry data. A first view of the computer generated animated character is created from a first reference point. A second view of the computer generated animated character is created from a second reference point that is distinct from the first reference point. One or more of the first and second views of the computer generated animated character are displayed in real-time with receiving the animation data. | 04-25-2013 |
20130100142 | INTERFACING WITH A SPATIAL VIRTUAL COMMUNICATION ENVIRONMENT - A spatial layout of zones of a virtual area in a network communication environment is displayed. A user can have a respective presence in each of one or more of the zones. Navigation controls and interaction controls are presented. The navigation controls enable the user to specify where to establish a presence in the virtual area. The interaction controls enable the user to manage interactions with one or more other communicants in the network communication environment. A respective presence of the user is established in each of one or more of the zones in response to input received via the navigation controls. Respective graphical representations of the communicants are depicted in each of the zones where the communicants respectively have presence. | 04-25-2013 |
20130106866 | LAYERING ANIMATION PROPERTIES IN HIGHER LEVEL ANIMATIONS | 05-02-2013 |
20130106867 | METHOD AND APPARATUS FOR GENERATING AN AVATAR | 05-02-2013 |
20130113807 | User Interface for Controlling Animation of an Object - A user can control the animation of an object via an interface that includes a control area and a user-manipulable control element. In one embodiment, the control area includes an ellipse, and the user-manipulable control element includes an arrow. In yet another embodiment, the control area includes an ellipse, and the user-manipulable control element includes two points on the circumference of the ellipse. In yet another embodiment, the control area includes a first rectangle, and the user-manipulable control element includes a second rectangle. In yet another embodiment, the user-manipulable control element includes two triangular regions, and the control area includes an area separating the two regions. | 05-09-2013 |
20130120398 | INPUT DEVICE AND METHOD FOR AN ELECTRONIC APPARATUS - The present specification teaches an input device and method for an electronic apparatus. The input device can be based on one or more force sensitive input devices, such as force sensitive resistors. The electronic apparatus includes an output device such as a display. A processor is configured to receive input from the input device and to control the display or other output device. In certain implementations, the display is controlled to generate a first graphical object that is associated with an instruction. The processor is configured to generate a second graphical object in response to an input received from the force sensitive input device that corresponds with the instruction. | 05-16-2013 |
20130120399 | METHOD, APPARATUS, COMPUTER PROGRAM AND USER INTERFACE - A method, apparatus, computer program and user interface wherein the method comprises displaying a still image on a display; detecting user selection of a portion of the still image; and in response to the detection of the user selection, replacing the selected portion of the image with a moving image and maintaining the rest of the still image, which has not been selected, as a still image. | 05-16-2013 |
20130120400 | ANIMATION CREATION AND MANAGEMENT IN PRESENTATION APPLICATION PROGRAMS - An animation timeline is analyzed to determine one or more discrete states. Each discrete state includes one or more animation effects. The discrete states represent scenes of a slide in a slide presentation. The concept of scenes allows a user to view a timeline of scenes, open a scene, and directly manipulate objects in the scene to author animations. The animations can include motion path animation effects, which can be directly manipulated utilizing a motion path tweening method. To aid in direct manipulation of a motion path of an object, a ghost version of the object can be shown to communicate to a user the position of the object after a motion path animation effect that includes the motion path is performed. The ghost version may also be used to show a start position when a start point is manipulated. | 05-16-2013 |
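Motion path tweening with a ghost preview, as described in this abstract (and its companion entry below), can be illustrated by sampling a polyline path and placing a translucent copy at the endpoint; linear interpolation between path points is an assumed simplification.

```python
# Minimal sketch of motion path tweening with a ghost preview: the path is
# sampled to place the object during the effect, and a ghost is drawn at
# the path's end so the author can see where the effect leaves the object.

def point_on_path(path, t):
    """Sample a polyline path at normalized t in [0, 1]."""
    segs = len(path) - 1
    f = min(t * segs, segs - 1e-9)
    i, local = int(f), f - int(f)
    (x0, y0), (x1, y1) = path[i], path[i + 1]
    return (x0 + (x1 - x0) * local, y0 + (y1 - y0) * local)

path = [(0, 0), (50, 80), (120, 80)]
ghost = path[-1]                       # ghost shows the end position
print("ghost at", ghost)
for t in (0.0, 0.5, 1.0):
    print(t, point_on_path(path, t))
```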
20130120401 | Animation of Computer-Generated Display Components of User Interfaces and Content Items - Animation of computer-generated display components of user interfaces and content items is provided. An animation application or engine creates images of individual display components (e.g., bitmap images) and places those images on animation layers. Animation behaviors may be specified for the layers to indicate how the layers and associated display component images animate or behave when their properties change (e.g., a movement of an object contained on a layer), as well as, to change properties on layers in order to trigger animations (e.g., an animation that causes an object to rotate). In order to achieve high animation frame rates, the animation application may utilize three processing threads, including a user interface thread, a compositor thread and a rendering thread. Display behavior may be optimized and controlled by utilizing a declarative markup language, such as the Extensible Markup Language, for defining display behavior functionality and properties. | 05-16-2013 |
20130120402 | Discarding Idle Graphical Display Components from Memory and Processing - Memory storage and processing for idle computer-generated graphical display components are discarded for conserving memory capacity, processing resources and power consumption. If a computer-generated display frame goes idle for a prescribed duration, for example, 30 seconds, wherein no user action or processor action is performed on the idle display frame, stored data representing the idle display frame is discarded from memory and processing for the idle display component is ceased, thus conserving memory space, processing resources and power consumption (e.g., battery power). If the discarded display frame becomes active again, its discarded resources may be recreated. Alternatively, an idle display component may be passed to a separate application and may be reclaimed by a requiring application when the idle display component becomes active again. | 05-16-2013 |
20130120403 | ANIMATION CREATION AND MANAGEMENT IN PRESENTATION APPLICATION PROGRAMS - An animation timeline is analyzed to determine one or more discrete states. Each discrete state includes one or more animation effects. The discrete states represent scenes of a slide in a slide presentation. The concept of scenes allows a user to view a timeline of scenes, open a scene, and directly manipulate objects in the scene to author animations. The animations can include motion path animation effects, which can be directly manipulated utilizing a motion path tweening method. To aid in direct manipulation of a motion path of an object, a ghost version of the object can be shown to communicate to a user the position of the object after a motion path animation effect that includes the motion path is performed. The ghost version may also be used to show a start position when a start point is manipulated. | 05-16-2013 |
20130127873 | System and Method for Robust Physically-Plausible Character Animation - An interactive application may include a quasi-physical simulator configured to determine the configuration of animated characters as they move within the application and are acted on by external forces. The simulator may work together with a parameterized animation module that synthesizes and provides reference poses for the animation from example motion clips that it has segmented and parameterized. The simulator may receive input defining a trajectory for an animated character and input representing one or more external forces acting on the character, and may perform a quasi-physical simulation to determine a pose for the character in the current animation frame in reaction to the external forces. The simulator may enforce a goal constraint that the animated character follows the trajectory, e.g., by adding a non-physical force to the simulation, the magnitude of which may be dependent on a torque objective that attempts to minimize the use of such non-physical forces. | 05-23-2013 |
20130127874 | Physical Simulation Tools For Two-Dimensional (2D) Drawing Environments - Methods and apparatus for simulating various physical effects on 2D objects in two-dimensional (2D) drawing environments. A set of 2D physical simulation tools may be provided for editing and enhancing 2D art based on 2D physical simulations. Each 2D physical simulation tool may be associated with a particular physical simulator that may be applied to 2D objects in an image using simple and intuitive gestures applied with the respective tool. In addition, predefined materials may be specified for a 2D object to which a 2D physical simulation tool may be applied. The 2D physical simulation tools may be used to simulate physical effects in static 2D images and to generate 2D animations of the physical effects. Computing technologies may be leveraged so that the physical simulations may be executed in real-time or near-real-time as the tools are applied, thus providing immediate feedback and realistic visual effects. | 05-23-2013 |
20130127875 | Value Templates in Animation Timelines - Methods and systems for animation timelines using value templates are disclosed. In some embodiments, a method includes generating a data structure corresponding to a graphical representation of a timeline and creating an animation of an element along the timeline, where the animation modifies a property of the element according to a function, and where the function uses a combination of a string with a numerical value to render the animation. The method also includes adding a command corresponding to the animation into the data structure, where the command is configured to return the numerical value, and where the data structure includes a value template that produces the combination of the string with the numerical value. The method further includes passing the produced combination of the string with the numerical value to the function and executing the function to animate the element. | 05-23-2013 |
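A value template in the sense above recombines a command's bare numeric return value with a unit string before the animated property is set. The "{value}px" convention in this sketch is an assumption.

```python
# Hedged sketch of a value template: the timeline command returns a bare
# number, and the template recombines it with its unit string before the
# property setter runs.

def make_value_template(template):
    return lambda number: template.format(value=number)

def animate_left(element, numeric_value, value_template):
    element["style"]["left"] = value_template(numeric_value)   # e.g. "120px"

px = make_value_template("{value}px")
element = {"style": {}}
for v in (0, 60, 120):            # values a timeline command might return
    animate_left(element, v, px)
    print(element["style"]["left"])
```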
20130127876 | GRAPHIC DISPLAY APPARATUS - A graphic display apparatus within an automotive vehicle wherein the display apparatus includes at least two display units operable to display graphics and/or video, a wire connector connecting the at least two display units together, and a control system connected to the wire connector wherein the control system is operable to play video or graphics on the at least two display units. A method is provided to allow the system to be universal for both audio and navigation systems, wherein each system calls for a predetermined delay of the animation. The display units are in communication with one another, providing for a coordinated or synchronized display of graphics. If, by way of example, a firework explodes on the main display screen, the remnants of that single firework will be exploded onto the secondary display screens. | 05-23-2013 |
20130135315 | METHOD, SYSTEM AND SOFTWARE PROGRAM FOR SHOOTING AND EDITING A FILM COMPRISING AT LEAST ONE IMAGE OF A 3D COMPUTER-GENERATED ANIMATION - Method for shooting and editing a film comprising at least one image of a 3D computer-generated animation created by a cinematographic software according to mathematical model of elements that are part of the animation and according to a definition of situations and actions occurring for said elements as a function of time, said method being characterized by comprising the following: computing of alternative suggested viewpoints by the cinematographic software for an image of the 3D computer-generated animation corresponding to a particular time point according to said definition; and instructing for displaying on a display interface, all together, images corresponding to said computed alternative suggested viewpoints of the 3D computer-generated animation at that particular time point. | 05-30-2013 |
20130141439 | METHOD AND SYSTEM FOR GENERATING ANIMATED ART EFFECTS ON STATIC IMAGES - A method and system for generating animated art effects while viewing static images, where the appearance of effects depends upon the content of an image and parameters of accompanying sound, is provided. The method of generating animated art effects on static images, based on the static image and accompanying sound feature analysis, includes storing an original static image; detecting areas of interest on the original static image and computing features of the areas of interest; creating visual objects of art effects according to the features detected in the areas of interest; detecting features of an accompanying sound; modifying parameters of visual objects in accordance with the features of the accompanying sound; and generating a frame of an animation including the original static image with superimposed visual objects of art effects. | 06-06-2013 |
20130141440 | OPERATION SEQUENCE DISPLAY METHOD AND OPERATION SEQUENCE DISPLAY SYSTEM - Disclosed are an operation sequence display method and an operation sequence display system, wherein operation scenes to attach or remove one or a plurality of components are displayed by switching the scenes. In at least one operation scene, the attachment or removal target components are displayed in a different manner from other components by changing gray scales using a single color; marking displays for emphasizing operation portions of the target components or the moving directions of the target components in the screen are blinked at a constant interval; after the marking displays are blinked, the operations on the operation portions or the movements of the target components are displayed by animation; and displays regarding the operations on the operation portions or the movements of the target components are performed at a constant rhythm. | 06-06-2013 |
20130147810 | APPARATUS RESPONSIVE TO AT LEAST ZOOM-IN USER INPUT, A METHOD AND A COMPUTER PROGRAM - A method, apparatus, computer program and user interface wherein the method comprises displaying a still image on a display; detecting user selection of a portion of the still image; and in response to the detection of the user selection, replacing the selected portion of the image with a moving image and maintaining the rest of the still image, which has not been selected, as a still image. | 06-13-2013 |
20130147811 | SELECTION OF ANIMATION DATA FOR A DATA-DRIVEN MODEL - A set of animation data for an element in an animation is statistically sampled to obtain a common context. The common context is a subset of a plurality of frames of the set of animation data. Further, output of a data-driven model for the animation, which utilizes at least a subset of the common context, is compared with output of a computational model for the animation. The computational model has a first set of logic. The data-driven model has a second set of logic that has less logic than the first set of logic. In addition, an error between the computational model and the data-driven model is computed. | 06-13-2013 |
20130147812 | HEATING, VENTILATION AND AIR CONDITIONING SYSTEM USER INTERFACE HAVING PROPORTIONAL ANIMATION GRAPHICS AND METHOD OF OPERATION THEREOF - A user interface for use with an HVAC system, a method of providing service reminders on a single screen of a user interface of an HVAC system and an HVAC system incorporating the user interface or the method. In one embodiment, the user interface includes: (1) a display configured to provide information to a user, (2) a touchpad configured to accept input from the user and (3) a processor and memory coupled to the display and the touchpad and configured to drive the display, the display further configured to display proportional animation graphics corresponding to attributes of the HVAC system. | 06-13-2013 |
20130155071 | Document Collaboration Effects - Various features and processes related to document collaboration are disclosed. In some implementations, animations are presented when updating a local document display to reflect changes made to the document at a remote device. In some implementations, a user can selectively highlight changes made by collaborators in a document. In some implementations, a user can select an identifier associated with another user to display a portion of a document that includes the other user's cursor location. In some implementations, text in document chat sessions can be automatically converted into hyperlinks which, when selected, cause a document editor to perform an operation. | 06-20-2013 |
20130155072 | ELECTRONIC DEVICE AND METHOD FOR MANAGING FILES USING THE ELECTRONIC DEVICE - A method for managing files using an electronic device determines a target storage path in the electronic device in response to detecting a selection operation of the target storage path on a touch panel of the electronic device. A copy operation on a target file displayed on the touch panel is detected, while the selection operation is being implemented. The target file is stored at the target storage path, and an animated cartoon is output on the touch panel to represent a process of storing the target file to the target storage path. | 06-20-2013 |
20130162653 | Creating Animations - Animation creation is described, for example, to enable children to create, record and play back stories. In an embodiment, one or more children are able to create animation components such as characters and backgrounds using a multi-touch panel display together with an image capture device. For example, a graphical user interface is provided at the multi-touch panel display to enable the animation components to be edited. In an example, children narrate a story whilst manipulating animation components using the multi-touch display panel and the sound and visual display is recorded. In embodiments image analysis is carried out automatically and used to autonomously modify story components during a narration. In examples, various types of handheld view-finding frames are provided for use with the image capture device. In embodiments saved stories can be restored from memory and retold from any point with different manipulations and narration. | 06-27-2013 |
20130169647 | DISPLAYING PARTIAL LOGOS - A processor serves instructions that set a position of a banner having a shadow line and a position of an image of a partial logo having a line crossing at least part of the partial logo, wherein the position of the image of the partial logo is set based on the position of the banner, a dimension of the banner, and a position of the line crossing the partial logo. An image of the banner having the shadow line is retrieved and served. An image of the partial logo is retrieved and served. The rendered banner and partial logo display the partial logo such that the line crossing the partial logo is aligned with a shadow line on the banner. | 07-04-2013 |
20130169648 | CUMULATIVE MOVEMENT ANIMATIONS - Cumulative movement animation techniques are described. In one or more implementations, output of a first animation is initiated that involves a display of movement in a user interface of a computing device. An input is received by the computing device during the output of the first animation, the input configured to cause a second display of movement in the user interface. Responsive to the receipt of the input, a remaining portion of the movement of the first animation is output along with the movement of the second animation by the computing device. | 07-04-2013 |
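A minimal sketch of the cumulative-movement idea: the unfinished remainder of the first animation is carried into the target of the second, rather than being discarded. The function name cumulative_target and the one-dimensional scalar treatment are assumptions for illustration only.

```python
def cumulative_target(current_pos, first_target, second_delta):
    """Carry the unfinished portion of the first animation into the second.

    remaining = first_target - current_pos is movement still owed to the
    user; the new animation covers it plus the newly requested movement.
    """
    remaining = first_target - current_pos
    return current_pos + remaining + second_delta

# First scroll was heading to x=500 but had only reached x=320 when a
# second input (another +200 px flick) arrived.
print(cumulative_target(320.0, 500.0, 200.0))  # -> 700.0
```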
20130169649 | MOVEMENT ENDPOINT EXPOSURE - Movement endpoint exposure techniques are described. In one or more implementations, an input is received by a computing device to cause output of an animation involving movement in a user interface. Responsive to the receipt of the input, an endpoint is exposed to software of the computing device that is associated with the user interface, such as applications and controls. The endpoint references a particular location in the user interface at which the animation is calculated to end for the input. | 07-04-2013 |
20130187927 | Method and System for Automated Production of Audiovisual Animations - The present invention relates to a computer-implemented method for the automated production of an audiovisual animation, in particular a tutorial video, wherein the method comprises the following steps: | 07-25-2013 |
20130187928 | SIGNAGE DISPLAY SYSTEM AND PROCESS - A display apparatus and process are provided for displaying a static source image in a manner such that it is perceived as an animated sequence of images when viewed by an observer in relative motion to the apparatus. The source image is sliced or fractured to provide a plurality of image fractions of predetermined dimension. The fractions are redistributed in a predetermined sequence to provide an output image, which is placed in a preferably illuminated display apparatus provided with a mask. An observer in relative motion to the display apparatus sequentially views a predetermined selection of image fractions through the mask, which are perceived by the observer as a changing sequence of images. Applying the concepts of persistence of vision, the observer perceives the reconstructed imagery as live action animation, a traveling singular image or series of static images, or changing image sequences, from a plurality of lines of sight. | 07-25-2013 |
20130187929 | VISUAL REPRESENTATION EXPRESSION BASED ON PLAYER EXPRESSION - Using facial recognition and gesture/body posture recognition techniques, a system can naturally convey the emotions and attitudes of a user via the user's visual representation. Techniques may comprise customizing a visual representation of a user based on detectable characteristics, deducting a user's temperament from the detectable characteristics, and applying attributes indicative of the temperament to the visual representation in real time. Techniques may also comprise processing changes to the user's characteristics in the physical space and updating the visual representation in real time. For example, the system may track a user's facial expressions and body movements to identify a temperament and then apply attributes indicative of that temperament to the visual representation. Thus, a visual representation of a user, such as an avatar or fanciful character, can reflect the user's expressions and moods in real time. | 07-25-2013 |
20130187930 | METHOD AND SYSTEM FOR INTERACTIVE SIMULATION OF MATERIALS AND MODELS - A method and system for drawing, displaying, editing, animating, simulating and interacting with one or more virtual polygonal, spline, volumetric models, three-dimensional visual models or robotic models. The method and system provide flexible simulation, the ability to combine rigid and flexible simulation on plural portions of a model, and rendering of haptic forces and force-feedback to a user. | 07-25-2013 |
20130194278 | PORTABLE VIRTUAL CHARACTERS - Described herein are methods, systems, apparatuses and products for portable virtual characters. One aspect provides a method including: providing a virtual character on a first device, the virtual character having a plurality of attributes allowing for a plurality of versions of the virtual character; providing a device table listing at least one device having at least one attribute; transferring to a second device information to permit an instantiation of the virtual character on the second device, the instantiation of the virtual character on the second device including at least one attribute matching at least one attribute of the second device as determined from the device table; and receiving virtual character information from the second device related to the at least one attribute of the second device to permit updating of the virtual character on the first device. Other embodiments are disclosed. | 08-01-2013 |
20130194279 | OPTIMIZING GRAPH EVALUATION - A system for performing graphics processing is disclosed. A dependency graph comprising interconnected nodes is accessed. Each node has output attributes and the dependency graph receives input attributes. A first list is accessed, which includes a dirty status for each dirty output attribute of the dependency graph. A second list is accessed, which associates one of the input attributes with output attributes that are affected by the one input attribute. A third list is accessed, which associates one of the output attributes with output attributes that affect the one output attribute. An evaluation request for a requested output attribute is received. A set of output attributes are selected for evaluation based on being specified in the first list as dirty and being specified in the third list as associated with the requested output attribute. The set of output attributes are evaluated. | 08-01-2013 |
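The selection logic described above (a dirty list plus dependency lists) can be sketched as follows. The attribute names, the dictionary layout of the three lists, and the select_for_evaluation helper are hypothetical, and the traversal assumes the third list can be walked transitively from the requested output.

```python
# Hypothetical data; the abstract does not fix a concrete representation.
dirty = {"C", "D"}                       # first list: dirty output attributes
affected_by_input = {"in1": ["B", "C"]}  # second list: input -> affected outputs
affects_output = {                       # third list: output -> outputs it depends on
    "D": ["B", "C"],
    "C": ["A"],
    "B": [],
    "A": [],
}

def select_for_evaluation(requested):
    """Evaluate only attributes that are dirty AND feed the requested output."""
    needed, stack = set(), [requested]
    while stack:
        attr = stack.pop()
        if attr in needed:
            continue
        needed.add(attr)
        stack.extend(affects_output.get(attr, []))
    return needed & dirty

print(select_for_evaluation("D"))  # -> {'C', 'D'}; clean attributes are skipped
```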
20130194280 | SYSTEM AND METHOD FOR PROVIDING AN AVATAR SERVICE IN A MOBILE ENVIRONMENT - Provided is an avatar service system and method for providing an avatar in a service provided in a mobile environment. The avatar service system may include a request receiving unit to receive a request for the avatar to perform an action, an image data selecting unit to select image data and metadata for body layers forming a body of the avatar in response to the request, and based on the selected body image data to further select image data for a plurality of item layers disposed on the body of the avatar, and an avatar action processing unit to generate action data for applying the action of the avatar based on the selected image data and metadata. | 08-01-2013 |
20130201194 | METHOD AND APPARATUS FOR PLAYING AN ANIMATION IN A MOBILE TERMINAL - A method and apparatus are provided for playing an animation in a mobile terminal. The method includes displaying content; determining an object of an animation from the content; determining whether an interaction event occurs while displaying the content; and playing an animation of the determined object, when the interaction event occurs. | 08-08-2013 |
20130207981 | APPARATUS AND METHODS FOR CURSOR ANIMATION - Methods and systems are provided for animating a cursor image. In an exemplary embodiment, image data for the cursor image maintained by a first memory is provided for display on a display device, and that image data is written to a second memory while being provided from the first memory for display. Prior to writing new image data for a portion of the cursor image to the first memory, the image data maintained by the second memory is provided for display on the display device and the new image data is written to the first memory while the image data maintained by the second memory is being provided to the display. | 08-15-2013 |
20130207982 | Method and System for Rendering an Application View on a Portable Communication Device - Disclosed are a method and a system for rendering an application view on a portable communication device. The method includes a step | 08-15-2013 |
20130222395 | Dynamic Splitting of Content - Methods and systems for dynamically splitting content are disclosed. In some embodiments, content may be received that includes one or more elements to be animated. It may be determined that a size of at least one element of the one or more elements to be animated exceeds a threshold. The at least one element having the size that exceeds the threshold may be split into a plurality of sub-elements. A transform of at least one of the sub-elements may be modified. | 08-29-2013 |
20130222396 | SYSTEM AND METHOD FOR CREATING AND DISPLAYING AN ANIMATED FLOW OF TEXT AND OTHER MEDIA FROM AN INPUT OF CONVENTIONAL TEXT - A system and method for generating and displaying text on a screen as an animated flow from a digital input of conventional text. The Invention divides text into short-scan lines of coherent semantic value that progressively animate from invisible to visible and back to invisible. Multiple line displays are frequent. The effect is aesthetically engaging, perceptually focusing, and cognitively immersive. The reader watches the text like watching a movie. The Invention may exist in whole or in part as a standalone application on a specific screen device. The Invention includes a manual authoring tool that allows the insertion of non-text media such as sound, image, and advertisements. | 08-29-2013 |
20130235043 | Systems and Methods for Creating, Displaying, and Using Hierarchical Objects with Rigid Bodies - Methods involving the creation and use of rigid bodies with hierarchical objects are disclosed. For example, certain embodiments facilitate the creation, display, and/or editing of a rigid body of a hierarchical object. One embodiment displays a hierarchical object including a rigid body and allows inverse kinematics-based movement of the hierarchical object to control movement of the rigid body. For example, the position and/or rotation of a rigid body may be determined based on a bone having its base within the rigid body. As the bone is moved, the rigid body may move accordingly, for example by preserving a relationship between the rigid body and the bone. | 09-12-2013 |
20130235044 | MULTI-PURPOSE PROGRESS BAR - In addition to conveying a completion status of a task to a user, an improved progress bar can convey additional information about the task to the user. For example, some embodiments can present a type of animation in the progress bar conveying a rate at which the task is being completed. A different type of animation can represent performance of a task at a different rate. For example, a different animation may be displayed when a web page is loading at 5 Mb/s as opposed to when the web page is loading at 0.5 Gb/s. Further, in some embodiments, different types of animation can represent different types of tasks. | 09-12-2013 |
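The rate-dependent animation choice reduces to a small selection function. The thresholds and animation style names below are illustrative assumptions; the abstract only gives 5 Mb/s versus 0.5 Gb/s as examples of rates that should look different.

```python
def pick_progress_animation(rate_bps):
    """Choose an animation style conveying how fast the task is completing.

    Thresholds are illustrative only, in bits per second.
    """
    if rate_bps >= 0.5e9:
        return "fast-sweep"
    if rate_bps >= 5e6:
        return "steady-pulse"
    return "slow-crawl"

print(pick_progress_animation(5e6))    # web page loading at 5 Mb/s
print(pick_progress_animation(0.5e9))  # web page loading at 0.5 Gb/s
```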
20130235045 | SYSTEMS AND METHODS FOR CREATING AND DISTRIBUTING MODIFIABLE ANIMATED VIDEO MESSAGES - Systems and methods in accordance with embodiments of the invention enable collaborative creation, transmission, sharing, non-linear exploration, and modification of animated video messages. One embodiment includes a video camera, a processor, a network interface, and storage containing an animated message application, and a 3D character model. In addition, the animated message application configures the processor to: capture a video sequence using the video camera; detect a human face within a sequence of video frames; track changes in human facial expression of a human face detected within a sequence of video frames; map tracked changes in human facial expression to motion data, where the motion data is generated to animate the 3D character model; apply motion data to animate the 3D character model; render an animation of the 3D character model into a file as encoded video; and transmit the encoded video to a remote device via the network interface. | 09-12-2013 |
20130249920 | IMAGE PROCESSING DEVICE - A method for processing an electronic document (ED) using a page rendering device (PRD), including: identifying, within the ED, a slide including an animation sequence of a plurality of objects; generating, based on the slide, a first frame lacking the animation sequence and including a first object of the plurality of objects; generating, based on the slide, a second frame lacking the animation sequence and including a second object of the plurality of objects; and placing, by the PRD and during a rendering of the ED, the first frame on a first page. | 09-26-2013 |
20130257875 | ELECTRONIC APPARATUS, METHOD OF CONTROLLING ELECTRONIC APPARATUS, AND COMPUTER-READABLE MEDIUM - In one embodiment, there is provided an electronic apparatus. The apparatus includes: a communicator configured to communicate with an external apparatus; an acquisition module configured to receive a first synchronization data from the external apparatus and to determine a time interval of the external apparatus, after association with the external apparatus; and a display module configured to display a first image and to vary the first image according to the time interval determined by the acquisition module. | 10-03-2013 |
20130257876 | Systems and Methods for Providing An Interactive Avatar - Systems and methods are provided for a computer-implemented method of providing an interactive avatar that reacts to a communication from a communicating party. Data from an avatar characteristic table is provided to an avatar action model, where the avatar characteristic table is a data structure stored on a computer-readable medium that includes values for a plurality of avatar personality characteristics. A communication with the avatar is received from the communicating party. A next state for the avatar is determined using the avatar action model, where the avatar action model determines the next state based on the data from the avatar characteristic table, a current state for the avatar, and the communication. The next state for the avatar is implemented, and the avatar characteristic table is updated based on the communication from the communicating party, where a subsequent state for the avatar is determined based on the updated avatar characteristic table. | 10-03-2013 |
20130257877 | Systems and Methods for Generating an Interactive Avatar Model - Systems and methods are provided for generating an avatar configured to represent traits of a human subject. First interactions of the human subject are observed, and characteristics of the human subject are extracted from the observed first interactions. An avatar characteristic table is generated or updated based on the extracted personality characteristics. Second interactions of the human subject are observed, and the avatar characteristic table is updated based on the observed second interactions. | 10-03-2013 |
20130257878 | METHOD AND APPARATUS FOR ANIMATING STATUS CHANGE OF OBJECT - An apparatus and a method for animating a status change of an object on a display. The apparatus for animating a status change of an object includes a display unit, an input unit, one or more processors, a memory, and one or more modules. The display unit displays a status change process of a first object. The input unit receives a status change instruction for the first object. One or more modules are stored in the memory and executed by the one or more processors. The module displays a second object in response to the status change instruction to animate a status change of the first object. | 10-03-2013 |
20130271472 | Display of Value Changes in Between Keyframes in an Animation Using a Timeline - In one embodiment, a method includes receiving a first input used to determine a plurality of keyframes in an interface used to create an animation of an element. Each keyframe in the plurality of keyframes is associated with a value. The method then calculates, for a set of frames in between at least two keyframes in the plurality of keyframes, a value for each frame in the set of frames. The value for each frame in the set of frames is stored in a data structure. A second input associated with a time in between the at least two keyframes is received from the interface being used to create the animation. The method then determines a frame in the set of frames based on the time, retrieves the value associated with that frame from the data structure, and renders the element using the value. | 10-17-2013 |
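A sketch of the precomputation step: every frame between two keyframes gets a value stored in a lookup structure, so scrubbing the timeline is a dictionary lookup rather than a recomputation. Linear interpolation, the (time, value) tuple format, and the 24 fps default are assumptions; the abstract does not specify the interpolation type.

```python
def precompute_frames(key_a, key_b, fps=24):
    """Interpolate a value for every frame between two keyframes.

    key_a and key_b are (time_seconds, value) pairs; the result maps a
    frame index to its interpolated value.
    """
    (t0, v0), (t1, v1) = key_a, key_b
    start, end = int(t0 * fps), int(t1 * fps)
    table = {}
    for frame in range(start, end + 1):
        u = (frame / fps - t0) / (t1 - t0)   # 0..1 along the span
        table[frame] = v0 + u * (v1 - v0)
    return table

table = precompute_frames((0.0, 0.0), (1.0, 100.0))
print(table[12])  # value at 0.5 s into the span -> 50.0
```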
20130271473 | Creation of Properties for Spans within a Timeline for an Animation - In one embodiment, a method includes receiving an input specifying a keyframe in a first layer included in a master layer to create an animation of a first element in a plurality of elements. The first layer is associated with the first element. A master duration associated with the master layer is determined where the master duration is applied to the plurality of elements. The method determines a keyframe value for the first layer based on the master duration and a property value for the keyframe value for the first layer. Software code is generated specifying the calculated keyframe value and the determined property value, the software code for use in creating the animation of the first element. | 10-17-2013 |
20130278607 | Systems and Methods for Displaying Animations on a Mobile Device - The invention provides for systems, devices, and methods for displaying animations on devices with low memory capacity or low processing power, such as a mobile device. Animation sequences can be created using scene graphs of nodes. Nodes can be embedded nodes, collection nodes, or image nodes. An embedded node can be an embedded scene graph, a collection node can be a collection of nodes that references collections of image sets, and an image node can be a reference to an image file and an affine transformation. Image sequences can be generated using affine transformations. The affine transformation matrices can then be exported to an animation data file. Inclusion of affine transformation matrices with animation data files can reduce the memory required to store multiple image files and can reduce the computational power required to display animations. The systems, devices, and methods for displaying animations can allow for a high degree of creative freedom while reducing memory and processing requirements on a client device. | 10-24-2013 |
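The memory saving comes from storing one source image plus a small matrix per frame instead of many pre-rendered frames. A minimal sketch, assuming a 2x3 affine matrix layout and helper names (affine, apply) invented for illustration:

```python
import math

def affine(scale=1.0, rotation=0.0, tx=0.0, ty=0.0):
    """2x3 affine matrix [[a, b, tx], [c, d, ty]] for one animation frame."""
    cos_r, sin_r = math.cos(rotation), math.sin(rotation)
    return [[scale * cos_r, -scale * sin_r, tx],
            [scale * sin_r,  scale * cos_r, ty]]

def apply(m, x, y):
    """Transform a point of the shared source image."""
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

# One image file plus a matrix per frame replaces a folder of rendered frames.
frames = [affine(rotation=math.radians(10 * i), tx=2.0 * i) for i in range(5)]
print([tuple(round(c, 2) for c in apply(m, 1.0, 0.0)) for m in frames])
```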
20130278608 | Plant Simulation for Graphics Engines - Plants may be visualized using an inertial animation model with the Featherstone algorithm. Different levels of detail may be used for different blocks of plants. | 10-24-2013 |
20130286025 | EXTENSIBLE SPRITE SHEET GENERATION MECHANISM FOR DECLARATIVE DATA FORMATS AND ANIMATION SEQUENCE FORMATS - A sprite sheet generation mechanism includes providing a sprite sheet generation engine host, which may be an authoring application. The host loads code that describes sprite sheet format information and a set of ordered images into the sprite sheet generation engine. The code comes from code resources, which may be plug-ins created by a user and managed by a plug-in type manager. The sprite sheet generation engine is operated using the sprite sheet format information and the set of ordered images to generate a sprite sheet. | 10-31-2013 |
20130293555 | ANIMATION VIA PIN THAT DEFINES MULTIPLE KEY FRAMES - Disclosed are embodiments for defining animation of content. One exemplary embodiment calls for receiving an indication of a location for an animation pin on a timeline associated with a content editing environment configured for editing content. The embodiment involves recording a state of the content in response to receiving the indication of the location for the animation pin, the recorded state of the content associated with a first time and comprising a value associated with a property. Additionally, the embodiment involves receiving a user input indicating an edited state of the content at a second time different from the first time, the edited state associated with the location of the animation pin on the timeline, and defining an animation based at least in part on the recorded state and the edited state of the content. | 11-07-2013 |
20130300749 | IMAGE GENERATING DEVICE, IMAGE GENERATING METHOD, AND INFORMATION STORAGE MEDIUM - Provided is an information storage medium having stored thereon a program for causing a computer to execute processing for: acquiring a tentative time interval between a frame for generating an image and a previous frame; acquiring, when a condition associated with action data indicating a posture of an object in accordance with time is satisfied, the posture of the object at a time point at which a time interval shorter than the tentative time interval has elapsed since the previous frame based on the action data; and rendering the image indicating the acquired posture of the object. | 11-14-2013 |
20130307855 | HOLOGRAPHIC STORY TELLING - A system for generating and displaying holographic visual aids associated with a story to an end user of a head-mounted display device while the end user is reading the story or perceiving the story being read aloud is described. The story may be embodied within a reading object (e.g., a book) in which words of the story may be displayed to the end user. The holographic visual aids may include a predefined character animation that is synchronized to a portion of the story corresponding with the character being animated. A reading pace of a portion of the story may be used to control the playback speed of the predefined character animation in real-time such that the character is perceived to be lip-syncing the story being read aloud. In some cases, an existing book without predetermined AR tags may be augmented with holographic visual aids. | 11-21-2013 |
20130307856 | SYNCHRONIZING VIRTUAL ACTOR'S PERFORMANCES TO A SPEAKER'S VOICE - A system for generating and displaying holographic visual aids associated with a story to an end user of a head-mounted display device while the end user is reading the story or perceiving the story being read aloud is described. The story may be embodied within a reading object (e.g., a book) in which words of the story may be displayed to the end user. The holographic visual aids may include a predefined character animation that is synchronized to a portion of the story corresponding with the character being animated. A reading pace of a portion of the story may be used to control the playback speed of the predefined character animation in real-time such that the character is perceived to be lip-syncing the story being read aloud. In some cases, an existing book without predetermined AR tags may be augmented with holographic visual aids. | 11-21-2013 |
20130321430 | Systems and Methods for Providing and Using Animations - Certain embodiments relate to combining or blending animations that are attempting to simultaneously animate the same target. Certain embodiments simplify the blending of animations in the application development environment. For example, certain embodiments allow animations to be used or specified by a developer without the developer having to specifically address the potential for time-overlapping animations. As a few specific examples, an application may specify animations by simply calling a function to change a property of a target or by sending a command to change a public property of the target. Certain embodiments provide a blender that intercepts such function calls and commands. If two animations require a change to the same target at the same time, the blender determines an appropriate blended result and sends an appropriate function call or command to the target. The function calls and commands need not be aware of the blender. | 12-05-2013 |
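A sketch of the interception idea: animations request property changes through a blender instead of touching the target directly, and overlapping requests are resolved before being applied. The Blender and Sprite classes are hypothetical, and averaging is just one possible blend policy; the abstract leaves the blend function open.

```python
class Blender:
    """Intercepts property changes and blends time-overlapping animations."""

    def __init__(self, target):
        self.target = target
        self.pending = {}  # property name -> values requested this tick

    def request(self, prop, value):
        # Called in place of a direct function call/command on the target.
        self.pending.setdefault(prop, []).append(value)

    def flush(self):
        # Resolve overlaps; averaging is one simple blend policy.
        for prop, values in self.pending.items():
            setattr(self.target, prop, sum(values) / len(values))
        self.pending.clear()

class Sprite:
    x = 0.0

s = Sprite()
b = Blender(s)
b.request("x", 100.0)  # animation A
b.request("x", 50.0)   # animation B, same property, same tick
b.flush()
print(s.x)  # -> 75.0; neither animation needed to know about the other
```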
20130321431 | METHOD, SYSTEM AND APPARATUS FOR PROVIDING A THREE-DIMENSIONAL TRANSITION ANIMATION FOR A MAP VIEW CHANGE - Methods, systems and apparatus are described to provide a three-dimensional transition for a map view change. Various embodiments may display a map view. Embodiments may obtain input selecting another map view for display. Input may be obtained through the utilization of touch, auditory, or other well-known input technologies. In response to the input selecting a map view, embodiments may then display a transition animation that illustrates moving from the displayed map view to the selected map view in virtual space. Embodiments may then display the selected map view. | 12-05-2013 |
20130328887 | METHODS AND SYSTEMS FOR HOSTING A PORTION OF A USER INTERFACE AND SYNCHRONIZING ANIMATION BETWEEN PROCESSES - Methods and systems for hosting a portion of a user interface and synchronizing animations between processes are described herein. In one embodiment, a method includes receiving with a first service at least one request for animation from a first process, transferring the at least one request for animation from the first service to a second service associated with a second process, and synchronizing the animation in the multiple views of the multiple processes. | 12-12-2013 |
20130328888 | SYSTEMS AND METHODS FOR ANIMATING BETWEEN COLLECTION VIEWS - Techniques, systems, and methods for allowing a user to select amongst different collection views and to animate the transition from one collection view to another. To select a different collection view, the user may provide a certain gesture on the display screen which causes the items displayed in the current collection view to transition in an animated fashion to a new collection view selected by the particular gesture. The particular type of animation between different collection views depends upon the manner in which the items in each collection view are defined in their respective layouts, in a manner that facilitates a relatively seamless transition of the items from one layout to another. | 12-12-2013 |
20130335425 | Systems and Methods for Combining Animations - Systems and methods are disclosed to facilitate a user's ability to create animations. Improved software applications, for example, can facilitate a user's ability to create complex animations by allowing the user to separately declare multiple simple animations, such as individual from/to animations, and then combine these individual animations into a complex animation, such as a multi-part, keyframe-based animation. Multiple simple animations can be combined into a single multi-part animation that defines time/value pairs (e.g., keyframes) specifying the values of one or more properties at certain times. Defining animations in terms of several time/value pairs such as keyframes can facilitate the creation of a single animation that describes a motion path an object will take through several intermediate values between its begin and end points, including multiple changing properties. | 12-19-2013 |
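The merge step can be sketched directly: each from/to animation contributes its endpoints as time/value pairs, and duplicates at shared boundaries collapse into single keyframes. The (start_time, end_time, from_value, to_value) tuple format and the combine helper are assumptions for illustration.

```python
def combine(from_to_animations):
    """Merge simple from/to animations into one keyframe (time, value) track.

    Each input is (start_time, end_time, from_value, to_value); the output
    is a sorted list of time/value pairs describing the whole motion path.
    """
    keyframes = {}
    for t0, t1, v0, v1 in sorted(from_to_animations):
        keyframes[t0] = v0
        keyframes[t1] = v1
    return sorted(keyframes.items())

# Two chained from/to animations become one three-keyframe animation.
print(combine([(0.0, 1.0, 0.0, 10.0), (1.0, 2.0, 10.0, 4.0)]))
# -> [(0.0, 0.0), (1.0, 10.0), (2.0, 4.0)]
```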
20130335426 | TEMPORAL NOISE CONTROL FOR SKETCHY ANIMATION - Techniques are presented for controlling the amount of temporal noise in certain animation sequences. Sketchy animation sequences are received in an input in a digital form and used to create an altered version of the same animation with temporal coherence enforced down to the stroke level, resulting in a reduction of the perceived noise. The amount of reduction is variable and can be controlled via a single parameter to achieve a desired artistic effect. | 12-19-2013 |
20130335427 | System and Method for Generating Dynamic Display Ad - In this disclosure, a method and system executed on a communication device are disclosed. The method and system are configured to: request a dynamic display ad from a dynamic display ad system; receive an animation sequence from the dynamic display ad system, the animation sequence comprising one or more instructions to show one or more interest areas on a selected digital flyer; and display the dynamic display ad using the animation sequence, wherein the animation sequence is used to render the dynamic display ad using flyer content of the selected digital flyer. | 12-19-2013 |
20130342544 | INDICATING THE PROGRESS OF A BOOT SEQUENCE ON A COMMUNICATION DEVICE - Disclosed is a method of indicating the progress of a boot sequence on a communication device, the method comprising initiating the boot sequence; receiving input at the communication device during the boot sequence; and, in response to receiving input, outputting a progress indicator indicating the progress of the boot sequence. | 12-26-2013 |
20140002462 | METHOD AND MOBILE TERMINAL FOR DYNAMIC DISPLAY OF AN EMOTION | 01-02-2014 |
20140002463 | SKIN AND FLESH SIMULATION USING FINITE ELEMENTS, BIPHASIC MATERIALS, AND REST STATE RETARGETING | 01-02-2014 |
20140009475 | ANIMATION IN THREADED CONVERSATIONS - A method for augmenting a threaded conversation between a first device and a second device. The method includes: receiving a selection of a selectable animation, via a selection of a selectable animation representation, at the first device, wherein the selectable animation is configured for augmenting the threaded conversation; and incorporating the selection of the selectable animation into the threaded conversation such that the selectable animation appears in a conversation view of the first device and the second device. | 01-09-2014 |
20140015839 | DEVICE AND METHOD FOR EXPRESSING STATUS OF TERMINAL USING CHARACTER - A status expression system, and a method of operating the same, are provided for presenting a state of the mobile phone by means of a character agent. A status expression system of the present invention includes a memory for storing a plurality of character quotients, information on at least one state transition model, and resources for presenting the character; a character controller for updating the character quotients according to events occurring in the mobile phone, determining a state by analyzing the character quotients and referring to the state transition model, and formatting the state using the resources assigned for the state of the character; and a display for presenting the character with the resources in the state. | 01-16-2014 |
20140022261 | SYSTEM AND METHOD TO ACHIEVE BETTER EYELINES IN CG CHARACTERS - Systems and methods are provided to create better-looking animated eyes for CG characters. The systems and methods set the rigging of each eye to, rather than precisely converge on a target location, converge but be rotationally or angularly offset by a certain amount to simulate correct physical eye positioning and movements. In addition, the systems and methods provide even more realistic eye appearance by taking account of the refractive properties of the cornea, e.g., which can make the pupil appear larger than it actually is. The systems and methods may further take account of a shadowing effect of the upper eye caused by the brow, eyelashes, and upper lid (as well as an effect caused by reflection from the underside of the eyelashes). This darkening of the upper portion of the eye addresses vertical eyeline discrepancies caused by the visual and optical illusion of incorrect lighting. | 01-23-2014 |
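The offset-convergence idea reduces to aiming each eye at the target and then backing it off by a small outward angle. A minimal sketch in two dimensions (yaw only), where the eye_yaw helper, the (x, z) coordinate convention, and the 1.5 degree offset are all illustrative assumptions; corneal refraction and upper-eye shadowing from the abstract are not modeled here.

```python
import math

def eye_yaw(eye_pos, target, offset_deg):
    """Yaw angle aiming an eye at a target, then backed off by a small
    outward offset so the eyes do not converge perfectly (perfect
    convergence tends to read as cross-eyed on rendered characters)."""
    dx, dz = target[0] - eye_pos[0], target[1] - eye_pos[1]
    return math.degrees(math.atan2(dx, dz)) + offset_deg

target = (0.0, 100.0)                       # point the character looks at
left = eye_yaw((-3.0, 0.0), target, -1.5)   # rotate slightly outward
right = eye_yaw((3.0, 0.0), target, +1.5)
print(round(left, 2), round(right, 2))      # near-parallel, not converged
```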
20140028685 | GENERATING CUSTOMIZED EFFECTS FOR IMAGE PRESENTATION - A method to receive an image to be displayed within an animation sequence. The method also includes calculating position data identifying a position of the image within a display area, the position data calculated using a physics property attributed to the image. Further, the method includes the transmission of the position data for use in generating the animation sequence. Additionally, a method is provided that includes making a request for an animation sequence that includes an image and position data for the image, the position data identifying a plurality of positions relative to a display area and calculated through applying a physics property to the image. This method further includes receiving the animation sequence for display in a display area. The method additionally includes displaying the image in the display area based upon the position data. | 01-30-2014 |
20140035929 | CONTENT RETARGETING USING FACIAL LAYERS - Techniques are disclosed for retargeting facial expressions. Input is received that represents a facial expression of a first character. Facial layers are generated based on the received input. The facial layers include one or more parameters extracted from the received input. A facial expression for a second character and corresponding to the facial expression of the first character is generated, based on the facial layers and without defining any spatial correspondence between the first character and the second character. | 02-06-2014 |
20140035930 | CHARACTER DISPLAY DEVICE - When a predetermined condition is satisfied in a game and a scenario mode is started, a predetermined scenario is displayed. When an animation in the scenario is completely displayed, a transmission waiting state is started and idling of a character displayed on a display screen is displayed. The idling display is performed based on the pose of the character in the transmission waiting state. The idling display is performed by deciding the angle of a target pose based on difference information of a joint management table. | 02-06-2014 |
20140035931 | TEMPORAL DEPENDENCIES IN DEPENDENCY GRAPHS - Systems and processes are described below relating to evaluating a dependency graph having one or more temporally dependent variables. The temporally dependent variables may include variables that may be used to evaluate the dependency graph at a frame other than that at which the temporally dependent variable was evaluated. One example process may include tracking the temporal dirty state for each temporally dependent variable using a temporal dependency list. This list may be used to determine which frames, if any, should be reevaluated when a request to evaluate a dependency graph for a particular frame is received. This advantageously reduces the amount of time and computing resources needed to reevaluate a dependency graph. | 02-06-2014 |
20140035932 | SYSTEM AND METHOD FOR CONTROLLING ANIMATION BY TAGGING OBJECTS WITHIN A GAME ENVIRONMENT - A game developer can “tag” an item in the game environment. When an animated character walks near the “tagged” item, the animation engine can cause the character's head to turn toward the item, and mathematically computes what needs to be done in order to make the action look real and normal. The tag can also be modified to elicit an emotional response from the character. For example, a tagged enemy can cause fear, while a tagged inanimate object may cause only indifference or mild interest. | 02-06-2014 |
20140043340 | Animation Transitions and Effects in a Spreadsheet Application - Concepts and technologies are described herein for animation transitions and effects in a spreadsheet application. In accordance with the concepts and technologies disclosed herein, a computer system can execute a visualization component. The computer system can detect selection of a scene included in a visualization of spreadsheet data. The computer system also can generate an effect for the scene selected. In some embodiments, the computer system identifies another scene and generates a transition between the scenes. The computer system can output the effect animation and the transition animation. | 02-13-2014 |
20140049547 | Methods and Systems for Representing Complex Animation using Style Capabilities of Rendering Applications - A computerized device implements an animation coding engine to analyze timeline data defining an animation sequence and generate a code package representing the animation sequence as a set of visual assets and animation primitives supported by a rendering application, each visual asset associated with a corresponding animation primitive. The code package is generated to include suitable code that, when processed by the rendering application, causes the rendering application to invoke the corresponding animation primitive for each visual asset to provide the animation sequence. For example, the rendering application can comprise a browser that renders the visual assets. The code package can comprise a markup document including or referencing a cascading style sheet defining the corresponding animation primitives as styles to be applied to the visual assets when rendered by the browser. | 02-20-2014 |
20140055463 | SYSTEM AND METHOD FOR GENERATING COMPUTER RENDERED CLOTH - A system, method and computer software program on a computer readable medium for loading cloth modeling data, generating an environmental model, generating a basic cloth model, and generating sections of a cloth surface model based on the basic cloth model and the cloth modeling data. The sections of the cloth surface model may be partial geometric forms, a portion of a ball and stick model or a non-uniform rational basis spline, and may be joined together and smoothed to form a complex cloth model. The smoothed cloth model may include a series of waves or folds in a computer rendered cloth surface to represent draped or compressed cloth on a three dimensional surface. | 02-27-2014 |
20140063021 | ANIMATIONS - At least certain embodiments of the present disclosure include a method for animating a display region, windows, or views displayed on a display of a device. The method includes starting at least two animations. The method further includes determining the progress of each animation. The method further includes completing each animation based on a single timer. | 03-06-2014 |
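The single-timer idea can be sketched as one clock read per tick driving the progress of every running animation. The Animation class and its advance method are hypothetical names; the sub-second durations just keep the example quick to run.

```python
import time

class Animation:
    def __init__(self, name, duration):
        self.name, self.duration, self.progress = name, duration, 0.0

    def advance(self, now, start):
        """Update progress from the shared clock; True when complete."""
        self.progress = min(1.0, (now - start) / self.duration)
        return self.progress >= 1.0

# One timer (one clock read per tick) drives every running animation.
start = time.monotonic()
running = [Animation("fade", 0.05), Animation("slide", 0.1)]
while running:
    now = time.monotonic()
    running = [a for a in running if not a.advance(now, start)]
    time.sleep(0.01)
print("all animations completed on a single timer")
```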
20140071137 | IMAGE ENHANCEMENT APPARATUS - A method comprising: analysing at least two images to determine at least one object mutual to the at least two images, the object having a periodicity of motion; generating an animated image based on the at least two images, wherein the at least one object is animated; determining at least one audio signal associated with the at least one object; and combining the at least one audio signal with the animated image to generate an audio enabled animated image. | 03-13-2014 |
20140078153 | Embedding Animation in Electronic Mail, Text Messages and Websites - Provided are techniques for providing animation in electronic communications. An image is generated by capturing multiple photographs from a camera or video camera. The first photograph is called the “naked photo.” Using a graphics program, photos subsequent to the naked photo are edited to cut an element common to the subsequent photos. The cut images are pasted into the naked photo as layers. The modified naked photo, including the layers, is stored as a web-enabled graphics file, which is then transmitted in conjunction with electronic communication. When the electronic communication is received, the naked photo is displayed and each of the layers is displayed and removed in the order that each was taken with a short delay between photos. In this manner, a movie is generated with much smaller files than is currently possible. | 03-20-2014 |
20140085312 | SEAMLESS FRACTURE IN A PRODUCTION PIPELINE - Systems and processes for rendering fractures in an object are provided. In one example, a surface representation of an object may be converted into a volumetric representation of the object. The volumetric representation of the object may be divided into volumetric representations of two or more fragments. The volumetric representations of the two or more fragments may be converted into surface representations of the two or more fragments. Additional information associated with attributes of adjacent fragments may be used to convert the volumetric representations of the two or more fragments into surface representations of the two or more fragments. The surface representations of the two or more fragments may be displayed. | 03-27-2014 |
20140085313 | SYSTEMS AND METHODS FOR ANIMATING NON-HUMANOID CHARACTERS WITH HUMAN MOTION DATA - Systems, methods and products for animating non-humanoid characters with human motion are described. One aspect includes selecting key poses included in initial motion data at a computing system; obtaining non-humanoid character key poses which provide a one to one correspondence to selected key poses in said initial motion data; and statically mapping poses of said initial motion data to non-humanoid character poses using a model built based on said one to one correspondence from said key poses of said initial motion data to said non-humanoid character key poses. Other embodiments are described. | 03-27-2014 |
20140085314 | METHOD FOR TRANSMITTING DIGITAL SCENE DESCRIPTION DATA AND TRANSMITTER AND RECEIVER SCENE PROCESSING DEVICE - A method for transmitting digital scene description data from a transmitter scene processing device to at least one receiver scene processing device is disclosed. The method comprises the steps of encoding scene description data and rendering commands in the transmitter scene processing device by setting priorities for the scene description data and related rendering commands and dynamically reordering them according to those priorities, in order to reduce the bandwidth required for transmission and/or to adapt to unreliable bandwidth; and transmitting the encoded scene description data and related rendering commands to the at least one receiver scene processing device, which decodes and executes the rendering commands in relation to the transmitted scene description data to achieve animated digital graphics. | 03-27-2014 |
20140092099 | TRANSITIONING PERIPHERAL NOTIFICATIONS TO PRESENTATION OF INFORMATION - Methods, apparatuses, and computer program products are herein provided for transitioning from presentation of a peripheral notification on a first display to presentation of information on a second display. A method may include causing presentation of a peripheral notification on a first display. The method may further include receiving an indication of a selection associated with the peripheral notification. The method may further include causing presentation of a transition animation in response to receiving the indication. The transition animation indicates a transition from the presentation of the peripheral notification to presentation of information associated with the peripheral notification. The method may further include causing presentation of the information on a second display. Corresponding apparatuses and computer program products are also provided. | 04-03-2014 |
20140092100 | Dial Menu - A method for providing a user interface includes displaying a first screen with a dial menu. The dial menu is shown as an arch divided into sections that hold first menu options. In response to detecting a gesture to rotate the dial menu, the method includes displaying an animated rotation of the dial menu where at least one first menu option is rotated off the first screen and, when the number of the first menu options exceeds the number of the sections, at least another first menu option is rotated into the first screen. In response to detecting a selection of a first menu option, the method includes displaying a second screen having the dial menu now with the sections holding second menu options different from the first menu options. | 04-03-2014 |
20140092101 | APPARATUS AND METHOD FOR PRODUCING ANIMATED EMOTICON - An apparatus and method for producing an animated emoticon are provided. The method includes producing a plurality of frames that constitute the animated emoticon; inputting at least one object for each of the plurality of frames; producing object information for the input object; and producing structured animated emoticon data that include each of the plurality of frames and the object information. | 04-03-2014 |
20140098106 | APPLICATION PROGRAMMING INTERFACES FOR SYNCHRONIZATION - The application programming interface operates in an environment with user interface software interacting with multiple software applications or processes in order to synchronize animations associated with multiple views or windows of a display of a device. The method for synchronizing the animations includes setting attributes of views independently with each view being associated with a process. The method further includes transferring a synchronization call to synchronize animations for the multiple views of the display. In one embodiment, the synchronization call includes the identification and the number of processes that are requesting animation. The method further includes transferring a synchronization confirmation message when a synchronization flag is enabled. The method further includes updating the attributes of the views from a first state to a second state independently. The method further includes transferring a start animation call to draw the requested animations when both processes have updated attributes. | 04-10-2014 |
20140098107 | Transitioning Between Top-Down Maps and Local Navigation of Reconstructed 3-D Scenes - Technologies are described herein for transitioning between a top-down map display of a reconstructed structure within a 3-D scene and an associated local-navigation display. An application transitions between the top-down map display and the local-navigation display by animating a view in a display window over a period of time while interpolating camera parameters from values representing a starting camera view to values representing an ending camera view. In one embodiment, the starting camera view is the top-down map display view and the ending camera view is the camera view associated with a target photograph. In another embodiment, the starting camera view is the camera view associated with a currently-viewed photograph in the local-navigation display and the ending camera view is the top-down map display. | 04-10-2014 |
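The transition described above is an interpolation of camera parameters between a starting view and an ending view over the animation's frames. A minimal sketch, where the parameter set (x, y, z, pitch), the linear lerp, and the transition generator are assumptions; a production system would likely ease and handle angle wrap-around.

```python
def lerp(a, b, u):
    return a + (b - a) * u

def transition(cam_start, cam_end, steps):
    """Interpolate each camera parameter from its starting value to its
    ending value, yielding one camera per animation frame."""
    for i in range(steps + 1):
        u = i / steps
        yield {k: lerp(cam_start[k], cam_end[k], u) for k in cam_start}

top_down = {"x": 0.0, "y": 500.0, "z": 0.0, "pitch": -90.0}
photo = {"x": 12.0, "y": 1.7, "z": -4.0, "pitch": -5.0}
for cam in transition(top_down, photo, steps=4):
    print({k: round(v, 1) for k, v in cam.items()})
```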
20140104282 | SYSTEM AND METHOD FOR ANIMATING A BODY - Systems and methods are disclosed for applying a controllable and predictable muscle oscillation to a portion of a body such as a character in a predetermined and intuitive way. Positions of separate muscle locations are analyzed throughout a timeline. A third derivative with respect to time of the positions of the separate muscles is calculated to measure a change of acceleration for a particular point of interest. The change in the acceleration gives positive or negative changes in the applied forces. The direction and magnitude of the applied forces is passed onto a part of a solution that creates a procedural oscillation based on the magnitude and the direction of the applied forces in the muscle space. | 04-17-2014 |
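The third time derivative of position (jerk) can be estimated from sampled muscle positions with a standard finite difference; its sign gives the direction of the change in applied force and its magnitude the size, which the abstract uses to drive a procedural oscillation. The uniform sampling interval and the third_derivative helper name are assumptions.

```python
def third_derivative(positions, dt):
    """Finite-difference third time derivative (jerk) of sampled positions.

    Uses the standard backward difference (p3 - 3*p2 + 3*p1 - p0) / dt^3
    over each window of four consecutive samples.
    """
    jerks = []
    for i in range(3, len(positions)):
        p0, p1, p2, p3 = positions[i - 3:i + 1]
        jerks.append((p3 - 3 * p2 + 3 * p1 - p0) / dt ** 3)
    return jerks

# Positions of one muscle point of interest sampled along the timeline.
samples = [0.0, 0.1, 0.4, 1.0, 1.9, 2.8]
print([round(j, 2) for j in third_derivative(samples, dt=1.0)])
```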
20140111523 | VARIABLE LENGTH ANIMATIONS BASED ON USER INPUTS - A computer-implemented method for transition animation is disclosed according to one aspect of the subject technology. The method comprises determining a user's level of experience using an application, determining a duration of a transition animation based on the user's determined level of experience, and playing the transition animation for the determined duration. | 04-24-2014 |
20140111524 | METHOD AND TERMINAL FOR DISPLAYING AN ANIMATION - A method for a terminal to display an animation, including: generating one or more supplementary image frames on a moving path between first and second adjacent original image frames of an animation; and displaying the animation with the generated one or more supplementary image frames at a predetermined frame rate. | 04-24-2014 |
20140118357 | Virtual Reality Display System - A method and apparatus for displaying a virtual environment. First eye position information for a first eye in a head of a person and second eye position information for a second eye in the head of the person is received. A first image of the virtual environment for the first eye is generated based on the first eye position information. A second image of the virtual environment for the second eye is generated based on the second eye position information for the second eye. The first image and the second image for display are sent to the person. | 05-01-2014 |
20140118358 | COMPUTER SYSTEM AND ASSEMBLY ANIMATION GENERATION METHOD - Provided is a technology for automatically generating a camera pose enabling the viewing of an operation of an object component in a work instruction animation. A primary inertia axis of an assembled item is calculated from inertia tensor information of a plurality of components constituting the assembled item. Adjacency relationship information indicating an adjacency relationship between the plurality of components is acquired. Based on the adjacency relationship information of the plurality of components, an assembly sequence and an assembly motion vector indicating an assembled direction of the plurality of components are generated such that each of the plurality of components does not interfere with a proximate component. Further, a plurality of camera eye sights is arranged, each having a camera axis about the primary inertia axis and providing a candidate operator's view during generation of the assembly animation. | 05-01-2014 |
20140118359 | Control for Digital Lighting - A digitally controlled lighting system where aspects have a central media server connected to remote media servers. The connection may have separate networks for control versus media. Automatic synchronization of the contents of the media servers may be carried out. | 05-01-2014 |
20140125678 | Virtual Companion - The virtual companion described herein is able to respond realistically to tactile input and, through the use of a plurality of live human staff, is able to converse with true intelligence with whomever it interacts. The exemplary application is to keep older adults company and improve mental health through companionship. | 05-08-2014 |
20140139531 | INTERACTIVE SCOREKEEPING AND ANIMATION GENERATION - Techniques for interactive scorekeeping and animation generation are described, including evaluating a play to form an event datum associated with execution of the play, using the event datum to form an event packet, generating an animation using the event packet, the animation being associated with the execution of the play, and presenting the animation on an endpoint. | 05-22-2014 |
20140152672 | Rotatable Animation Devices with Staggered Illumination Sources - An illuminated animation device with staggered sources of illumination, with a rotatable member rotatable about an axis of rotation and first and second pluralities of sources of illumination retained to rotate with the rotatable member that are actuatable between illuminated and non-illuminated conditions. The first and second pluralities of sources of illumination are staggered so that the sources of illumination will produce individual paths of illumination to permit enhanced image display. The rotatable member can be a rotatable panel with first and second arrays retained relative to first and second halves thereof, and the sources of illumination can be longitudinally and laterally staggered, such as by one-half a distance between adjacent sources of illumination. The sources of illumination can alternatively be disposed in opposed, radially spaced straight-line arrays. The device can be handheld and can include a motor and a power source. | 06-05-2014 |
20140152673 | Patient Monitor for Generating Real-Time Relational Animations of Human Organs in Response to Physiologic Signals - A medical alarm system for processing medical time-series data of multiple physiologic signals in hospitals and other environments is disclosed. The alarm system generates physiologic animations in real time. The physiologic animations are shaped as a schematic of the physiologic system being monitored. The physiologic system comprises multiple components corresponding to organs. The physiologic animation and organs move over time in response to the physiologic signals. | 06-05-2014 |
20140160134 | Visualization of a natural language text - This invention relates to visualization of a natural language text, namely, to conversion of such text into a corresponding image, animation, or three-dimensional scene. The proposed solution provides a tool set for visualizing such text and automatically obtaining an animated three-dimensional scene. The invention contemplates a method of text visualization comprising the steps of obtaining a natural language text, conducting an automatic semantic breakdown (parsing) of the text to obtain a structured semantic net, creating a three-dimensional scene on the basis of the semantic breakdown results, creating a video clip or a set of pictures using the obtained three-dimensional scenes, and visualizing the obtained video clip or set of images. The invention provides for simultaneous production of video clips by several users according to different scenarios, manual editing of the three-dimensional scene and the semantic breakdown results, replenishing a library with the user's own content, etc. | 06-12-2014 |
20140176565 | COMPUTER IMPLEMENTED METHODS AND SYSTEMS FOR GENERATING VIRTUAL BODY MODELS FOR GARMENT FIT VISUALISATION - Methods for generating and sharing a virtual body model of a person, created from a small number of measurements and a single photograph, combined with one or more images of garments. The model is a realistic representation of the user's body and is used for photo-realistic fit visualization of garments, hairstyles, and the like. The virtual garments are created from layers based on photographs of real garments taken from multiple angles. The virtual body model is used in multiple embodiments of manual and automatic garment, make-up, and hairstyle recommendations, such as from channels, friends, and fashion entities; it is sharable for visualization of, and comments on, looks; it enables users to buy garments that fit other users, suitable for gifts or the like; and it can be used in peer-to-peer online sales where garments can be bought with the knowledge that the seller has a similar body shape and size to the user. | 06-26-2014 |
20140176566 | APPARATUS FOR SIMULTANEOUSLY STORING AREA SELECTED IN IMAGE AND APPARATUS FOR CREATING AN IMAGE FILE BY AUTOMATICALLY RECORDING IMAGE INFORMATION - An apparatus to collectively store areas selected in an image includes an image-editing unit to load a standard image file, to display a standard image based on the standard image file, and to enable a user to edit the standard image, a zooming unit to zoom into and away from a position where a marker of an input unit is indicating on the standard image, and a selected-image-managing unit to collectively store one or more areas selected by the input unit as one or more corresponding image files. | 06-26-2014 |
20140176567 | PRIORITIZED RENDERING OF OBJECTS IN A VIRTUAL UNIVERSE - Approaches for prioritized rendering of objects in a virtual universe are provided. In one embodiment, there is a prioritization tool containing a plurality of components configured to: determine a priority of each of a set of objects in a commercial area of the virtual universe, the commercial area having a plurality of virtual retail stores; assign a priority to each of the plurality of virtual stores in the commercial area based on the priority of each of the set of objects in the virtual universe; and download and cache the objects from virtual stores that are outside a rendering radius of the avatar, based on the relative priorities of the virtual stores. | 06-26-2014 |
20140192059 | CONTROLLING ANIMATED CHARACTER EXPRESSION - A system includes a computer system capable of representing one or more animated characters. The computer system includes a blendshape manager that combines multiple blendshapes to produce the animated character. The computer system also includes an expression manager to respectively adjust one or more control parameters associated with each of the plurality of blendshapes for adjusting an expression of the animated character. The computer system also includes a corrective element manager that applies one or more corrective elements to the combined blendshapes based upon at least one of the control parameters. The one or more applied corrective elements are adjustable based upon one or more of the control parameters absent the introduction of one or more additional control parameters. | 07-10-2014 |
20140198106 | Rig-Based Physics Simulation - A method is disclosed for applying physics-based simulation to an animator-provided rig. The disclosure presents equations of motion for simulations performed in the subspace of deformations defined by an animator's rig. The method receives an input rig with a plurality of deformation parameters, and the dynamics of the character are simulated in the subspace of deformations described by the character's rig. An artist's control of the simulation can be enhanced by providing a method that transforms stiffness values defined on rig parameters to a non-homogeneous distribution of material parameters for the underlying rig. | 07-17-2014 |
20140198107 | FAST RIG-BASED PHYSICS SIMULATION - A method is disclosed for applying physics-based simulation to an animator-provided rig. The disclosure presents equations of motion for simulations performed in the subspace of deformations defined by an animator's rig. The method receives an input rig with a plurality of deformation parameters, and the dynamics of the character are simulated in the subspace of deformations described by the character's rig. An artist's control of the simulation can be enhanced by providing a method that transforms stiffness values defined on rig parameters to a non-homogeneous distribution of material parameters for the underlying rig. | 07-17-2014 |
20140210830 | COMPUTER GENERATED HEAD - A method of animating a computer generation of a head, the head having a mouth which moves in accordance with speech to be output by the head, | 07-31-2014 |
20140218370 | METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR GENERATION OF ANIMATED IMAGE ASSOCIATED WITH MULTIMEDIA CONTENT - In accordance with an example embodiment a method, apparatus and computer program product are provided. The method comprises facilitating selection of at least one object from a plurality of objects in a multimedia content. The method also comprises accessing an object mobility content associated with the at least one object. The object mobility content is indicative of motion of the plurality of objects in the multimedia content. An animated image associated with the multimedia content is generated based on the selection of the at least one object and the object mobility content associated with the at least one object. | 08-07-2014 |
20140218371 | FACIAL MOVEMENT BASED AVATAR ANIMATION - Avatars are animated using predetermined avatar images that are selected based on facial features of a user extracted from video of the user. A user's facial features are tracked in a live video, facial feature parameters are determined from the tracked features, and avatar images are selected based on the facial feature parameters. The selected images are then displayed or sent to another device for display. Selecting and displaying different avatar images as a user's facial movements change animates the avatar. An avatar image can be selected from a series of avatar images representing a particular facial movement, such as blinking. An avatar image can also be generated from multiple avatar feature images selected from multiple avatar feature image series associated with different regions of a user's face (eyes, mouth, nose, eyebrows), which allows different regions of the avatar to be animated independently. | 08-07-2014 |
20140218372 | INTELLIGENT DIGITAL ASSISTANT IN A DESKTOP ENVIRONMENT - Methods and systems related to interfaces for interacting with a digital assistant in a desktop environment are disclosed. In some embodiments, a digital assistant is invoked on a user device by a gesture following a predetermined motion pattern on a touch-sensitive surface of the user device. In some embodiments, a user device selectively invokes a dictation mode or a command mode to process a speech input depending on whether an input focus of the user device is within a text input area displayed on the user device. In some embodiments, a digital assistant performs various operations in response to one or more objects being dragged and dropped onto an iconic representation of the digital assistant displayed on a graphical user interface. In some embodiments, a digital assistant is invoked to cooperate with the user to complete a task that the user has already started on a user device. | 08-07-2014 |
20140218373 | SCRIPT CONTROL FOR CAMERA POSITIONING IN A SCENE GENERATED BY A COMPUTER RENDERING ENGINE - A system for controlling a rendering engine by using specialized commands. The commands are used to generate a production, such as a television show, at an end-user's computer that executes the rendering engine. In one embodiment, the commands are sent over a network, such as the Internet, to achieve broadcasts of video programs at very high compression and efficiency. Commands for setting and moving camera viewpoints, animating characters, and defining or controlling scenes and sounds are described. At a fine level of control, math models and coordinate systems can be used to make specifications. At a coarse level of control, the command language approaches the text format traditionally used in television or movie scripts. Simple names for objects within a scene are used to identify items, directions and paths. Commands are further simplified by having the rendering engine use defaults when specifications are left out. For example, when a camera direction is not specified, the system assumes that the viewpoint is to be the current action area. The system provides a hierarchy of detail levels. Movement commands can be defaulted or specified. Synchronized speech can be specified as digital audio or as text which is used to synthesize the speech. | 08-07-2014 |
20140240324 | TRAINING SYSTEM AND METHODS FOR DYNAMICALLY INJECTING EXPRESSION INFORMATION INTO AN ANIMATED FACIAL MESH - A system and method for modifying facial animations to include expression and microexpression information is disclosed. Particularly, a system and method for applying actor-generated expression data to a facial animation, either in real time or in storage, is disclosed. Present embodiments may also be incorporated into a larger training program designed to train users to recognize various expressions and microexpressions. | 08-28-2014 |
20140253560 | Editing Animated Objects in Video - In one aspect, in general, a method includes receiving, in a user interface of a video editing application executing on a computer system, an indication from a user of the video editing application to edit an animated object associated with a video clip displayed in the user interface, receiving, by the video editing application executing on the computer system, data specifying an editing location of the animated object, and determining, by the video editing application executing on the computer system, a frame of the video clip associated with the editing location of the animated object, the determination based on the data specifying the editing location of the animated object. Other aspects may include corresponding systems, apparatus, and computer program products. | 09-11-2014 |
20140253561 | METHOD FOR PROCESSING A COMPUTER-ANIMATED SCENE AND CORRESPONDING DEVICE - The invention is related to a method for processing a computer-animated scene, the computer-animated scene being represented with at least an animation graph, at least an animation graph comprising a plurality of nodes connected by paths, the paths being representative of dependencies between the nodes, at least an event being associated with each node, a first information representative of the type of each event being associated with each node. In order to optimize the parallelization of the nodes, the method comprises a step of classifying the nodes in at least a first batch and at least a second batch according to the first information associated with each node, at least a first batch comprising nodes to be evaluated in parallel and at least a second batch comprising nodes to be evaluated sequentially. | 09-11-2014 |
20140267303 | ANIMATION - An animation system may allow users to create animations by accessing a matrix of possible animation images, in which images may be arranged to be adjacent to one another based on similarity, and allowing a user to select one or more images. The system may then determine an animation path through the matrix or group of images. A display system can retrieve and display images along the path to generate an animation. | 09-18-2014 |
20140267304 | METHODS, APPARATUS AND SYSTEM FOR ANALYTICS REPLAY UTILIZING RANDOM SAMPLING - Methods, systems, and computer program products for visually representing and displaying data are described. The visual representation may be a data animation. A data query may be submitted, a time measurement for processing the query may be obtained, and a sample size of the query may be adjusted based on the time measurement and a frame refresh rate of a data animation. A data animation may be generated based on one or more results of the query. | 09-18-2014 |
20140267305 | METHOD AND SYSTEM FOR PRESENTING EDUCATIONAL MATERIAL - Method and system for presenting a lesson plan having a plurality of keyframes is provided. The method includes initializing a fixed variable for a first keyframe; detecting an input from a user at an interactive display device; correlating the input with a gesture from a gesture set and one or more database domains from among a plurality of database domains; displaying an image at the interactive display device when all mutable variables for the first keyframe content are determined based on the input; manipulating the image using a gesture associated with the displayed image and any associated database domains; and transitioning to a next keyframe using animation. | 09-18-2014 |
20140267306 | CONTENT AWARE TEXTURE MAPPING ON DEFORMABLE SURFACES - A method is disclosed for reducing distortions introduced by deformation of a surface with an existing parameterization. In one embodiment, the distortions are reduced over a user-specified convex region in texture space ensuring optimization is locally contained in areas of interest. A distortion minimization algorithm is presented that is guided by a user-supplied rigidity map of the specified region. In one embodiment, non-linear optimization is used to calculate the axis-aligned deformation of a non-uniform grid specified over the region's parameter space, so that when the space is remapped from the original to the deformed grid, the distortion of the rigid features is minimized. Since grids require minimal storage and the remapping from one grid to another entails minimal cost, grids can be precalculated for animation sequences and used for real-time texture space remapping that minimizes distortions on specified rigid features. | 09-18-2014 |
20140267307 | METHOD AND SYSTEM FOR VIEWING OF COMPUTER ANIMATION - Computer animation tools for viewing, in multiple contexts, the effect of changes to a computer animation are disclosed. An artist configures multiple visual displays in the user interface of a computer animation system. A visual display shows one or more frames of computer animation. An artist configures a visual display to reflect a specific context. For example, the artist may assign a particular virtual viewpoint of a scene to a particular visual display. Once visual displays are configured, the artist changes a configuration of the computer animation. For example, the artist may change the lighting parameters of a scene. In response, the visual displays show the visual effects of the configuration (e.g., lighting parameters) change under corresponding contexts (e.g., different virtual camera viewpoints). Using multiple visual displays, which may be displayed side-by-side, an artist can view the effects of her configuration changes in the various contexts. | 09-18-2014 |
20140267308 | ARBITRARY HIERARCHICAL TAGGING OF COMPUTER-GENERATED ANIMATION ASSETS - Systems and methods for using hierarchical tags to create a computer-generated animation are provided. The hierarchical tags may be used to organize, identify, and select animation assets in order to configure animation parameters used to render a computer-generated image. The hierarchical tags may be used to display representations of animation assets for selection. A hierarchy based on the hierarchical tags may be represented by a tree structure. The hierarchical tags may be used as part of a rule to partition animation assets. In this way, the hierarchical tags may advantageously be used to identify, organize, and select animation assets and perform animation processes. | 09-18-2014 |
20140267309 | RENDER SETUP GRAPH - Systems and methods for rendering an image using a render setup graph are provided. The render setup graph may be used to configure and manage lighting configuration data as well as external processes used to render the computer-generated image. The render setup graph may include a dependency graph having nodes interconnected by edges along which objects and object configuration data may be passed between nodes. The nodes may be used to provide a source of objects and object configuration data, configure visual effects of an object, partition a set of objects, call external processes, perform data routing functions within the graph, and the like. In this way, the render setup graph may advantageously be used to organize configuration data and execution of processes for rendering an image. | 09-18-2014 |
20140267310 | Coloring Kit For Capturing And Animating Two-Dimensional Colored Creation - A digital template animation kit is provided for generating a three-dimensional animation corresponding to a captured two-dimensional template. In embodiments, a template animation kit includes a template portfolio having template designs for coloring in by a user. A computing device executing a template animation kit application, such as a digital fashion portfolio kit application, may then capture an image of the user-completed drawings on each template design. In some embodiments, capturing an image of a template design includes identifying a coloring figure identifier, an upper page guide identifier, and a lower page guide identifier. In further embodiments, the captured images of two-dimensional template designs are applied to three-dimensional digital templates for animation within a digital template animation environment. | 09-18-2014 |
20140267311 | INTERACTING WITH USER INTERFACE VIA AVATAR - Embodiments are disclosed that relate to interacting with a user interface via feedback provided by an avatar. One embodiment provides a method comprising receiving depth data, locating a person in the depth data, and mapping a physical space in front of the person to a screen space of a display device. The method further comprises forming an image of an avatar representing the person, outputting to a display an image of a user interface comprising an interactive user interface control, and outputting to the display device the image of the avatar such that the avatar faces the user interface control. The method further comprises detecting a motion of the person via the depth data, forming an animated representation of the avatar interacting with the user interface control based upon the motion of the person, and outputting the animated representation of the avatar interacting with the control. | 09-18-2014 |
20140292768 | TECHNIQUES FOR DISPLAYING AN ANIMATED CALLING CARD - According to various exemplary embodiments, a communication request from a caller is received at a mobile device associated with a callee. A relationship between the caller and the callee is determined. Animation rule information is accessed, where the animation rule information describes a plurality of animation rules corresponding to a plurality of relationships. Thereafter, a display of an animation is generated via a user interface in the mobile device of the callee, based on a specific animation rule in the animation rule information that corresponds to the relationship between the caller and the callee. | 10-02-2014 |
20140292769 | Method, Apparatus and Computer Program Product for Generating Animated Images - In accordance with an example embodiment a method, apparatus and computer program product are provided. The method comprises facilitating a selection of a portion in at least one multimedia frame among a plurality of multimedia frames having a first resolution. The plurality of multimedia frames are partitioned into static layers and dynamic layers based on the selection of the portion. An animated image effect is configured from the dynamic layers of the plurality of multimedia frames. The method further includes performing generation of at least one dynamic frame having a second resolution from the dynamic layers, and at least one static frame having the second resolution from the static layers. The second resolution is configured to be greater than the first resolution. An animated image having the second resolution is generated based on the at least one dynamic frame and the at least one static frame. | 10-02-2014 |
20140300610 | SYSTEM AND METHOD FOR USING TIME RE-MAPPING FOR COMPUTER-GENERATED ANIMATION - Time re-mapping is used for computer-generated animation. A time re-mapper re-maps at least a portion of an animation's reference timeline to produce a different timeline. In this way, the re-mapping may enable a re-mapped timeline to be generated that is non-linear. The re-mapped timeline may then be employed by animation generation logic that generates some effect in the animation as a function of time. For instance, the re-mapped time may be used (e.g., in place of the animation's reference timeline) by tweening logic for generating tween frames in the animation. Thus, the re-mapping may be employed to effectively produce such animation effects as easing for the generated tween frames. | 10-09-2014 |
20140300611 | WEB AND NATIVE CODE ENVIRONMENT MODULAR PLAYER AND MODULAR RENDERING SYSTEM - A system can include a modular player configured to receive input data, analyze the input data, and provide first output data. The system can further include a scene graph module configured to receive the first output data from the modular player, allocate a hierarchy structure based on the first output data, and provide second output data. The system can further include a modular renderer configured to receive the second output data and provide third output data as a visual representation. | 10-09-2014 |
20140300612 | METHODS FOR AVATAR CONFIGURATION AND REALIZATION, CLIENT TERMINAL, SERVER, AND SYSTEM - Provided are a method for avatar configuration, a method for avatar realization, a client terminal, a server, and a system for avatar management. The method for avatar configuration may include: outputting, at a client terminal, an avatar model for the user to configure when the client terminal receives a request from the user to configure an avatar; obtaining, at the client terminal, configuration data of the avatar model, the configuration data comprising bone movement data and decoration data; and performing, at the client terminal, an encoding process on the configuration data, and forming avatar data of the user. The way of configuring an avatar can be extended, and the avatar can be customized. Therefore, the representation of the avatar can meet actual requirements of the user to exactly represent the image that the user wants to show. | 10-09-2014 |
20140306965 | MULTI-DIRECTIONAL VIRTUAL PAGE NAVIGATION ANIMATION - Architecture that enables a page/content animation user experience where new pages and/or new content can be exposed in any one of multiple directions using an animated unfolding behavior. The unfolding behavior can be performed on a page or piece of content, between pages or pieces of content, on a diagonal, and in other directions, and can be performed based on a fold axis that is visually presented as being at a forward position or a rear position, along with corresponding visual effects. One or more pages of content are presented in a viewport of a display, and an interaction gesture is detected relative to the one or more pages. An animation component animates the unfolding behavior at an unfolding location relative to the one or more pages in the viewport to expose new pages. The unfolding behavior is enabled in response to the interaction gesture. | 10-16-2014 |
20140320504 | Virtual Re-Animation - Exemplary embodiments of the invention provide a method, apparatus and computer readable memory for presenting an interactive virtual rendition of a historical person. The method includes compiling a pre-fetch database about the historical person comprising one or more writings, audio and video records of or about the historical person, and presenting a multimedia virtual rendering of the historical person, the rendering comprising at least audio content and at least one of personal mannerisms and personality traits derived from the pre-fetch database as attributed to the historical person. In this embodiment, the method also comprises, in response to real-time inputs from interaction with the virtual rendering of the historical person, using at least the pre-fetch database to compile a reply to the real-time inputs, the compiled reply comprising at least one of audio content and personal mannerisms that are selected and virtually rendered in dependence on the real-time inputs. | 10-30-2014 |
20140320505 | GREYSCALE ANIMATION - Greyscale animation. In accordance with a first method embodiment, an animation region of an electronic display is determined. The animation region is set to display a flash-mitigating background color pattern. An animation sequence is displayed in the animation region. The flash-mitigating background color pattern may include a checkerboard pattern of alternating grey colored picture elements. | 10-30-2014 |
20140320506 | MANIPULATING, CAPTURING, AND CONSTRAINING MOVEMENT - A method is disclosed for manipulating the movement of a user's body to achieve a task according to specific rules. This method can be utilized in various training, medical and rehabilitation applications. Another method for capturing the movement of a user's body is disclosed. This method can be used in different gaming, filmmaking and computer applications. Also, a method is disclosed for constraining the movement of a user's body according to an imaginary or virtual object. This method can be employed in several virtual reality and augmented reality applications. | 10-30-2014 |
20140327680 | AUTOMATED VIDEO LOOPING WITH PROGRESSIVE DYNAMISM - Various technologies described herein pertain to generating a video loop. An input video can be received, where the input video includes values at pixels over a time range. An optimization can be performed to determine a respective input time interval within the time range of the input video for each pixel from the pixels in the input video. The respective input time interval for a particular pixel can include a per-pixel loop period and a per-pixel start time of a loop at the particular pixel within the time range from the input video. Moreover, an output video can be created based upon the values at the pixels over the respective input time intervals for the pixels in the input video. | 11-06-2014 |
20140327681 | ANIMATION CONTROL METHODS AND SYSTEMS - Animation control methods and systems. In one embodiment, a method to control animations includes receiving data representing content of a page (e.g. a web page), detecting, from the data, whether the page includes animated content, determining whether to halt execution of the detected animated content, and halting execution of the animated content if a determination to halt was made. In one implementation, the content can be configured into a document object model (DOM) and decisions to halt or not to halt can be made on a node-by-node basis within the DOM. In one implementation, the animated content can be allowed to execute for a shortened duration (e.g. in order to allow a user to see it) and then is halted. | 11-06-2014 |
20140340409 | GENERATING PHOTO ANIMATIONS - Implementations generally relate to generating photo animations. In some implementations, a method includes receiving a plurality of photos from a user. The method also includes selecting photos from the plurality of photos that meet one or more predetermined similarity criteria. The method also includes generating an animation using the selected photos. | 11-20-2014 |
20140347368 | NAVIGATION SYSTEM WITH INTERFACE MODIFICATION MECHANISM AND METHOD OF OPERATION THEREOF - A method of operation of a navigation system includes: aggregating context information for capturing a current context of a user; and modifying a navigation avatar based on the context information for displaying on a device. | 11-27-2014 |
20140347369 | METHOD AND DEVICE FOR DISPLAYING CHANGED SHAPE OF PAGE - Exemplary embodiments disclose a method and device for displaying a changed shape of a page. The method includes: receiving a user touch input on the page; calculating a virtual touch force which acts on a first node on the page based on the user touch input; calculating a virtual spring force which acts on the first node by at least one virtual spring which is connected to the first node based on the calculated virtual touch force; calculating a virtual rod force which acts on the first node by at least one virtual rod which is connected to the first node based on the calculated virtual touch force; and moving the first node based on the virtual touch force, the virtual spring force and the virtual rod force. | 11-27-2014 |
20140347370 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING COMPUTER PROGRAM PRODUCT - An information processing device, method and computer program product provide mechanisms for making a moving photograph. The information processing apparatus includes a moving area detector configured to detect a moving area in images of a processing target image group. The processing target image group includes a base image and a plurality of reference images. A display controller causes the base image to be displayed along with each of the plurality of reference images in succession. | 11-27-2014 |
20140354653 | DISPLAY DEVICE AND COMPUTER - A display device converts an animation file into first binary data in a data format which can be processed by a first graphics library of a first display, the binary data including a DL, and converts the converted first binary data into second binary data in a data format which can be processed by a second graphics library of a second display. | 12-04-2014 |
20140362091 | ONLINE MODELING FOR REAL-TIME FACIAL ANIMATION - Embodiments relate to a method for real-time facial animation, and a processing device for real-time facial animation. The method includes providing a dynamic expression model, receiving tracking data corresponding to a facial expression of a user, estimating tracking parameters based on the dynamic expression model and the tracking data, and refining the dynamic expression model based on the tracking data and estimated tracking parameters. The method may further include generating a graphical representation corresponding to the facial expression of the user based on the tracking parameters. Embodiments pertain to a real-time facial animation system. | 12-11-2014 |
20140368511 | INTERACTIVE VISUALIZATION FOR HIGHLIGHTING CHANGE IN TOP AND RELATIVE RANKING - Displaying a visualization of ranked elements over a selected dimension. The method includes determining a user selection of a ranking function for a plurality of elements. The ranking function defines a core value to be ranked. The method further includes determining a dimension over which the core value of the ranking function output can change. The method further includes animating a relevant number of the elements over time. Time values of the animation correspond to values of the dimension. Animating includes displaying elements in prominence corresponding to the result of the output of the ranking function. | 12-18-2014 |
20140375656 | MULTI-LAYER ANIMATION ENVIRONMENT - A computer-controlled method can include an electronic display visually presenting a digital character and multiple layers within an animation environment, the digital character initially “residing” within a first one of the layers. Responsive to a user “sliding” the digital character in a certain direction, the digital character may “move” to a second one of the layers within the environment. | 12-25-2014 |
20150022532 | Hierarchical Motion Blur Rasterization - Motion blur rasterization may involve executing a first test for each plane of a tile frustum. The first test is a frustum plane versus moving bounding box overlap test, where planes bounding a moving primitive are overlap tested against a screen tile frustum. A second test, executed after the first test for primitive edges against tile corners, is a tile corner versus moving edge overlap test. The corners of the screen space tile are tested against a moving triangle edge in two-dimensional homogeneous space. | 01-22-2015 |
20150035838 | ANIMATIONS - At least certain embodiments of the present disclosure include a method for animating a display region, windows, or views displayed on a display of a device. The method includes starting at least two animations. The method further includes determining the progress of each animation. The method further includes completing each animation based on a single timer. | 02-05-2015 |
20150042662 | SYNTHETIC AUDIOVISUAL STORYTELLER - A method of animating a computer generation of a head and displaying the text of an electronic book, such that the head has a mouth which moves in accordance with the speech of the text of the electronic book to be output by the head and a word or group of words from the text is displayed while simultaneously being mimed by the mouth, | 02-12-2015 |
20150049093 | Preloading Animation Files In A Memory Of A Client Device - A digital magazine presents content items to a user including one or more animation files. An animation file includes a plurality of frames that each has a variable display duration. To improve presentation of an animation file, the number of frames of the animation file that are preloaded into a memory of the client device on which the animation file is presented is determined based on contextual features describing computing resources available to the client device and on the display duration of frames of the animation file subsequent to a currently displayed frame of the animation file. Additionally, an animation file may be selected for preloading and display from a plurality of animation files based on a ranking of the animation files. | 02-19-2015 |
20150054834 | GENERATING MOBILE-FRIENDLY ANIMATIONS - Systems and methods are disclosed for generating mobile-friendly animations. In one implementation, a processing device receives a first animation in a first format, the first animation including one or more graphical components. The processing device processes the first animation to identify, with respect to the first animation, one or more animation instructions. The processing device generates, based on the first animation and the one or more animation instructions, a second animation in a second format, the second animation including (a) one or more components that correspond to the one or more graphical components and (b) one or more animation instructions. | 02-26-2015 |
20150054835 | CHANGING VISUAL CONTENT COMMUNICATION - Techniques for presenting changing visual content, including video, animation and so on, as an overlay are discussed. Changing visual content, included in a visual presentation, may be identified from other visual elements included in the visual presentation. The changing visual content may be manipulated based on available resources associated with presenting the changing visual content as an overlay for a client. | 02-26-2015 |
20150062130 | LOW POWER DESIGN FOR AUTONOMOUS ANIMATION - Devices, methods, and non-transitory media for controlling a microdisplay are described. A device includes a microdisplay; and a display controller for the microdisplay. The display controller includes at least one frame buffer configured to store a multi-frame animation; and control logic configured to control operation of the microdisplay in response to signals generated by a host controller of the mobile electronic device, the control logic comprising executable instructions to display the animation on the microdisplay by: commencing display of the animation when signals representing a start command generated by the host controller are detected, and repeatedly displaying the animation in the absence of detecting further signals representing commands generated by the host controller. | 03-05-2015 |
20150062131 | RUN-TIME TECHNIQUES FOR PLAYING LARGE-SCALE CLOUD-BASED ANIMATIONS - Various of the disclosed embodiments relate to systems and methods for providing animated multimedia, e.g. animated shows, to an audience over a network. Particularly, some embodiments provide systems and methods for generating and providing audio, animation, and other experience-related information so that users may readily experience the content in a seamless manner (e.g., as an audience member watching a show, playing a video game, etc.). Various embodiments animate “to the audience” based, e.g., on what content the audience is consuming. The animations may be generated in real-time from constituent components and assets in response to user behavior. | 03-05-2015 |
20150062132 | MULTI-CHARACTER AUTHORING INTERFACE FOR LARGE SCALE ANIMATIONS - Various of the disclosed embodiments relate to systems and methods for providing animated multimedia, e.g. animated shows, to an audience over a network. Particularly, some embodiments provide systems and methods for generating and providing audio, animation, and other experience-related information so that users may readily experience the content in a seamless manner (e.g., as an audience member watching a show, playing a video game, etc.). Various embodiments animate “to the audience” based, e.g., on what content the audience is consuming. The animations may be generated in real-time from constituent components and assets in response to user behavior. | 03-05-2015 |
20150077421 | CREATING A CINEMAGRAPH - An apparatus receives first image data that is based on image data captured by a first camera and second image data that is based on image data captured by a second camera. The apparatus then creates a cinemagraph using the first image data for a static part of the cinemagraph and using the second image data for an animated part of the cinemagraph. Another apparatus could provide at least one of the first and second image data based on a detected corresponding element. | 03-19-2015 |
20150084967 | DEPTH MAP MOVEMENT TRACKING VIA OPTICAL FLOW AND VELOCITY PREDICTION - Techniques for efficiently tracking points on a depth map using an optical flow are disclosed. In order to optimize the use of optical flow, isolated regions of the depth map may be tracked. The sampling regions may comprise a 3-dimensional box (width, height and depth). Each region may be “colored” as a function of depth information to generate a “zebra” pattern as a function of depth data for each sample. The disclosed techniques may provide for handling optical flow tracking when occlusion occurs by utilizing a weighting process for application of optical flow vs. velocity prediction to stabilize tracking. | 03-26-2015 |
20150109309 | UNIFIED POSITION BASED SOLVER FOR VISUAL EFFECTS - A method for simulating visual effects is disclosed. The method comprises modeling each visual effect within a simulation as a set of associated particles with associated constraints applicable thereto. It also comprises predicting first velocities and first positions of a plurality of particles being used to simulate a visual effect based on an external force applied to the plurality of particles. Next, it comprises identifying a set of neighboring particles for each of the plurality of particles. The method also comprises solving a plurality of constraints related to the visual effect, wherein each of the plurality of constraints is solved for the plurality of particles in parallel. Lastly, responsive to the solving, the method comprises determining second velocities and second positions for the plurality of particles. | 04-23-2015 |
20150130816 | COMPUTER-IMPLEMENTED METHODS AND SYSTEMS FOR CREATING MULTIMEDIA ANIMATION PRESENTATIONS - A computer-implemented method and system are provided for creating a multimedia animation presentation. The method includes the steps of: (a) receiving a plurality of visual content items; (b) analyzing the visual content items; (c) generating a set of technical instructions in real time for deterministically rendering a multimedia animation presentation from the visual content items based on a given theme; and (d) transmitting the technical instructions and the visual content items to a browser or mobile app on a user device to be executed by the browser or the mobile app to render the multimedia animation presentation and play it to the user on the user device. | 05-14-2015 |
20150138209 | SYSTEM AND METHOD FOR REAL-TIME POSE-BASED DEFORMATION OF CHARACTER MODELS - Systems and methods for animating a character model by deforming the character model based on poses. Embodiments may contain a modeling component in which a user may create a character model that contains a rig representing animation controls applied to the model, and geometric/graphic parameters for graphically rendering the model. The user also may create directed graphs that contain nodes representing operations that act on the character model and directional connections representing data flow between nodes. The embodiments may contain a compiling component that converts a directed graph into a sequence of instructions that perform the operations denoted at the nodes. The embodiments provide tools and methods to reduce redundancies in the sequence of instructions, producing an optimized version of the instruction sequence. The resulting instructions are then convertible into machine code for running on a video game device or for loading into a plug-in of a graphic rendering engine. | 05-21-2015 |
20150294493 | TECHNIQUES FOR PROVIDING CONTENT ANIMATION - Techniques for enabling content animation in substantially real time are disclosed. In one embodiment, a method for content animation in substantially real time includes providing content for display including at least one image, detecting an interaction of a user with respect to the at least one image, determining a boundary within the image comprising a contour of an object represented in the at least one image, applying a set of movement constraints to the object within the determined boundary, and enabling animation of a portion of the at least one image corresponding to the object in the displayed content based at least in part upon the applied movement constraints. The animation may occur in response to a user selecting a portion of the object and moving the object within the selected image. | 10-15-2015 |
20150302627 | APPARATUS AND METHOD FOR PERFORMING MOTION CAPTURE USING A RANDOM PATTERN ON CAPTURE SURFACES - A method is described comprising: applying a random pattern to specified regions of an object; tracking the movement of the random pattern during a motion capture session; and generating motion data representing the movement of the object using the tracked movement of the random pattern. | 10-22-2015 |
20150310579 | COMPOSITOR SUPPORT FOR GRAPHICS FUNCTIONS - Apparatuses, methods and storage medium associated with operating a graphics application are disclosed herein. In embodiments, an apparatus may include a general purpose processor, a graphics processor, and memory configured to hold a graphics command buffer and a compositor command buffer. The apparatus may further include a compositor configured to: insert a plurality of viewport commands associated with a plurality of graphics functions into the compositor command buffer, and copy a plurality of graphics commands of the graphics functions from the graphics command buffer to the compositor command buffer. The graphics functions may be associated with a graphics application, and share a common context with the compositor. Other embodiments may be described and claimed. | 10-29-2015 |
20150310659 | PERIOPERATIVE MOBILE COMMUNICATION SYSTEM AND METHOD - An embodiment provides a mobile application that animates change information in a way that specifically indicates a change in workflow information for various users. This animation of change information permits users, which are often busy healthcare professionals, to be quickly apprised of relevant changes to workflow status. The mobile application also allows users to communicate change information, e.g., for updating the status of a workflow item, which may then be propagated throughout a network, including mobile devices. | 10-29-2015 |
20150325026 | Methods and Systems for Adjusting Animation Duration - A device that includes one or more processors may determine a configuration of a display region of the device. The device may also receive a request to perform an animation of a virtual object within the display region. The request may be indicative of a given duration for the animation based on the animation being performed within a given display region having a given configuration. The device may also modify the given duration to determine an adjusted duration for the animation based on a comparison between the configuration and the given configuration. The device may also perform the animation within the display region based on the animation having the adjusted duration. | 11-12-2015 |
20150325030 | SCALE SEPARATION IN HAIR DYNAMICS - Techniques are disclosed for accounting for features of computer-generated dynamic or simulation models that exist at different scales. Some examples of dynamic or simulation models may include models representing hair, fur, strings, vines, tails, or the like. In various embodiments, features at different scales in a complex dynamic or simulation model can be treated differently when rendered and/or simulated. | 11-12-2015 |
20150339840 | Dynamic Splitting of Content - Methods and systems for dynamically splitting content are disclosed. In some embodiments, content may be received that includes one or more elements to be animated. It may be determined that a size of at least one element of the one or more elements to be animated exceeds a threshold. The at least one element having the size that exceeds the threshold may be split into a plurality of sub-elements. A transform of at least one of the sub-elements may be modified. | 11-26-2015 |
20150339841 | LAYOUT ANIMATION PANEL - Layout animation is provided that automatically plays in response to a change in layout on UI platforms that typically require animations to be defined before the layout is calculated. Developers are enabled to specify how one or more elements should animate via animation values that are relative to an unknown initial layout and an unknown final layout. When a property change event that triggers animation of an element occurs, the initial layout and the final layout of the element and its child elements are calculated. The animations are then scheduled to interpolate the changes in layout. | 11-26-2015 |
20150348223 | PERFORMANCE CONTROL FOR CONCURRENT ANIMATIONS - The embodiments set forth a technique for targeted scaling of the voltage and/or frequency of hardware components included in a mobile computing device. One embodiment involves independently analyzing the individual frame rates of each animation within a user interface (UI) of a mobile computing device instead of analyzing the frame rate of the UI as a whole. This can involve establishing, for each animation being displayed within the UI, a corresponding performance control pipeline that generates a control signal for scaling a performance mode of the hardware components (e.g., a Central Processing Unit (CPU)) included in the mobile computing device. In this manner, the control signals generated by the performance control pipelines can be aggregated to produce a control signal that causes a power management component to scale the performance mode(s) of the hardware components. | 12-03-2015 |
20150356347 | METHOD FOR ACQUIRING FACIAL MOTION DATA - A method for acquiring facial motion data includes a video reference and a timing cue to guide and instruct an actor performing a facial expression. The timing cue may include a video component and/or an audio component. | 12-10-2015 |
20150363954 | OPTIMIZED SERIAL TEXT DISPLAY FOR CHINESE AND RELATED LANGUAGES - Various embodiments are disclosed for optimizing serial display of text comprising Chinese characters (which appear in Chinese and in other languages). Some embodiments comprise parsing a Chinese text into display elements for serial display, each display element comprising a short phrase (generally one or two words) that is five or fewer characters in length. In some embodiments, this includes parsing the text into sentences and words and then analyzing sequences of words as part of parsing the text into short phrases (generally one or two words in length) for creating display elements that are five or fewer Chinese characters in length. In some embodiments, various aspects of the text are analyzed to determine whether and how to vary the display time of a display element from a default display time. In some embodiments, phrases are displayed such that an optimal recognition position of the displayed phrase is located at a fixed display location. These and other embodiments are more fully described herein. | 12-17-2015 |
20150363959 | SEAMLESS REPRESENTATION OF VIDEO AND GEOMETRY - Processes for reviewing and editing a computer-generated animation are provided. In one example process, multiple images representing segments of a computer-generated animation may be displayed. In response to a selection of one or more of the images, geometry data associated with the corresponding segment(s) of computer-generated animation may be accessed. An editable geometric representation of the selected segment(s) of computer-generated animation may be displayed based on the accessed geometry data. In some examples, previously rendered representations and/or geometric representations of the same or other segments of the computer-generated animation may be concurrently displayed adjacent to, overlaid with, or in any other desired manner with the displayed geometric representation of the selected segment(s) of computer-generated animation. | 12-17-2015 |
20150363960 | TIMELINE TOOL FOR PRODUCING COMPUTER-GENERATED ANIMATIONS - A method of creating a computer-generated animation uses a graphical user interface including a two-dimensional array of cells. The array has a plurality of rows associated with visual characteristics of a computer-generated character and a plurality of columns associated with frames of the animation. The array includes a first cell associated with a first visual characteristic and a first frame. A first view of the array is displayed in which the first cell has a first width and includes a key frame indicator that indicates that a designated value is associated with the first visual characteristic for the first frame. A second view is displayed in which the first cell has a second width and includes an element value indicator. The second width is greater than the first width, and the element value indicator represents the value associated with the first visual characteristic. | 12-17-2015 |
20150363961 | INFORMATION PROCESSING APPARATUS, DATA DIVISION METHOD, AND DATA DIVISION PROGRAM - An information processing apparatus includes an operation unit, and a control unit that displays, on a display unit, a data image representing the contents of temporally continuous data along a time axis. Here, when a gesture operation for cutting the data image perpendicularly to the time axis is performed through the operation unit, the control unit divides the data at a position on the time axis in accordance with the position cut by the gesture operation. | 12-17-2015 |
20150371426 | MOTION COVERS - In one embodiment, a method comprising, by one or more computing devices: receiving, from another computing device, a resulting media signal file comprising a first portion of a media signal file comprising an audio track, and a second portion of another media signal file comprising one or more frames of a graphic art image; and extracting and displaying the second portion of the resulting media signal file, wherein the one or more frames of the graphic art image stream in a first sequence and the one or more frames of the graphic art image appear to be moving. | 12-24-2015 |
20150379906 | SYSTEMS AND METHODS FOR RULE-BASED ANIMATED CONTENT OPTIMIZATION - At least one aspect of the present disclosure describes a computer-implemented system for optimization of animated content. The system includes a rule management module, a content generation module, and a content evaluation module. The content generation module is operative to generate an animated content configuration in accordance with a set of rules on content generation. The animated content configuration includes an initial configuration and a transition and is designed with a particular optimization objective. The content evaluation module is operative to evaluate content performance on reaching the particular optimization objective based on data acquired when a piece of animated content assembled from the animated content configuration is displayed. The rule management module is operative to amend the set of rules based on the evaluated content performance. | 12-31-2015 |
20150381937 | FRAMEWORK FOR AUTOMATING MULTIMEDIA NARRATIVE PRESENTATIONS - Disclosed herein are technologies for implementing an automated multimedia narrative presentation to one or more users. In some implementations, the user selects and views an annual presentation report from a presented listing of digital multimedia narratives. Contents of the annual presentation report may be derived from stored information in the database and the derived contents are subsequently mapped to a humanlike animation for delivery to the one or more users. | 12-31-2015 |
20160019708 | Armature and Character Template for Motion Animation Sequence Generation - Embodiments of the invention are directed to an animation kit including a template page with at least one template design, an armature that moves between at least a first position and a second position, and an animation application that generates an animated segment corresponding to the template design and at least one pose of the armature. In further embodiments, a method for generating an animated segment is provided. In another embodiment, a system for generating an animated sequence includes a template design and an application that receives an image of the template design and animates at least one three-dimensional image corresponding to the captured template design. | 01-21-2016 |
20160027198 | ANIMATED AUDIOVISUAL EXPERIENCES DRIVEN BY SCRIPTS - In an embodiment, a computerized method comprises receiving a meta-language file comprising a conversion of a script file in a natural language format, the script file including a plurality of natural language statements; interpreting, by a first computing device, the meta-language file to execute at least a first portion of the meta-language file; dynamically generating and displaying, on the first computing device, one or more visually animated graphical elements in accordance with the execution of at least the first portion of the meta-language file. | 01-28-2016 |
20160027202 | DISPLAYING METHOD, ANIMATION IMAGE GENERATING METHOD, AND ELECTRONIC DEVICE CONFIGURED TO EXECUTE THE SAME - A method of playing an animation image, the method including: obtaining a plurality of images; displaying a first image of the plurality of images; detecting a first event as a trigger to play the animation image for a first object of the first image; and playing the animation image for the first object using the plurality of images. | 01-28-2016 |
20160035061 | GENERATING AND UTILIZING DIGITAL AVATAR DATA FOR ONLINE MARKETPLACES - Disclosed are a system comprising a computer-readable storage medium storing at least one program and a computer-implemented method for digital avatars. An interface module receives a request message to determine measurements of a user. A graphics engine sub-module accesses a first set of data that is indicative of locations in a first image of a user. The locations are points of the user's body in the first image. The graphics engine sub-module accesses a second set of data that is indicative of a first physical-space measurement of the user. A computational sub-module determines, based at least partly on the locations and the first physical-space measurement, an estimate of a second physical-space measurement of the user. | 02-04-2016 |
20160035119 | METHOD AND APPARATUS FOR CONTROLLING DISPLAY AND COMPUTER PROGRAM FOR EXECUTING THE METHOD - A display control method includes displaying at least a portion of a page, in which one or more content regions including contents are arranged, on a screen image; recognizing a scrolling operation with respect to the page; and scrolling the page based on the scrolling operation and applying an animation with respect to content included in a content region exposed on the screen image as the page is scrolled. | 02-04-2016 |
20160035120 | THREE DIMENSIONAL ANIMATION OF A PAST EVENT - Methods and systems are disclosed for rendering a three-dimensional animation of a past event. A request is received, at a processor, for re-creation of a past event associated with a job site. Input data is accessed, at the processor, regarding the past event associated with the job site, wherein the input data pertains to past movements and lifts of at least one lifting device associated with the job site. A three-dimensional (3D) animation is generated, at the processor, of the past event involving the past movements of the at least one lifting device. The 3D animation is displayed, on a display, depicting the past event, wherein the displaying comprises playback controls for controlling the displaying. | 02-04-2016 |
20160035122 | Visual Function Targeting Using Varying Contrasting Areas - A solution for targeting a visual function using varying contrasting areas is provided. An animation including a changing figure can be generated. The changing figure can include contrasting areas having attributes that change substantially continually during the animation. For example, a location of the contrasting areas within the changing figure can be changed to create an appearance of motion of the contrasting areas within the changing figure. The shape attributes can be determined based on a target visual function, a target performance level of the target visual function, and a plurality of display attributes of a display environment for an observer. The animation can be provided for display to the observer, and an indication of whether the observer is able to perceive the changes can be received and used to assess a performance level of the visual function. | 02-04-2016 |
20160035123 | CUSTOMIZABLE ANIMATIONS FOR TEXT MESSAGES - A method and system for transforming simple user input into customizable animated images for use in text-messaging applications. | 02-04-2016 |
20160042548 | FACIAL EXPRESSION AND/OR INTERACTION DRIVEN AVATAR APPARATUS AND METHOD - Apparatuses, methods and storage medium associated with animating and rendering an avatar are disclosed herein. In embodiments, an apparatus may include a facial mesh tracker to receive a plurality of image frames, detect facial action movements of a face and head pose gestures of a head within the plurality of image frames, and output a plurality of facial motion parameters and head pose parameters that depict facial action movements and head pose gestures detected, all in real time, for animation and rendering of an avatar. The facial action movements and head pose gestures may be detected through inter-frame differences for a mouth and an eye, or the head, based on pixel sampling of the image frames. The facial action movements may include opening or closing of a mouth, and blinking of an eye. The head pose gestures may include head rotation such as pitch, yaw, and roll, head movement along the horizontal and vertical directions, and movement of the head closer to or farther from the camera. Other embodiments may be described and/or claimed. | 02-11-2016 |
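The inter-frame-difference detection can be illustrated compactly. A sketch assuming grayscale numpy frames, a fixed hypothetical eye region, and a hand-picked threshold; a real tracker would locate the eye per frame rather than sampling a fixed window:

```python
# Sample pixels in an eye region and flag a blink when the mean absolute
# difference between consecutive frames spikes. Region coordinates and
# threshold are hypothetical.
import numpy as np

EYE_REGION = (slice(120, 150), slice(200, 260))  # (rows, cols) in the frame
BLINK_THRESHOLD = 12.0                           # mean intensity change counted as a blink

def detect_blinks(frames):
    """Yield indices of frames where the eye region changes abruptly."""
    prev = None
    for i, frame in enumerate(frames):
        patch = frame[EYE_REGION].astype(np.float32)
        if prev is not None and np.abs(patch - prev).mean() > BLINK_THRESHOLD:
            yield i
        prev = patch

# Synthetic test: constant frames with one simulated blink at frame 5.
frames = [np.full((480, 640), 128, dtype=np.uint8) for _ in range(10)]
frames[5] = frames[5].copy()
frames[5][EYE_REGION] = 40                       # eyelid darkens the sampled region
print(list(detect_blinks(frames)))               # -> [5, 6]: closing, then reopening
```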
20160048365 | MULTI-DISPLAY POWER-UP ANIMATION SYSTEMS AND METHODS - A portable computing device includes a user interface system including at least a first display and a second display, a memory storing a series of images, a user input device configured to receive an input from a user corresponding to changing a power state of the portable computing device, and a processing circuit coupled to the user interface system, the memory, and the user input device. The processing circuit is configured to receive the input from the user corresponding to changing the power state, determine based on the received input if the input satisfies a trigger condition, and in response to determining that the trigger condition is satisfied, display the series of images on the first display and the second display such that at least one image is displayed on the first display and the second display during a display sequence. | 02-18-2016 |
20160055663 | INTERACTIVE SLIDE DECK - According to an example, a series of video frames may be accessed, in which a first set of the video frames depicts segments of an entity moving relative to other entities in the video frames and in which a second set of the video frames depicts static content. In addition, the video frames in the first set of video frames may be generated into animated image files and the video frames in the second set of video frames into single image files. Furthermore, the animated image files and the single image files may be arranged into an interactive slide deck. | 02-25-2016 |
20160055664 | METHOD AND SYSTEM FOR ASSEMBLING ANIMATED MEDIA BASED ON KEYWORD AND STRING INPUT - One aspect of the invention is a method for automatically assembling an animation. According to this embodiment, the method includes accepting at least one input keyword relating to a subject for the animation and accessing a set of templates. In this embodiment, each template generates a different type of output, and each template includes components for display time, screen location, and animation parameters. The method also includes retrieving data from a plurality of websites or data collections using an electronic search based on the at least one input keyword and the templates, determining which retrieved data to assemble into the set of templates, coordinating assembly of data-populated templates to form the animation, and returning the animation for playback by a user. | 02-25-2016 |
20160063750 | Stop-Motion Video Creation From Full-Motion Video - In embodiments of stop-motion video creation from full-motion video, a video of an animation sequence is filmed with a video camera that captures an animation object and manipulations to interact with the animation object. Motion frames of the video are determined, where the motion frames depict motion as the manipulations to interact with the animation object. The motion frames may also depict other motion, other than the manipulations to interact with the animation object, where the other motion is also captured when the video is filmed. The motion frames that depict the motion in the video are discarded, leaving static frames that depict the animation object without any detectable motion. A frame sequence of the static frames can then be generated as a stop-motion video that depicts the animation object appearing to move, or to be created, without the manipulations. | 03-03-2016 |
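The keep-or-discard decision reduces to motion detection between consecutive frames. A minimal sketch, assuming grayscale numpy frames and a hypothetical threshold; the patent's notion of "detectable motion" is not specified, so simple frame differencing stands in for it:

```python
# Frames whose mean absolute difference from the previous frame exceeds a
# threshold are treated as manipulation/motion frames and dropped; the
# remaining static frames form the stop-motion sequence.
import numpy as np

MOTION_THRESHOLD = 5.0   # hypothetical mean-intensity change that counts as motion

def stop_motion_frames(frames):
    """Keep only frames with no detectable motion relative to the previous frame."""
    static, prev = [], None
    for frame in frames:
        f = frame.astype(np.float32)
        if prev is not None and np.abs(f - prev).mean() <= MOTION_THRESHOLD:
            static.append(frame)      # scene is still: part of the stop-motion result
        prev = f
    return static
```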
20160063751 | ANIMATION ENGINE FOR BLENDING COMPUTER ANIMATION DATA - Computer-generated images are generated by evaluating point positions of points on animated objects in animation data. The point positions of the points are used by an animation system to determine how to blend animated sequences or frames of animated sequences in order to create realistic moving animated characters and animated objects. The methods of blending are based on determining distances or deviations between corresponding points and using blending functions with varying blending windows and blending functions that can vary from point to point on the animated objects. | 03-03-2016 |
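The per-point blending can be sketched directly. The window-length rule and the easing below are assumptions; the abstract only says the blending window and function may vary from point to point with the deviation between corresponding points:

```python
# Blend two animation clips point by point: each point gets its own blend
# window whose length grows with the deviation between the clips at that
# point, so points that disagree strongly blend over more frames.
import numpy as np

def blend_clips(clip_a, clip_b, base_window=8, frames_per_unit=4):
    """Blend clip_a into clip_b.

    clip_a, clip_b -- arrays of shape (frames, points, 3)
    Returns an array of the same shape, equal to clip_a at frame 0 and to
    clip_b once each point's window has elapsed.
    """
    n_frames, n_points, _ = clip_a.shape
    # Per-point deviation at the blend boundary drives the window length.
    deviation = np.linalg.norm(clip_a[0] - clip_b[0], axis=1)     # (points,)
    windows = base_window + frames_per_unit * deviation           # (points,)
    out = np.empty_like(clip_a)
    for t in range(n_frames):
        w = np.clip(t / windows, 0.0, 1.0)                        # per-point progress
        w = w * w * (3.0 - 2.0 * w)                               # smoothstep easing
        out[t] = (1.0 - w)[:, None] * clip_a[t] + w[:, None] * clip_b[t]
    return out

a = np.zeros((24, 5, 3))
b = np.ones((24, 5, 3))
out = blend_clips(a, b)
print(out[0, 0], out[-1, 0])   # starts at clip_a, ends at clip_b
```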
20160063989 | NATURAL HUMAN-COMPUTER INTERACTION FOR VIRTUAL PERSONAL ASSISTANT SYSTEMS - Technologies for natural language interactions with virtual personal assistant systems include a computing device configured to capture audio input, distort the audio input to produce a number of distorted audio variants, and perform speech recognition on the audio input and the distorted audio variants. The computing device selects a result from a large number of potential speech recognition results based on contextual information. The computing device may measure a user's engagement level by using an eye tracking sensor to determine whether the user is visually focused on an avatar rendered by the virtual personal assistant. The avatar may be rendered in a disengaged state, a ready state, or an engaged state based on the user engagement level. The avatar may be rendered as semitransparent in the disengaged state, and the transparency may be reduced in the ready state or the engaged state. Other embodiments are described and claimed. | 03-03-2016 |
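The engagement-to-transparency mapping is easy to sketch. The dwell thresholds and alpha values below are hypothetical; the abstract gives the three states and the transparency behavior but no numbers:

```python
# Gaze samples from an eye tracker are reduced to a dwell fraction, which
# selects one of three avatar states, each with its own opacity.

STATE_ALPHA = {"disengaged": 0.3, "ready": 0.7, "engaged": 1.0}

def avatar_state(gaze_on_avatar, ready_level=0.2, engaged_level=0.6):
    """Map per-sample booleans (is the user looking at the avatar?) to a state."""
    if not gaze_on_avatar:
        return "disengaged"
    dwell = sum(gaze_on_avatar) / len(gaze_on_avatar)
    if dwell >= engaged_level:
        return "engaged"
    if dwell >= ready_level:
        return "ready"
    return "disengaged"

samples = [True, True, False, True, True, True, False, True]
state = avatar_state(samples)
print(state, "alpha =", STATE_ALPHA[state])   # engaged alpha = 1.0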
20160065992 | EXPORTING ANIMATIONS FROM A PRESENTATION SYSTEM - A user input mechanism is displayed within a presentation system that allows a user to specify a certain portion of a selected slide (in a slide presentation), that has animations applied to it, that is to be exported in a selected export format. Information describing the specified portion of the selected slide, and information describing the animations applied to that portion, is obtained. An export file is generated with the specified portions of the slide, and the corresponding animations, in the selected export format. | 03-03-2016 |
20160071302 | SYSTEMS AND METHODS FOR CINEMATIC DIRECTION AND DYNAMIC CHARACTER CONTROL VIA NATURAL LANGUAGE OUTPUT - A method for executing cinematic direction and dynamic character control via natural language output is provided. The method includes generating a first set of instructions for animation of characters and a second set of instructions for animation of environments; extracting a first set of dialogue elements from a conversant input received in an affective objects module of the processing circuit; extracting a second set of dialogue elements from a natural language system output; analyzing the first and second sets of dialogue elements by an analysis module in the processing circuit for determining emotional content data used to generate an emotional content report; analyzing the first and second sets of dialogue elements by the analysis module in the processing circuit for determining duration data used to generate a duration report; and animating the characters and the environments based on the emotional content report and the duration report. | 03-10-2016 |
20160078661 | ANIMATION ARRANGEMENT - An animation arrangement for a vehicle is provided. The animation arrangement has a display device configured to display an animation based on an instruction set, a storage device configured to store a first instruction set and a second instruction set for displaying the same animation on the display device, and a calculating device configured to select one of the first and second instruction sets for displaying an animation on the display device based on a load parameter of the calculating device. | 03-17-2016 |
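The selection logic itself is a one-liner. A sketch with hypothetical instruction-set contents and load threshold, which the abstract does not specify:

```python
# The full-fidelity instruction set is used when the calculating device has
# headroom; a cheaper set that renders the same animation is used under load.

FULL_SET = ["draw_gradient", "blur_layer", "draw_needle", "animate_60fps"]
LITE_SET = ["draw_flat", "draw_needle", "animate_30fps"]

def select_instruction_set(load: float, threshold: float = 0.75):
    """Pick the instruction set for the next frame from a load parameter in [0, 1]."""
    return LITE_SET if load > threshold else FULL_SET

print(select_instruction_set(0.4))   # headroom: full set
print(select_instruction_set(0.9))   # loaded: lite set
```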
20160078662 | TECHNIQUES AND WORKFLOWS FOR COMPUTER GRAPHICS ANIMATION SYSTEM - The disclosed implementations describe techniques and workflows for a computer graphics (CG) animation system. In some implementations, systems and methods are disclosed for representing scene composition and performing underlying computations within a unified generalized expression graph with cycles. Disclosed are natural mechanisms for level-of-detail control, adaptive caching, minimal re-compute, lazy evaluation, predictive computation and progressive refinement. The disclosed implementations provide real-time guarantees for minimum graphics frame rates and support automatic tradeoffs between rendering quality, accuracy and speed. The disclosed implementations also support new workflow paradigms, including layered animation and motion-path manipulation of articulated bodies. | 03-17-2016 |
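The caching, lazy-evaluation, and minimal-recompute behaviors can be shown in a toy dependency graph. This sketch models only those three properties; cycles, level-of-detail control, predictive computation, and progressive refinement from the abstract are not modeled:

```python
# A node caches its value and is re-evaluated only after one of its inputs
# has been invalidated (dirty propagation), giving lazy pull evaluation
# with minimal recompute.

class Node:
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs
        self.dependents = []
        self._cache, self._dirty = None, True
        for node in inputs:
            node.dependents.append(self)

    def invalidate(self):
        """Mark this node and everything downstream as needing recompute."""
        if not self._dirty:
            self._dirty = True
            for d in self.dependents:
                d.invalidate()

    def value(self):
        """Lazy pull: recompute only if dirty, otherwise serve the cache."""
        if self._dirty:
            self._cache = self.fn(*(n.value() for n in self.inputs))
            self._dirty = False
        return self._cache

def constant(v):
    return Node(lambda: v)

a, b = constant(2.0), constant(3.0)
total = Node(lambda x, y: x + y, a, b)
print(total.value())          # 5.0, computed
print(total.value())          # 5.0, served from cache
a.fn = lambda: 10.0
a.invalidate()
print(total.value())          # 13.0, recomputed after invalidation
```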
20160086366 | Social Identity Models for Automated Entity Interactions - One or more social interactive goals for an automated entity such as an avatar may be determined during a social interaction between the automated entity and a selected entity such as a human. Identity attributes of identity images from an identity model of the automated entity may be used to determine a set of behavioral actions the automated entity is to take for the determined goals. Paralanguage elements expressed for the automated entity via a user interface may be altered based on the determined set of behavioral actions. The automated entity may refer to a computer implemented automaton that simulates a human in the user interface of an interactive computing environment. By way of example, cybernetic goal-seeking behavior for an avatar may be implemented in accordance with an identity theory model. | 03-24-2016 |
20160086368 | Image Point of Interest Analyser with Animation Generator - An apparatus comprising a point of interest analyser configured to define at least one region within an image as an interest region and to determine a position associated with the at least one region; an audio track generator configured to determine at least one audio signal based on the position; and an animated presentation generator configured to generate an animated image comprising the at least one region and the at least one audio signal. | 03-24-2016 |
20160093084 | SUBSPACE CLOTHING SIMULATION USING ADAPTIVE BASES - A method of animation of surface deformation and wrinkling, such as on clothing, uses low-dimensional linear subspaces with temporally adapted bases to reduce computation. Full space simulation training data is used to construct a pool of low-dimensional bases across a pose space. For simulation, sets of basis vectors are selected based on the current pose of the character and the state of its clothing, using an adaptive scheme. Modifying the surface configuration comprises solving reduced system matrices with respect to the subspace of the adapted basis. | 03-31-2016 |
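The reduced solve at the heart of the method is a standard subspace projection. A sketch with synthetic data, where pose-based basis selection is stubbed out by a random orthonormal basis; the full system `A x = b` stands in for one implicit simulation step:

```python
# Project the full system into the subspace spanned by the selected basis
# vectors U, solve the small reduced system there, and map back.
import numpy as np

def reduced_solve(A, b, U):
    """Solve A x = b restricted to the column space of U (n x k, k << n)."""
    A_r = U.T @ A @ U              # k x k reduced system matrix
    b_r = U.T @ b                  # reduced right-hand side
    q = np.linalg.solve(A_r, b_r)  # subspace coordinates
    return U @ q                   # back to full space

rng = np.random.default_rng(1)
n, k = 200, 12
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)        # symmetric positive definite, as in implicit cloth steps
b = rng.standard_normal(n)
U, _ = np.linalg.qr(rng.standard_normal((n, k)))   # stand-in for a pose-adapted basis
x = reduced_solve(A, b, U)
print("residual in subspace:", np.linalg.norm(U.T @ (A @ x - b)))   # ~0
```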
20160104311 | ANIMATION FRAMEWORK - An animation framework for animating arbitrary changes in a visualization via morphing of geometries is provided. Geometry from a visualization is captured from before and after a change to the visualization, which is used to generate a series of frames to provide a smooth morphing animation of the change to the visualization. Transitional geometry representing a merged state between the initial geometry and the final geometry of the visualization is generated to build frames between the initial frame and the final frame. The morphing animation may be governed by a timing curve and may be built according to a display rate to ensure a smooth animation. | 04-14-2016 |
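The frame-building step reduces to interpolating captured geometry under a timing curve. A sketch assuming simple 2D vertex arrays as the before/after geometry and a cubic easing curve; the abstract names neither the curve shape nor the geometry representation:

```python
# Vertex positions captured before and after a visualization change are
# interpolated into a series of frames; progress is shaped by a timing
# (easing) curve and the frame count is derived from the display rate.
import numpy as np

def ease_in_out(t):
    """Cubic timing curve: slow start, fast middle, slow finish."""
    return t * t * (3.0 - 2.0 * t)

def morph_frames(before, after, duration_s=0.5, fps=60):
    """Yield per-frame geometry between two (points, 2) vertex arrays."""
    n = max(2, round(duration_s * fps))
    for i in range(n):
        t = ease_in_out(i / (n - 1))
        yield (1.0 - t) * before + t * after

bar = np.array([[0, 0], [10, 0], [10, 40], [0, 40]], dtype=float)       # initial geometry
tall_bar = np.array([[0, 0], [10, 0], [10, 90], [0, 90]], dtype=float)  # after the change
frames = list(morph_frames(bar, tall_bar))
print(len(frames), frames[0][2], frames[-1][2])   # 30 frames, [10. 40.] -> [10. 90.]
```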
20160104432 | Dynamic Balloon Display Device and Method for Use Thereof - A balloon display device configured to create displays representative of digital images. The device may comprise a processor configured to transmit instructions for creating the display to a display panel, which comprises one or more balloon boxes. Each balloon box may comprise at least one balloon coupled to a pneumatic control. An electronic control can be configured to receive instructions for turning on or off specified valves to inflate or deflate the balloon. An associated method may comprise converting a digital image into readable instructions for creating a balloon display. The instructions, which may comprise commands for inflating or deflating a balloon, may then be transmitted to the display device and executed to create the display. | 04-14-2016 |
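The conversion from digital image to valve commands can be sketched as downsampling plus thresholding. The grid size, threshold, and the convention that dark pixels inflate are all assumptions; the abstract specifies only that the image becomes inflate/deflate instructions per balloon box:

```python
# Downsample a grayscale image to the panel's balloon grid; each cell
# becomes an inflate or deflate command for that balloon box.
import numpy as np

def balloon_commands(image, rows=8, cols=8, threshold=128):
    """Map a grayscale image to one valve command per balloon box."""
    h, w = image.shape
    commands = []
    for r in range(rows):
        for c in range(cols):
            cell = image[r * h // rows:(r + 1) * h // rows,
                         c * w // cols:(c + 1) * w // cols]
            action = "inflate" if cell.mean() < threshold else "deflate"
            commands.append((r, c, action))
    return commands

image = np.full((64, 64), 255, dtype=np.uint8)
image[16:48, 16:48] = 0                      # dark square becomes inflated balloons
print(balloon_commands(image)[:4])
```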
20160110907 | Animation Across Multiple Handheld Computing Devices - An animation system of multiple electronic devices having displays includes each device receiving the display number for displaying an animation. Each device also receives its relative position with respect to the other devices. On each device, the user selects an animation for display and determines a start time for displaying the animation. Upon a timer of the device reaching the start time, the device displays a portion of the selected animation. The displayed portion is based on the display number and the relative position of the device. The displayed portion changes while the animation moves across the devices. An advantage of the animation system is that the devices are not required to communicate with each other while receiving the display number and relative position, while selecting the animation, while determining the start time, and while displaying portions of the animation. | 04-21-2016 |
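The per-device portion computation needs no communication once each device knows the display count and its own position. A sketch, assuming the display number means the count of participating displays and the frame is split into equal-width column bands:

```python
# Each device crops its own slice of a wide animation frame from the total
# display count and its 0-based position; no inter-device messages needed.
import numpy as np

def device_slice(frame, num_displays, position):
    """Crop the columns of `frame` that device `position` shows."""
    width = frame.shape[1]
    left = width * position // num_displays
    right = width * (position + 1) // num_displays
    return frame[:, left:right]

wide_frame = np.arange(12).reshape(1, 12)        # stand-in for one animation frame
for pos in range(3):
    print(pos, device_slice(wide_frame, 3, pos)) # each device gets 4 of 12 columns
```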
20160125244 | SYSTEMS AND METHODS FOR PROVIDING PIXELATION AND DEPIXELATION ANIMATIONS FOR MEDIA CONTENT - Systems, methods, and non-transitory computer-readable media can detect a trigger to initiate at least one of a pixelation animation or a depixelation animation for a media content item. A set of pixelated images can be generated based on a source image associated with the media content item. Variable durations for presenting the set of pixelated images can be determined. The set of pixelated images can be presented, based on the variable durations, to produce the at least one of the pixelation animation or the depixelation animation. | 05-05-2016 |
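Generating the pixelated set with variable durations can be shown directly. The block sizes, timings, and the choice to shorten durations for coarser steps are hypothetical; the abstract requires only variable durations:

```python
# The source image is block-averaged at progressively coarser resolutions,
# and each step is shown for a variable duration (shorter as blocks grow,
# which reads as an accelerating dissolve).
import numpy as np

def pixelate(image, block):
    """Replace each block x block tile with its mean value."""
    h, w = image.shape[:2]
    h2, w2 = h - h % block, w - w % block           # crop to a multiple of block
    tiles = image[:h2, :w2].reshape(h2 // block, block, w2 // block, block)
    means = tiles.mean(axis=(1, 3))
    return np.repeat(np.repeat(means, block, axis=0), block, axis=1)

def pixelation_animation(image, blocks=(2, 4, 8, 16), base_ms=320):
    """Yield (frame, duration_ms) pairs; durations shrink for coarser steps."""
    for i, block in enumerate(blocks):
        yield pixelate(image, block), base_ms // (i + 1)

source = np.linspace(0, 255, 64 * 64).reshape(64, 64)
for frame, ms in pixelation_animation(source):
    print(frame.shape, ms)
```

Reversing the sequence yields the corresponding depixelation animation.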
20160155255 | METHOD AND APPARATUS FOR GENERATING AN IMAGE | 06-02-2016 |
20160163085 | AUTOMATED VIDEO LOOPING WITH PROGRESSIVE DYNAMISM - Various technologies described herein pertain to generating a video loop. An input video can be received, where the input video includes values at pixels over a time range. An optimization can be performed to determine a respective input time interval within the time range of the input video for each pixel from the pixels in the input video. The respective input time interval for a particular pixel can include a per-pixel loop period and a per-pixel start time of a loop at the particular pixel within the time range from the input video. Moreover, an output video can be created based upon the values at the pixels over the respective input time intervals for the pixels in the input video. | 06-09-2016 |
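The per-pixel search over loop parameters can be sketched with a simple seam cost. This scores pixels independently; the actual optimization also enforces spatial consistency between neighboring pixels, which is not modeled here:

```python
# For each pixel, every candidate (start, period) is scored by how closely
# the value one period after the start matches the value at the start, and
# the cheapest seam wins.
import numpy as np

def per_pixel_loops(video, min_period=8):
    """video: (time, height, width) array. Returns start and period maps."""
    T, H, W = video.shape
    best_cost = np.full((H, W), np.inf)
    best_start = np.zeros((H, W), dtype=int)
    best_period = np.full((H, W), min_period, dtype=int)
    for period in range(min_period, T):
        for start in range(T - period):
            # Seam cost: mismatch between the frame after the loop and its first frame.
            cost = np.abs(video[start + period] - video[start])
            better = cost < best_cost
            best_cost[better] = cost[better]
            best_start[better] = start
            best_period[better] = period
    return best_start, best_period

rng = np.random.default_rng(2)
t = np.arange(60, dtype=float)
video = np.sin(2 * np.pi * t / 20.0)[:, None, None] + rng.normal(0, 0.01, (60, 4, 4))
start, period = per_pixel_loops(video)
print(period[0, 0])   # close to the true 20-frame cycle (or a multiple of it)
```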
20160171738 | HIERARCHY-BASED CHARACTER RIGGING | 06-16-2016 |
20160171740 | REAL-TIME METHOD FOR COLLABORATIVE ANIMATION | 06-16-2016 |
20160171741 | Visual Function Targeting Using Randomized, Dynamic, Contrasting Features | 06-16-2016 |
20160180567 | CONTEXT-AWARE APPLICATION STATUS INDICATORS | 06-23-2016 |
20160180568 | SYSTEM FOR NEUROBEHAVIOURAL ANIMATION | 06-23-2016 |
20160180569 | IMAGE CREATION METHOD, A COMPUTER-READABLE STORAGE MEDIUM, AND AN IMAGE CREATION APPARATUS | 06-23-2016 |
20160180571 | FRAME REMOVAL AND REPLACEMENT FOR STOP-ACTION ANIMATION | 06-23-2016 |
20160180572 | IMAGE CREATION APPARATUS, IMAGE CREATION METHOD, AND COMPUTER-READABLE STORAGE MEDIUM | 06-23-2016 |
20160203630 | METHODS AND SYSTEMS FOR COMPUTER-BASED ANIMATION OF MUSCULOSKELETAL SYSTEMS | 07-14-2016 |
20160203631 | Chart Animation | 07-14-2016 |
20160379398 | WEARABLE DEVICE PROVIDING MICRO-VISUALIZATION - Embodiments are generally directed to a wearable device providing micro-visualization. A wearable electronic device may include a processor to process data; an analytic engine to analyze information relating to a received data point and to generate a micro-visualization based at least in part on the information, wherein the micro-visualization includes at least an image and an animation of the image; and one or more display screens to display the micro-visualization. | 12-29-2016 |
20160379399 | ANIMATIONS - At least certain embodiments of the present disclosure include a method for animating a display region, windows, or views displayed on a display of a device. The method includes starting at least two animations. The method further includes determining the progress of each animation. The method further includes completing each animation based on a single timer. | 12-29-2016 |
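Driving several animations from one timer is a small pattern. A sketch with hypothetical animation names and durations; each tick reads the shared clock once and derives every animation's progress from it:

```python
# One clock read per tick determines each animation's progress from its own
# start time and duration; an animation completes when progress reaches 1.
import time

class Animation:
    def __init__(self, name, start, duration):
        self.name, self.start, self.duration = name, start, duration
        self.done = False

    def tick(self, now):
        progress = min(1.0, (now - self.start) / self.duration)
        if progress >= 1.0:
            self.done = True
        return progress

t0 = time.monotonic()
animations = [Animation("fade", t0, 0.05), Animation("slide", t0, 0.12)]
while not all(a.done for a in animations):
    now = time.monotonic()                      # the single shared timer
    for a in animations:
        if not a.done:
            print(a.name, round(a.tick(now), 2))
    time.sleep(0.02)
```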
20170236318 | Animated Digital Ink | 08-17-2017 |
20180025506 | AVATAR-BASED VIDEO ENCODING | 01-25-2018 |
20180025526 | APPARATUS AND METHOD FOR PERFORMING MOTION CAPTURE USING A RANDOM PATTERN ON CAPTURE SURFACES | 01-25-2018 |
20180025527 | MULTIPOINT OFFSET SAMPLING DEFORMATION | 01-25-2018 |
20180025528 | MEDIA PLAY DEVICE AND METHOD FOR ACCELERATING ANIMATION PROCESSING | 01-25-2018 |
20190147640 | MOTION BIASED FOVEATED RENDERER | 05-16-2019 |