Document | Title | Date |
20080204458 | SYSTEM AND METHOD FOR TRANSFORMING DISPERSED DATA PATTERNS INTO MOVING OBJECTS - A motion-based method and system for rapidly identifying the presence of spatially dispersed or interwoven patterns in data and their deviation from a test model for the pattern includes transforming dispersed patterns into a single concentrated moving object, for which there is a characteristic, identifiable motion signature. The method may be used with data sets containing sharp peaks, such as frequency spectra, and other data sets. A roadmap of basic motion signatures is provided for reference, including multiple harmonic series, separation of odd and even harmonics, missing modes, sidebands and inharmonic patterns. The system and method may also be used with data stored in arrays and volumes. It remaps such data to show both high-resolution information and long range trends simultaneously for applications in nanoscale imaging. | 08-28-2008 |
20080252646 | ENHANCED MOTION BEHAVIOR FRAMEWORK - An enhanced motion behavior framework, in which an input is received from a user corresponding to an object to be animated and one or more animation parameters to be applied to the object, the one or more animation parameters are applied to the object, and an animation of the object is displayed based on the application of the one or more parameters to the object. | 10-16-2008 |
20080273038 | LOOPING MOTION SPACE REGISTRATION FOR REAL-TIME CHARACTER ANIMATION - A method for generating a looping motion space for real-time character animation may include determining a plurality of motion clips to include in the looping motion space and determining a number of motion cycles performed by a character object depicted in each of the plurality of motion clips. A plurality of looping motion clips may be synthesized from the motion clips, where each of the looping motion clips depicts the character object performing an equal number of motion cycles. Additionally, a starting frame of each of the plurality of looping motion clips may be synchronized so that the motion cycles in each of the plurality of looping motion clips are in phase with one another. By rendering an animation sequence using multiple passes through the looping motion space, an animation of the character object performing the motion cycles may be extended for an arbitrary length of time. | 11-06-2008 |
20080278497 | PROCESSING METHOD FOR CAPTURING MOVEMENT OF AN ARTICULATED STRUCTURE - The invention concerns a method of obtaining simulated parameters (…) | 11-13-2008 |
20080284784 | Image processing device, method, and program, and objective function - An image processing device that models, based on a plurality of frame images being results of time-sequential imaging of an object in motion, a motion of the object using a three-dimensional (3D) body configured by a plurality of parts is disclosed. The device includes: acquisition means for acquiring the frame images being the imaging results; estimation means for computing a first matrix of coordinates of a joint of the 3D body and a second matrix of coordinates of each of the parts of the 3D body, and generating a first motion vector; computing means for computing a second motion vector; and determination means for determining the 3D body. | 11-20-2008 |
20080297518 | Variable Motion Blur - Variable motion blur is created by varying the evaluation time used to determine the poses of objects according to motion blur parameters when evaluating a blur frame. A blur parameter can be associated with one or more objects, portions of objects, or animation variables. The animation system modifies the time of the blur frame by a function including the blur parameter to determine poses of objects or portions thereof associated with the blur parameter in a blur frame. The animation system determines the values of animation variables at their modified times, rather than at the time of the blur frame, and poses objects or portions thereof accordingly. Multiple blur parameters can be used to evaluate the poses of different portions of a scene at different times for a blur frame. Portions of an object can be associated with different blur parameters, enabling motion blur to be varied within an object. | 12-04-2008 |
20080297519 | ANIMATING HAIR USING POSE CONTROLLERS - The present invention deforms hairs from a reference pose based on one or more of the following: magnet position and/or orientation; local reference space position (e.g., a character's head or scalp); and several profile curves and variables. In one embodiment, after an initial deformation is determined, it is refined in order to simulate collisions, control hair length, and reduce the likelihood of hairs penetrating the surface model. The deformed hairs can be rendered to create a frame. This procedure can be performed multiple times, using different inputs, to create different hair deformations. These different inputs can be generated based on interpolations of existing inputs. Frames created using these deformations can then be displayed in sequence to produce an animation. The invention can be used to animate any tubular or cylindrical structure protruding from a surface. | 12-04-2008 |
20080309671 | AVATAR EYE CONTROL IN A MULTI-USER ANIMATION ENVIRONMENT - In a multi-participant modeled virtual reality environment, avatars are modeled beings that include moveable eyes creating the impression of an apparent gaze direction. Control of eye movement may be performed autonomously using software to select and prioritize targets in a visual field. Sequence and duration of apparent gaze may then be controlled using automatically determined priorities. Optionally, user preferences for object characteristics may be factored into determining priority of apparent gaze. Resulting modeled avatars are rendered on client displays to provide more lifelike and interesting avatar depictions with shifting gaze directions. | 12-18-2008 |
20080316213 | TOPOLOGY NAVIGATION AND CHANGE AWARENESS - An apparatus and method are described for displaying a topological graph that allows a user to navigate through a history of previous topology displays to increase the user's understanding and awareness of the state of the topology. In a preferred embodiment, a topology display mechanism receives state changes to a topology of a computer network and stores a sequence of graphs that reflect the changes that are made to the topology. The topology display mechanism also allows the user to step through the sequence of stored topology graphs using “video” type controls to change the display of the topology graphs. In other embodiments, the topology display mechanism displays the changes in the topology as a sequence of graphs that form an animation to give the user a graphical visualization of the changes from one topology graph in the sequence to the next. | 12-25-2008 |
20090009521 | Remote Monitor Having Avatar Image Processing Unit - The present invention relates to a remote monitor having an avatar image processing unit for accessing at least one home appliance so as to be able to communicate therewith, for representing information on operation progress and control of the home appliance with an avatar. The remote monitor includes a communication unit remotely accessible to at least one home appliance for receiving data through a corresponding communication unit in the home appliance, which has a system microcomputer for operating the entire system, and a remote display unit having an avatar image unit for displaying an avatar image according to the data received through the communication unit. | 01-08-2009 |
20090009522 | INFORMATION PROCESSING APPARATUS AND METHOD - In order to improve the operability of a walk-through system using panorama photography images, the system is provided with a view calculating unit for calculating view information in accordance with a user instruction from an operation unit, the view information including view position information and view direction information; a panorama image storing unit for storing a plurality of panorama images; a path storing unit for storing path information of the panorama images; an advancable path calculating unit for calculating advancable path information at a next dividing point in accordance with the view information and the path information; and an image generating unit for generating a cut-out image from the panorama image in accordance with the view information, generating a sign figure representative of the advancable path in accordance with the advancable path information, and synthesizing the cut-out image and the sign figure to generate a display image. | 01-08-2009 |
20090033667 | Method and Apparatus to Facilitate Depicting an Object in Combination with an Accessory - At a personally portable wireless two-way communicator (…) | 02-05-2009 |
20090040231 | Information processing apparatus, system, and method thereof - An information processing apparatus includes a bio-information obtaining unit configured to obtain bio-information of a subject; a kinetic-information obtaining unit configured to obtain kinetic information of the subject; and a control unit configured to determine an expression or movement of an avatar on the basis of the bio-information obtained by the bio-information obtaining unit and the kinetic information obtained by the kinetic-information obtaining unit and to perform a control operation so that the avatar with the determined expression or movement is displayed. | 02-12-2009 |
20090046102 | METHOD AND APPARATUS FOR SPAWNING PROJECTED AVATARS IN A VIRTUAL UNIVERSE - The present invention provides a computer implemented method and apparatus to project a projected avatar associated with an avatar in a virtual universe. A computer receives a command to project the avatar, the command having a projection point. The computer transmits a request to place a projected avatar at the projection point to a virtual universe host. The computer renders a tab associated with the projected avatar. | 02-19-2009 |
20090079745 | System and method for intuitive interactive navigational control in virtual environments - A human-computer-interface design scheme enables an intuitive navigation system that allows the user to indicate an intended direction and speed for traversing the virtual environment simply by appropriately positioning a tracker within the operating space. The interface system stores the boundary and center of an arbitrarily defined static zone within the operating space of the tracker. If the tracker is positioned inside this static zone, the system interprets it as no traversal being intended. When the user decides to move in a particular direction, the tracker need only be moved outside the static zone in that direction, and the computer calculates the intended traverse vector as the vector from the center of the static zone to the position of the tracker. The further the tracker is positioned from the static zone, the greater the speed of the intended traverse. | 03-26-2009 |
20090091575 | Method and apparatus for animating the dynamics of hair and similar objects - Animating strands (such as long hair), for movies, videos, etc. is accomplished using computer graphics by use of differential algebraic equations. Each strand is subject to simulation by defining its motion path, then evaluating dynamic forces acting on the strand. Collision detection with any objects is performed, and collision response forces are evaluated. Then for each frame a differential algebraic equations solver is invoked to simulate the strands. | 04-09-2009 |
20090109228 | TIME-DEPENDENT CLIENT INACTIVITY INDICIA IN A MULTI-USER ANIMATION ENVIRONMENT - A method for managing a multi-user animation platform is disclosed. A three-dimensional space within a computer memory is modeled. An avatar of a client is located within the three-dimensional space, the avatar being graphically represented by a three-dimensional figure within the three-dimensional space. The avatar is responsive to client input commands, and the three-dimensional figure includes a graphical representation of client activity. The client input commands are monitored to determine client activity. The graphical representation of client activity is then altered according to an inactivity scheme when client input commands are not detected. Following a predetermined period of client inactivity, the inactivity scheme varies non-repetitively with time. | 04-30-2009 |
20090109229 | REDUCING A DISPLAY QUALITY OF AN AREA IN A VIRTUAL UNIVERSE TO CONSERVE COMPUTING RESOURCES - Described herein are processes and devices that reduce a display quality of an area of a virtual universe to conserve computing resources. One of the devices described is a virtual resource conserver. The virtual resource conserver determines, or selects, an area in the virtual universe. A computing resource processes data for presenting the area in the virtual universe. The virtual resource conserver evaluates significance factors about the area to determine a significance of how the area is being used, or an extent to which an area is being viewed by an avatar. The virtual resource conserver reduces a display quality of the area based on the significance of how the area is being used or viewed. The virtual resource conserver thus reduces usage of the computing resource. | 04-30-2009 |
20090128568 | VIRTUAL VIEWPOINT ANIMATION - In one aspect, images of an event are obtained from a first video camera and a second camera, where the second camera captures images at a higher resolution than the first video camera. A particular image of interest is identified from the images obtained by the first video camera, e.g., based on an operator's command. A corresponding image which has been obtained by the second camera is then identified. The second image is used to depict virtual viewpoints which differ from the real viewpoints of the first and second cameras, such as by combining data from a textured 3D model of the event with data from the second image. In another aspect, a presentation includes images from a first camera, followed by an animation of different virtual viewpoints, followed by images from a second camera which has a different real viewpoint of the event than the first camera. | 05-21-2009 |
20090128569 | Image display program and image display apparatus - An image display method for displaying an image of a character in a virtual space configures a computer to execute: a parameter setting step for setting parameters concerning movement of the character; a reference unit time setting step for setting a reference unit time for stepwise change of the parameters; a minimal unit time setting step for setting a minimal unit time as an equally divided portion of the reference unit time; a basic parameter setting step for setting the parameters for each reference unit time and allocating the parameters to each minimal unit time; a smoothing step for setting smoothed parameters for each minimal unit time from the parameters allocated to the minimal unit time; and a display step for displaying the character according to the parameters set in the smoothing step for each minimal unit time. | 05-21-2009 |
20090153568 | LOCOMOTION GENERATION METHOD AND APPARATUS FOR DIGITAL CREATURE - A locomotion generation method for a digital creature includes: imaging and capturing movements of a creature placed on a base plate having a printed pattern; extracting body position information, body posture information, leg posture information, and footprint information of the creature by analyzing captured images; and generating creature movement by applying inverse kinematics to the body position information, the body posture information, the leg posture information, and the footprint information of the creature. The movements of the creature are imaged and captured by using two or more cameras without camera calibration. | 06-18-2009 |
20090153569 | Method for tracking head motion for 3D facial model animation from video stream - A head motion tracking method for three-dimensional facial model animation includes acquiring initial facial motion to be fit to an image of a three-dimensional model from an image inputted by a video camera; creating a silhouette of the three-dimensional model and projecting the silhouette; matching the silhouette created from the three-dimensional model with a silhouette acquired by a statistical feature point tracking scheme; and obtaining a motion parameter for the image of the three-dimensional model through motion correction using a texture to perform three-dimensional head motion tracking. In accordance with the present invention, natural three-dimensional facial model animation based on a real image acquired with a video camera can be performed automatically, thereby reducing time and cost. | 06-18-2009 |
20090160862 | Method and Apparatus for Encoding/Decoding - The present invention relates to a multimedia data encoding/decoding method and apparatus. The encoding method includes generating a data area including a plurality of media data areas; generating a plurality of track areas corresponding to the plurality of media data areas, respectively; and generating an animation area including at least one of grouping information on an animation effect, opacity effect information, size information on an image to which the animation effect is to be applied, and geometrical transform effect information. According to the present invention, the multimedia data encoding/decoding method and apparatus can construct a slide show from only a small amount of multimedia data, so the time taken to process and transmit the multimedia data can be reduced. | 06-25-2009 |
20090179901 | BEHAVIORAL MOTION SPACE BLENDING FOR GOAL-DIRECTED CHARACTER ANIMATION - A method for rendering frames of an animation sequence using a plurality of motion clips included in a plurality of motion spaces that define a behavioral motion space. Each motion space in the behavioral motion space depicts a character performing a different type of locomotion, such as running, walking, or jogging. Each motion space is pre-processed so that all the motion clips have the same number of periodic cycles. Registration curves are made between reference clips from each motion space to synchronize the motion spaces. | 07-16-2009 |
20090184969 | Rigless retargeting for character animation - Motion may be transferred between portions of two characters if those portions have a minimum topological similarity. The portions or structures of the source and target character topologies may be represented as one or more descriptive files comprised of a hierarchy of data objects including portion identifiers and functionality descriptors associated with portions of the respective source or target topology. To transfer motion between the source and target characters, the motion associated with the portions or structures of the source character identified by a subset of source portion identifiers having corresponding target portion identifiers is determined. This motion is retargeted to and attached to the corresponding portions or structures of the target character identifiers. As a result, the animation of the portions of the target character effectively animates the target character with motion that is similar to that of the source character. | 07-23-2009 |
20090201299 | Pack Avatar for Shared Inventory in a Virtual Universe - Generally speaking, systems, methods and media for providing a pack avatar for sharing inventory in a virtual universe are disclosed. Embodiments of a method may include receiving a request to create a pack avatar carrying one or more shared inventory items in a virtual universe and creating a pack avatar based on the received requests. Embodiments may include rendering the pack avatar in the virtual universe. Embodiments may also include, in response to receiving a request from a virtual universe user to borrow one or more shared inventory items carried by the pack avatar, accessing the one or more requested shared inventory items and rendering the one or more requested shared inventory items in the virtual universe. Further embodiments may include associating the pack avatar with a user and moving the pack avatar within the virtual universe. | 08-13-2009 |
20090207176 | FAST OCEANS AT NEAR INFINITE RESOLUTION - The surface of a body of water can be animated by deconstructing a master wave model into several layer models and then reconstructing the layer models to form an optimized wave model. A wave model is obtained, which describes the wave surfaces in a body of water. The wave model is comprised of a range of wave model frequencies over a given area. Primary, secondary, and tertiary layer models are constructed based on portions of the wave model frequencies. An optimized wave model is constructed by combining the primary, secondary, and tertiary layer models. A wave surface point location is determined within the given area. A wave height value is computed for the wave surface point location using the optimized wave model. The wave height value that is associated with the surface point location is stored. | 08-20-2009 |
20090251471 | GENERATION OF ANIMATED GESTURE RESPONSES IN A VIRTUAL WORLD - Responding to gestures made by third parties in a virtual world by receiving a gesture from a first avatar directed to at least one second avatar. For at least one second avatar, a reply gesture may be selected that corresponds to the received gesture. The reply gesture may be output for communication to the first avatar. | 10-08-2009 |
20090267950 | FIXED PATH TRANSITIONS - A computer implemented method, apparatus, and computer program product for fixed path transitions in a virtual universe environment. In one embodiment, tracking data that identifies a location of an avatar in relation to a range of an object in a virtual universe is received. The range comprises a viewable field. In response to the tracking data indicating an occurrence of a trigger condition associated with a fixed path rule, a fixed path defined by the fixed path rule is identified. A speed of movement and an orientation of the object associated with the fixed path rule is identified. Movement of the object along the fixed path defined by the fixed path rule is initiated. The object then moves along the fixed path at the identified speed and with the orientation associated with the fixed path rule. | 10-29-2009 |
20090267951 | Method for rendering fluid - A method for rendering fluid is provided. First, state information of a plurality of fluid particles is provided, wherein the state information records whether the fluid particles are located above or below a fluid surface and the interactions between the fluid particles and a terrain or dynamic objects. Then, whether to render the fluid particles in a direction facing the viewer or in a direction parallel to the flow direction is determined according to whether the fluid particles are located above or below the fluid surface. Next, the fluid particles are rendered as a plurality of two-dimensional metaballs according to the interactions between the fluid particles and the terrain or the dynamic objects, and these metaballs are stacked to reconstruct the fluid. | 10-29-2009 |
20090295807 | Real-Time Goal-Directed Performed Motion Alignment For Computer Animated Characters - A method for real-time, goal-directed performed motion alignment for computer animated characters. A sequence of periodic locomotion may be seamlessly aligned with an arbitrarily placed and rotated non-periodic performed motion. A rendering application generates a sampling of transition locations for transition from a locomotion motion space to a performed motion space. The sampling is parameterized by control parameters of the locomotion motion space. Based on the location and rotation of a goal location at which the performed motion is executed, a particular transition location may be selected to define a motion plan to which a performed motion sequence may then be appended. Advantageously, by utilizing a look-up of pre-computed values for the control parameters of the motion plan, the rendering application may minimize the computational cost of finding the motion plan to move the character to a location to transition to a performed motion. | 12-03-2009 |
20090295808 | Real-Time Goal-Directed Performed Motion Alignment For Computer Animated Characters - A method for real-time, goal-directed performed motion alignment for computer animated characters. A sequence of periodic locomotion may be seamlessly aligned with an arbitrarily placed and rotated non-periodic performed motion. A rendering application generates a sampling of transition locations for transition from a locomotion motion space to a performed motion space. The sampling is parameterized by control parameters of the locomotion motion space. Based on the location and rotation of a goal location at which the performed motion is executed, a particular transition location may be selected to define a motion plan to which a performed motion sequence may then be appended. Advantageously, by utilizing a look-up of pre-computed values for the control parameters of the motion plan, the rendering application may minimize the computational cost of finding the motion plan to move the character to a location to transition to a performed motion. | 12-03-2009 |
20090295809 | Real-Time Goal-Directed Performed Motion Alignment For Computer Animated Characters - A method for real-time, goal-directed performed motion alignment for computer animated characters. A sequence of periodic locomotion may be seamlessly aligned with an arbitrarily placed and rotated non-periodic performed motion. A rendering application generates a sampling of transition locations for transition from a locomotion motion space to a performed motion space. The sampling is parameterized by control parameters of the locomotion motion space. Based on the location and rotation of a goal location at which the performed motion is executed, a particular transition location may be selected to define a motion plan to which a performed motion sequence may then be appended. Advantageously, by utilizing a look-up of pre-computed values for the control parameters of the motion plan, the rendering application may minimize the computational cost of finding the motion plan to move the character to a location to transition to a performed motion. | 12-03-2009 |
20090309882 | LARGE SCALE CROWD PHYSICS - Systems and methods for creating autonomous agents or objects. Agents reactions to physical forces are modeled as springs. A signal representing a force or velocity change for one animation control is processed to produce realistic reaction effects. The signal may be filtered two or more times, each filter typically having a different time lag and/or filter width. The filtered signals are combined, with weightings, to produce an animation control signal. The animation control signal is then applied to the same or a different animation control to influence motion of the object or agent. | 12-17-2009 |
20090322763 | Motion Capture Apparatus and Method - Provided are an apparatus and a method of effectively creating real-time movements of a three-dimensional virtual character by use of a small number of sensors. More specifically, the motion capture method, which maps movements of a human body onto a skeleton model to generate movements of a three-dimensional (3D) virtual character, includes measuring the distance between a portion of the human body at which a measurement sensor is positioned and a reference position, along with the rotation angles of the portion, and estimating relative rotation angles and position coordinates of each portion of the human body by use of the measured distance and rotation angles. | 12-31-2009 |
20100013838 | COMPUTER SYSTEM AND MOTION CONTROL METHOD - A computer system and motion control method for naturally and smoothly moving a control target, such as a virtual actor, using a small amount of data, and for performing data setting for it efficiently. | 01-21-2010 |
20100020085 | METHOD FOR AVATAR WANDERING IN A COMPUTER BASED INTERACTIVE ENVIRONMENT - A method for avatar wandering in a computer-based interactive environment includes: for each avatar within a range of a current avatar, obtaining profiles of the user represented by the avatar; for each profile of the user represented by the avatar that has the same profile type as a profile of the user represented by the current avatar, comparing the profiles for matching data; computing a match score for the avatar based on the matching data; and moving the current avatar toward the avatar that has the greatest match score. | 01-28-2010 |
20100060648 | Method for Determining Valued Excursion Corridors in Virtual Worlds - A computer implemented method, computer program product, and a data processing system determine an excursion corridor within a virtual environment. A time-stamped snapshot of a location of at least one avatar within the virtual universe is recorded. An avatar tracking data structure is then updated. The avatar tracking data structure provides a time-based history of avatar locations within the virtual universe. A weighted density map is generated. The weighted density map is then correlated with virtual object locations. Each virtual object location corresponds to a virtual object. Excursion corridors are identified. The excursion corridor identifies frequently taken routes between the virtual object locations. Waypoints are identified. Each waypoint corresponds to a virtual object. Each waypoint is an endpoint for one of the excursion corridors. | 03-11-2010 |
20100060649 | AVOIDING NON-INTENTIONAL SEPARATION OF AVATARS IN A VIRTUAL WORLD - A method for avoiding non-intentional separation of avatars in a virtual world may include detecting a first avatar seeking to enter a first location and determining if a second avatar is related to the first avatar based on a first predetermined rule. The method may also include determining that the first and second avatars are seeking to enter the first location together. The method may further include determining whether to allow the first avatar and the second avatar to enter the first location based on a second predetermined rule. | 03-11-2010 |
20100060650 | MOVING IMAGE PROCESSING METHOD, MOVING IMAGE PROCESSING PROGRAM, AND MOVING IMAGE PROCESSING DEVICE - A moving image processing method includes: an operation item setting step of setting operation items to be operated on the moving image; a time interval setting step of setting which time interval in a reproducing period of the moving image should be defined as an interval in which the operation items that have been set are executable; a display area setting step of setting display areas for images for operations corresponding to the operation items that have been set; an image combining step of combining the images for operations corresponding to the operation items that have been set with the respective frame images, in accordance with the time interval setting step and the display area setting step; and an associating step of associating, with each combined frame image, information concerning the display areas of the images for operations in the frame image and information concerning the operation items, and storing each combined frame image and the associated information. | 03-11-2010 |
20100073383 | Cloth simulation pipeline - A cloth simulation pipeline calculates normals between a cloth image and a colliding object image. The maximum value normal may be used to resolve the collision between the object and cloth images. | 03-25-2010 |
20100079467 | TIME DEPENDENT VIRTUAL UNIVERSE AVATAR RENDERING - Methods, devices, program products and systems are disclosed for displaying multiple virtual universe avatar states. Each of a plurality of avatar states of a first avatar of a first virtual universe user is stored in a storage medium as a function of the time of each state. The first avatar is displayed in a current state to a second user of an engaging second avatar, the engaging instigating a selecting and a retrieving of a subset of the plurality of states from the storage medium, each of the subset states different from each other and from the current state. Selected subset states are visually displayed to the second user, each of the displayed states visually distinct from another and from the current state. The first avatar's current state is stored in the storage medium associated with the engagement. | 04-01-2010 |
20100085364 | Foot Roll Rigging - A system and method enable animators to efficiently pose character models' feet. An initial foot model position is received. The initial foot model position specifies a foot model contact point. One or more foot roll parameters are specified that change the relative angle between at least a portion of the foot model and an initial orientation of an alignment plane. Foot roll parameters specify the rotation of the foot model around foot model contact points. Foot roll parameters can include heel roll, ball roll, and toe roll, which specify the rotation of the foot model around contact points on the heel, ball, and toe, respectively, of a foot model. To maintain the position of the foot model contact point, the foot model position is adjusted based on the foot roll parameter. The repositioned foot model is realigned with the alignment plane, which restores contact at the foot model contact point. | 04-08-2010 |
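A heel, ball, or toe roll of this kind can be sketched as a rotation about the chosen contact point: because the pivot maps to itself, contact is preserved there. The 2D simplification and the function name below are illustrative assumptions, not the patent's implementation.

```python
import math

def roll_foot(points, pivot, angle):
    """Rotate 2D foot-model points about a contact-point pivot.

    The pivot is a fixed point of the rotation, so contact at that
    point (heel, ball, or toe) survives the roll -- the essence of a
    foot roll parameter.
    """
    c, s = math.cos(angle), math.sin(angle)
    px, py = pivot
    return [(px + c * (x - px) - s * (y - py),
             py + s * (x - px) + c * (y - py)) for x, y in points]
```

For example, a 90-degree heel roll about the origin leaves the heel point at the origin while the toe sweeps upward.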
20100134500 | APPARATUS AND METHOD FOR PRODUCING CROWD ANIMATION - An apparatus for producing crowd animation includes: a user-input controller for receiving from a user level of detail (LOD) of each individual in a picture of the crowd animation; a simulation controller for performing simulation of the crowd animation for a specific time period to update simulation information on each individual; and a display controller for displaying the picture of the crowd animation by using display information corresponding to the LOD of each individual, the display information being selected among the simulation information. The LOD of each individual indicates: displaying the individual with location information thereof only; displaying the individual with the location and model information thereof; or displaying the individual with the location, model and motion information thereof. The simulation information includes location information, model information and motion information of each individual. | 06-03-2010 |
20100134501 | DEFINING AN ANIMATION OF A VIRTUAL OBJECT WITHIN A VIRTUAL WORLD - A method of defining an animation of a virtual object within a virtual world, wherein the animation comprises performing, at each of a series of time points, an update that updates values for object attributes of the virtual object, the method comprising: allowing a user to define the update by specifying, on a user interface, a structure representing the update, wherein the structure comprises a plurality of items and one or more connections between respective items, wherein each item represents a respective operation that may be performed when performing the update and wherein a connection between two items represents that data output by the operation represented by one of those two items is input to the operation represented by the other of those two items; allowing the user to specify that the structure comprises one or more items in a predetermined category, the predetermined category being associated with a predetermined process such that an item belongs to the predetermined category if performing the respective operation represented by that item requires execution of the predetermined process, wherein said predetermined process may be executed at most a predetermined number of times at each time point; and applying one or more rules that (a) restrict how the user may specify the structure to ensure that performing the defined update does not require execution of the predetermined process more than the predetermined number of times, (b) do not require the user to specify that an item in the predetermined category is at a particular location within the structure relative to other items and (c) do not require the user to explicitly specify which operations need to be performed before execution of the predetermined process when performing the update nor which operations need to be performed after execution of the predetermined process when performing the update. | 06-03-2010 |
20100156911 | TRIGGERING ANIMATION ACTIONS AND MEDIA OBJECT ACTIONS - A request may be received to trigger an animation action in response to reaching a bookmark during playback of a media object. In response to the request, data is stored defining a new animation timeline configured to perform the animation action when playback of the media object reaches the bookmark. When the media object is played back, a determination is made as to whether the bookmark has been encountered. If the bookmark is encountered, the new animation timeline is started, thereby triggering the specified animation action. An animation action may also be added to an animation timeline that triggers a media object action at a location within a media object. When the animation action is encountered during playback of the animation timeline, the specified media object action is performed on the associated media object. | 06-24-2010 |
20100156912 | MOTION SYNTHESIS METHOD - A motion synthesis method includes: analyzing a character's gait in motion capture data; creating motion capture data at different speeds having the analyzed gait; and storing the motion capture data at different speeds in a motion capture database. The method further includes: designating restrictions of a sketch including a trajectory and a speed tag of a desired motion; searching and extracting motion capture data corresponding to the speed tag from the motion capture database; and creating a motion satisfying the trajectory through synthesis by blending the motion capture data extracted from the motion capture database. | 06-24-2010 |
20100182328 | METHOD TO ANIMATE ON A COMPUTER SCREEN A VIRTUAL PEN WHICH WRITES AND DRAWS - A method to animate on a computer screen a virtual pen which writes and draws on a virtual blackboard in order to simulate a real pen writing on a real blackboard. Graphemes and drawings ( | 07-22-2010 |
20100182329 | INFORMATION STORAGE MEDIUM, SKELETON MOTION CONTROL DEVICE, AND SKELETON MOTION CONTROL METHOD - A skeleton motion control device that controls the motion of a skeleton model in which a parent bone and a child bone are linked via a joint. A movable range setting section sets a movable range of the joint on a projection plane, the projection plane being a plane orthogonal to an axis that connects the center point of a sphere and a focus that is a given point on the sphere surface, the center point of the sphere being the joint. A coordinate transformation section projects a point on the sphere surface onto the projection plane based on the focus, the point indicating the direction of the child bone. A skeleton motion calculation section calculates the direction of the child bone with respect to the parent bone within the movable range set by the movable range setting section, based on the position of the point projected onto the projection plane. | 07-22-2010 |
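The projection described, from a focus on the sphere onto a plane through the center orthogonal to the center-focus axis, behaves like a stereographic projection. A minimal sketch, assuming the joint at the origin and the focus placed at the "south pole" (0, 0, -r); the function name is illustrative:

```python
def project_to_plane(p, r):
    """Project a point p = (x, y, z) on a sphere of radius r (centered
    at the joint) onto the plane z = 0 through the center, along the
    line from the focus at (0, 0, -r) through p.

    Undefined at the focus itself (z = -r), where the line never
    crosses the plane.
    """
    x, y, z = p
    t = r / (z + r)  # parameter where the focus-to-p line crosses z = 0
    return (t * x, t * y)
```

The "north pole" (the child-bone direction opposite the focus) maps to the plane's origin, and points on the equator map to themselves, so a movable range drawn on the plane corresponds to a cap of allowed child-bone directions on the sphere.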
20100194763 | User Interface for Controlling Animation of an Object - A user can control the animation of an object via an interface that includes a control area and a user-manipulable control element. In one embodiment, the control area includes an ellipse, and the user-manipulable control element includes an arrow. In yet another embodiment, the control area includes an ellipse, and the user-manipulable control element includes two points on the circumference of the ellipse. In yet another embodiment, the control area includes a first rectangle, and the user-manipulable control element includes a second rectangle. In yet another embodiment, the user-manipulable control element includes two triangular regions, and the control area includes an area separating the two regions. | 08-05-2010 |
20100201693 | SYSTEM AND METHOD FOR AUDIENCE PARTICIPATION EVENT WITH DIGITAL AVATARS - A system and method for capturing the voice and motion of a user and mapping the captured voice and motion to an avatar is disclosed. Other aspects include displaying the avatar in the virtual world of a movie or animation chosen by the user. | 08-12-2010 |
20100238182 | CHAINING ANIMATIONS - In applications that display a representation of a user, it may be reasonable to insert a pre-canned animation rather than animating a user's captured motion. For example, in a tennis swing, the ball toss and take back in a serve could be a pre-canned animation, whereas the actual forward swing may be mapped from the user's gestures. An animation of a user's gestures can be chained together into sequences with pre-canned animations, where animation blending techniques can provide for a smoother transition between the animation types. Techniques for blending animations, that may comprise determining boundaries and transition points between pre-canned animations and animations based on captured motion, may improve animation efficiency. Gesture history, including joint position, velocity, and acceleration, can be used to determine user intent, seed parameters for subsequent animations and game control, and determine the subsequent gestures to initiate. | 09-23-2010 |
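The transition between a pre-canned animation and captured motion can be sketched as a per-joint crossfade ramped over a short window of frames. The linear weighting and all names here are generic animation-blending assumptions, not the patent's specific method:

```python
def crossfade(canned_pose, captured_pose, alpha):
    """Per-joint linear blend: alpha = 0 returns the pre-canned pose,
    alpha = 1 the pose mapped from the user's captured motion."""
    return [(1 - alpha) * a + alpha * b
            for a, b in zip(canned_pose, captured_pose)]

def blend_transition(canned, captured, window):
    """Ramp alpha from 0 to 1 across a window of frames (window >= 2)
    so the switch between animation types has no visible pop."""
    return [crossfade(canned[i], captured[i], i / (window - 1))
            for i in range(window)]
```

In the tennis-serve example, the window would cover the frames where the pre-canned take-back hands control off to the gesture-mapped forward swing.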
20100259547 | WEB PLATFORM FOR INTERACTIVE DESIGN, SYNTHESIS AND DELIVERY OF 3D CHARACTER MOTION DATA - Systems and methods are described for animating 3D characters using synthetic motion data generated by motion models in response to a high level description of a desired sequence of motion provided by an animator. In a number of embodiments, the synthetic motion data is streamed to a user device that includes a rendering engine and the user device renders an animation of a 3D character using the streamed synthetic motion data. In several embodiments, an animator can upload a custom model of a 3D character or a custom 3D character is generated by the server system in response to a high level description of a desired 3D character provided by the user and the synthetic motion data generated by the generative model is retargeted to animate the custom 3D character. | 10-14-2010 |
20100277483 | Method and system for simulating character - A method and system for simulating a character is provided. The method of simulating a character includes: optimizing motion data by using displacement mapping and Proportional Derivative (PD) control; and performing controller training by using the optimized motion data and controlling a motion of the character. In this instance, the optimizing includes: generating a target motion by using the displacement mapping between an input motion and a displacement parameter; and generating a simulated motion by using the target motion and an objective function. | 11-04-2010 |
20100302257 | Systems and Methods For Applying Animations or Motions to a Character - A virtual character such as an on-screen object, an avatar, an on-screen character, or the like may be animated using a live motion of a user and a pre-recorded motion. For example, a live motion of a user may be captured and a pre-recorded motion such as a pre-recorded artist-generated motion, a pre-recorded motion of the user, and/or a programmatically controlled transformation may be received. The live motion may then be applied to a first portion of the virtual character and the pre-recorded motion may be applied to a second portion of the virtual character such that the virtual character may be animated with a combination of the live and pre-recorded motions. | 12-02-2010 |
20100302258 | Inverse Kinematics for Motion-Capture Characters - A method for a computer system comprising receiving a displacement for a first object model surface from a user determined in response to a first physical motion captured pose, determining a weighted combination of a first displacement group and a second displacement group from the displacement, wherein the first displacement group is determined from displacements between the first object model surface and a second object model surface, wherein the second object model surface is determined from displacements between a second physical motion captured pose, wherein the second displacement group is determined from displacements between the first object model surface and a third object model surface, wherein the third object model surface is determined from a third physical motion captured pose, determining a fourth object model surface from the first object model surface and the weighted combination, and displaying the fourth object model surface to the user on a display. | 12-02-2010 |
20100328319 | INFORMATION PROCESSOR AND INFORMATION PROCESSING METHOD FOR PERFORMING PROCESS ADAPTED TO USER MOTION - A positional data acquisition unit of an action detector acquires positional data indicating the position of an image of a light-emitting part of a light-emitting device held by a user in an image frame at each time step, and also acquires curve data for the head contour at each time step estimated as a result of visual tracking by a tracking processor. A history storage unit stores a history of the positional data for the image of the light-emitting part and the curve data for the head contour. A determination criteria storage unit stores the criteria for determining that a predefined action is performed, by referring to the time-dependent change in the relative position of the image of the light-emitting part in relation to the curve representing the head contour. An action determination unit determines whether the action is performed based on the actual data. | 12-30-2010 |
20110012903 | SYSTEM AND METHOD FOR REAL-TIME CHARACTER ANIMATION - A method for generating a motion sequence of a character object in a rendering application. The method includes selecting a first motion clip associated with a first motion class and selecting a second motion clip associated with a second motion class, where the first and second motion clips are stored in a memory. The method further includes generating a registration curve that temporally and spatially aligns one or more frames of the first motion clip with one or more frames of the second motion clip, and rendering the motion sequence of the character object by blending the one or more frames of the first motion clip with one or more frames of second motion clip based on the registration curve. One advantage of techniques described herein is that they provide for creating motion sequences having multiple motion types while minimizing or even eliminating motion artifacts at the transition points. | 01-20-2011 |
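The temporal side of such a registration curve can be sketched as a frame correspondence between the two clips, with the blend weight applied per aligned frame pair. Spatial (root transform) alignment is omitted for brevity, and all names are illustrative assumptions:

```python
def blend_clips(clip_a, clip_b, registration, weight):
    """Blend clip_a toward clip_b using a registration that gives, for
    each frame i of clip_a, the index of the temporally aligned frame
    of clip_b. weight = 0 keeps clip_a; weight = 1 yields clip_b's
    aligned frames."""
    return [[(1 - weight) * a + weight * b
             for a, b in zip(pose_a, clip_b[registration[i]])]
            for i, pose_a in enumerate(clip_a)]
```

Blending along aligned frame pairs rather than raw frame indices is what keeps footfalls and other cycle events in phase, avoiding artifacts at the transition points.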
20110043529 | INTERACTIVE ANIMATION - An interactive animation environment. The interactive animation environment includes at least one user-controlled object, and the animation method for providing this environment includes determining a position of the user-controlled object, defining a plurality of regions about the position, detecting a user input to move the position of the user-controlled object, associating the detected user input to move the position of the user-controlled object with a region in the direction of movement, and providing an animation of the user-controlled object associated with the mapped region. A system and controller for implementing the method is also disclosed. A computer program and computer program product for implementing the invention is further disclosed. | 02-24-2011 |
20110128292 | DYNAMICS-BASED MOTION GENERATION APPARATUS AND METHOD - A dynamics-based motion generation apparatus includes: a dynamics model conversion unit for automatically converting character model data into dynamics model data of a character to be subjected to a dynamics simulation; a dynamics model control unit for modifying the dynamics model data and adding or modifying an environment model; a dynamics motion conversion unit for automatically converting reference motion data of the character, which has been created by using the character model data, into dynamics motion data through the dynamics simulation by referring to the dynamics model data and the environment model; and a motion editing unit for editing the reference motion data to decrease a gap between reference motion data and dynamics motion data. The apparatus further includes a robot motion control unit for controlling a robot by inputting preset torque values to related joint motors of the robot by referring to the dynamics motion data. | 06-02-2011 |
20110148886 | METHOD AND SYSTEM FOR RECEIVING AN INDEXED LOOK-UP TABLE AND VIEWING A VECTOR ANIMATION SEQUENCE - A method for interactively viewing a vector animation sequence, including receiving an indexed look-up table that stores a plurality of local vector objects associated with tile regions of a first vector image, indicating a request for a desired portion of a second vector image, for display at a specified resolution, determining tile regions of a pre-processed vector image, wherein the pre-processed vector image includes a plurality of tile regions and a plurality of local vector objects, each local vector object being associated with one of the tile regions, requesting at least one tile region of the pre-processed vector image from a server computer, receiving local vector objects and local vector object indices, extracting local vector objects from the indexed look-up table according to the local vector object indices, and generating the desired portion of the second vector image using the received local vector objects and the extracted local vector objects. | 06-23-2011 |
20110181606 | AUTOMATIC AND SEMI-AUTOMATIC GENERATION OF IMAGE FEATURES SUGGESTIVE OF MOTION FOR COMPUTER-GENERATED IMAGES AND VIDEO - In an animation processing system, images viewable on a display are generated by a computer based on scene geometry obtained from computer-readable storage and animation data representing changes over time of scene geometry elements. Images can also be modified to include shading that is a function of the positions of objects at times other than the current instantaneous time of a frame render, such that the motion-effect shading suggests motion of at least one of the elements to a viewer of the generated images. Motion effects provide shading that varies based on depiction parameters and/or artist inputs: for at least some received animation data and received motion depiction parameters, a pixel color is rendered for at least one pixel based on motion-effect program output and at least some received scene geometry, such that the output contributes to features that would suggest the motion. | 07-28-2011 |
20110181607 | System and method for controlling animation by tagging objects within a game environment - A game developer can “tag” an item in the game environment. When an animated character walks near the “tagged” item, the animation engine can cause the character's head to turn toward the item, and mathematically computes what needs to be done in order to make the action look real and normal. The tag can also be modified to elicit an emotional response from the character. For example, a tagged enemy can cause fear, while a tagged inanimate object may cause only indifference or indifferent interest. | 07-28-2011 |
20110187728 | OPTO-MECHANICAL CAPTURE SYSTEM FOR INDIRECTLY MEASURING THE MOVEMENT OF FLEXIBLE BODIES AND/OR OBJECTS - An opto-mechanical motion capture system for indirectly measuring the movement of bodies and objects, mainly focused on joints of flexible materials, or those with deformations, which make instrumentation with rigid sensors such as potentiometers difficult. This invention consists of an image acquisition device or camera and a visualization bed in which a series of transmission cables convey to the visualization bed the movements generated in the flexible parts to be sensed. The camera is set in such a way that it is possible to capture the image of the transmission cables, enabling the determination of their displacement and thus of the sensed objects. The main object of this invention is to enable the measurement of the movements of the flexible parts of the human body in a simple, cheap and comfortable way for the user of the device. | 08-04-2011 |
20110193867 | METHOD AND APPARATUS FOR PRODUCING DYNAMIC EFFECT OF CHARACTER CAPABLE OF INTERACTING WITH IMAGE - A method for producing motion effects of a character capable of interacting with a background image in accordance with the characteristics of the background image is provided, including extracting the characteristics of the background image; determining a character to be provided with the motion effects in the background in accordance with the extracted characteristics of the background image; recognizing external signals including a user input; determining the motion of the character in accordance with the characteristics of the background image and the recognized external signals; and reproducing an animation for executing the motion of the character in the background image. | 08-11-2011 |
20110221755 | BIONIC MOTION - A camera that can sense motion of a user is connected to a computing system (e.g., video game apparatus or other type of computer). The computing system determines an action corresponding to the sensed motion of the user and determines a magnitude of the sensed motion of the user. The computing system creates and displays an animation of an object (e.g., an avatar in a video game) performing the action in a manner that is amplified in comparison to the sensed motion by a factor that is proportional to the determined magnitude. The computing system also creates and outputs audio/visual feedback in proportion to a magnitude of the sensed motion of the user. | 09-15-2011 |
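Amplification "by a factor that is proportional to the determined magnitude" can be sketched as scaling a sensed displacement by 1 plus a gain times its own magnitude, so small motions pass through nearly unchanged while large motions are exaggerated. The linear gain model and the 2D displacement are assumptions for illustration:

```python
import math

def amplify(displacement, gain):
    """Scale a sensed 2D displacement by 1 + gain * |displacement|,
    so the animated avatar's motion grows faster than the user's."""
    factor = 1.0 + gain * math.hypot(*displacement)
    return tuple(factor * d for d in displacement)
```

The same factor could drive the audio/visual feedback level, so louder or brighter feedback accompanies bigger sensed motions.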
20110267358 | ANIMATING A VIRTUAL OBJECT WITHIN A VIRTUAL WORLD - A method of animating a virtual object within a virtual world, wherein the virtual object comprises a plurality of object parts, wherein for a first object part there is one or more associated second object parts, the method comprising: at an animation update step: specifying a target frame in the virtual world; and applying control to the first object part, wherein the control is arranged such that the application of the control in isolation to the first object part would cause a movement of the first object part in the virtual world that reduces a difference between a control frame and the target frame, the control frame being a frame at a specified position and orientation in the virtual world relative to the first object part, wherein applying control to the first object part comprises moving the one or more second object parts within the virtual world to compensate for the movement of the first object part in the virtual world caused by applying the control to the first object part. | 11-03-2011 |
20110273457 | STABLE SPACES FOR RENDERING CHARACTER GARMENTS IN REAL-TIME - Techniques are disclosed for providing a learning-based clothing model that enables the simultaneous animation of multiple detailed garments in real-time. A simple conditional model learns and preserves key dynamic properties of cloth motions and folding details. Such a conditional model may be generated for each garment worn by a given character. Once generated, the conditional model may be used to determine complex body/cloth interactions in order to render the character and garment from frame-to-frame. The clothing model may be used for a variety of garments worn by male and female human characters (as well as non-human characters) while performing a varied set of motions typically used in video games (e.g., walking, running, jumping, turning, etc.). | 11-10-2011 |
20110298810 | MOVING-SUBJECT CONTROL DEVICE, MOVING-SUBJECT CONTROL SYSTEM, MOVING-SUBJECT CONTROL METHOD, AND PROGRAM - A moving-subject control device controls a motion of a moving subject based on motion data indicating the motion of the moving subject. The device includes an input unit which receives an input of attribute information indicating an attribute of the moving subject; a generation unit which, based on the attribute information received by the input unit, generates motion data for the user, that is, motion data for controlling a motion of the moving subject generated from the attribute information input by the user of the moving-subject control device; and a control unit which varies the motion of the moving subject for the user based on the motion data generated by the generation unit. | 12-08-2011 |
20110304632 | INTERACTING WITH USER INTERFACE VIA AVATAR - Embodiments are disclosed that relate to interacting with a user interface via feedback provided by an avatar. One embodiment provides a method comprising receiving depth data, locating a person in the depth data, and mapping a physical space in front of the person to a screen space of a display device. The method further comprises forming an image of an avatar representing the person, outputting to a display an image of a user interface comprising an interactive user interface control, and outputting to the display device the image of the avatar such that the avatar faces the user interface control. The method further comprises detecting a motion of the person via the depth data, forming an animated representation of the avatar interacting with the user interface control based upon the motion of the person, and outputting the animated representation of the avatar interacting with the control. | 12-15-2011 |
20110304633 | DISPLAY WITH ROBOTIC PIXELS - Techniques are disclosed for controlling robot pixels to display a visual representation of an input. The input to the system could be an image of a face, and the robot pixels deploy in a physical arrangement to display a visual representation of the face, and would change their physical arrangement over time to represent changing facial expressions. The robot pixels function as a display device for a given allocation of robot pixels. Techniques are also disclosed for distributed collision avoidance among multiple non-holonomic robots to guarantee smooth and collision-free motions. The collision avoidance technique works for multiple robots by decoupling path planning and coordination. | 12-15-2011 |
20120044251 | GRAPHICS RENDERING METHODS FOR SATISFYING MINIMUM FRAME RATE REQUIREMENTS - Methods and devices enable rendering of graphic images at a minimum frame rate even when processing resource limitations and rendering processing may not support the minimum frame rate presentation. While graphics are being rendered, a processor of a computing device may monitor the achieved frame rate. If the frame rate falls below a minimum threshold, the processor may note a current speed or rate of movement of the image and begin rendering less computationally complex graphic items. Rendering of less computationally complex items continues until the processor notes that the speed of rendered items is less than the noted speed. At this point, normal graphical rendering may be recommenced. The aspects may be applied to more than one type of less computationally complex item or rendering format. The various aspects may be applied to a wide variety of animations and moving graphics, as well as scrolling text, webpages, etc. | 02-23-2012 |
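The fallback logic can be sketched as a simple controller that steps down to a cheaper rendering-complexity level when the measured frame rate misses the minimum, and steps back toward full detail otherwise. The discrete level scheme and the absence of hysteresis are simplifying assumptions, not the patent's policy:

```python
def choose_detail_level(frame_times, min_fps, level, max_level):
    """Return the next rendering-detail level given recent per-frame
    render times (seconds): drop one level when the average measured
    frame rate falls below min_fps, otherwise recover one level."""
    fps = len(frame_times) / sum(frame_times)
    if fps < min_fps:
        return max(0, level - 1)
    return min(max_level, level + 1)
```

A production version would also track the rendered items' on-screen speed, as the abstract describes, so that full rendering resumes only once motion has slowed.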
20120092348 | SEMI-AUTOMATIC NAVIGATION WITH AN IMMERSIVE IMAGE - A View Track accompanying an immersive movie provides an automatic method of directing the user's region of interest (ROI) during the playback process of an immersive movie. The user is free to assert manual control to look around, but when the user releases this manual control, the direction of the ROI returns gradually to the automatic directions in the View Track. The View Track can also change the apparent direction of the audio from a mix of directional audio sources in the immersive movie, and the display of any metadata associated with a particular direction. A multiplicity of View Tracks can be created to allow a choice of different playback results. The View Track can consist of a separate Stabilization Track to stabilize the spherical image, for improving the performance of a basic Navigation Track for looking around. The recording of the View Track is part of the post production process for making and distributing an immersive movie for improving the user experience. | 04-19-2012 |
20120092349 | PROGRAM EXECUTION SYSTEM, PROGRAM EXECUTION DEVICE AND RECORDING MEDIUM AND COMPUTER EXECUTABLE PROGRAM THEREFOR - A program execution system has a program execution device with a controller operated by a user and a display on which images such as characters or players in a game are seen. To prevent an incorrect movement of a character on the display when switching, without additional steps by the user, from a scene viewed from one camera viewpoint to a scene viewed from another, the system has a computer-readable and executable program stored on a recording medium providing a character motion direction step: if, during the motion of a character on the screen, a switch is made from one scene to another, the direction of motion of the character in the second scene is maintained in coordination with the character's motion direction on a map in the first scene at least immediately before the switch. | 04-19-2012 |
20120139925 | System for Estimating Location of Occluded Skeleton, Method for Estimating Location of Occluded Skeleton and Method for Reconstructing Occluded Skeleton - A system for estimating a location of an occluded skeleton, a method for estimating a location of an occluded skeleton and a method for reconstructing an occluded skeleton are provided. The method for estimating a location of an occluded skeleton comprises the following steps: Firstly, a trace of a reference central point of a body is estimated according to a plurality of continuously moving images. Next, a human movement state is estimated according to the trace and a motion information of the continuously moving images free of skeleton occlusion. Then, a possible range of the occluded skeleton for maintaining human balance is calculated according to the human movement state. Afterwards, a current motion level of the occluded skeleton is predicted according to a historic motion information of the occluded skeleton. Lastly, the location of the occluded skeleton is estimated according to the current motion level and the possible range. | 06-07-2012 |
20120147014 | METHOD FOR EXTRACTING PERSONAL STYLES AND ITS APPLICATION TO MOTION SYNTHESIS AND RECOGNITION - Disclosed is a method for automatically extracting personal styles from captured motion data. The inventive method employs wavelet analysis to decompose the captured motion vectors of different actors into wavelet coefficients, and thus forms a feature vector by optimized selection, which is later used for identification purposes. When the inventive method is applied to process animation frames, the performance can be evaluated by a grouping and classification matrix without any correlation with the type of the motion. Also, even if the type of the motion is not stored in the database in advance, the motions of the actor can still be recognized by a learning module regardless of the type of the motions. | 06-14-2012 |
20120154409 | VERTEX-BAKED THREE-DIMENSIONAL ANIMATION AUGMENTATION - A method for controlling presentation of three dimensional (3D) animation includes rendering a 3D animation sequence including a 3D vertex-baked model which is derived from a 3D animation file including vertex data of every vertex for every 3D image frame in the 3D animation sequence. The 3D vertex-baked model includes a control surface that provides a best-fit 3D shape to vertices of the 3D vertex-baked model. The method further includes receiving a motion control input, and if the motion control input is received during an augmentation portion of the 3D animation sequence, deviating from a default posture of the control surface in accordance with the motion control input. | 06-21-2012 |
20120169740 | IMAGING DEVICE AND COMPUTER READING AND RECORDING MEDIUM - Provided are a display device and a non-transitory computer-readable recording medium. By comparing a priority of an animation clip corresponding to a predetermined part of an avatar of a virtual world with a priority of motion data and by determining data corresponding to the predetermined part of the avatar, a motion of the avatar in which motion data sensing a motion of a user of a real world is associated with the animation clip may be generated. | 07-05-2012 |
20120169741 | ANIMATION CONTROL DEVICE, ANIMATION CONTROL METHOD, PROGRAM, AND INTEGRATED CIRCUIT - An animation control device that can suppress reduction in the total quality of the animation to be displayed, and perform animation intended by an application developer is provided. The animation control device ( | 07-05-2012 |
20120212495 | User Interface with Parallax Animation - User interface animation techniques are described. In an implementation, an input having a velocity is detected that is directed to one or more objects in a user interface. A visual presentation is generated that is animated so a first object in the user interface moves in parallax with respect to a second object. The presentation is displayed so the first object appears to move at a rate that corresponds to the velocity. | 08-23-2012 |
20120229475 | Animation of Characters - An animation method in which a user directs the actions of characters on a virtual stage, rather than instructing every individual movement. Such a method of producing an animated video comprises providing a virtual stage; providing templates from which characters can be assembled, each character having a body and limbs, and the templates providing facial features and clothes with differing colours and shapes; providing objects that can be placed on the virtual stage; placing the objects and the characters on the virtual stage; instructing each character as to his emotional state, and as to any required movement; wherein each character continuously and automatically behaves in accordance with the specified emotional state. Instructions to a character about a desired body movement, such as stepping in one direction or another, or turning on the spot, or walking or running along a specific route, may be provided by a sectored base ring, the sectors displaying arrows that correspond to different steps; while dragging the base ring or a marker along a route across the virtual stage causes the character to follow that route, walking or running depending on how fast the marker had been moved. | 09-13-2012 |
20120293518 | DETERMINE INTENDED MOTIONS - It may be desirable to apply corrective data to aspects of a captured image or a user-performed gesture for display of a visual representation that corresponds to the corrective data. The captured motion may be any motion in the physical space that is captured by the capture device, such as a camera. Aspects of a skeletal or mesh model of a person, generated based on the image data captured by the capture device, may be modified prior to animation. The modification may be made to the model generated from image data that represents a target or a target's motion, including user gestures, in the physical space. For example, certain joints of a skeletal model may be readjusted or realigned. A model of a target may be modified by applying differential correction, magnetism principles, binary snapping, confining virtual movement to defined spaces, or the like. | 11-22-2012 |
20120306892 | MOBILE BALL TARGET SCREEN AND TRAJECTORY COMPUTING SYSTEM - A mobile target screen is described for ball game practicing and simulation. Two force sensors are mounted at each of the four corners of the frame which holds a target screen. Measurements from the force sensors are used to compute and display a representation of ball speed, the location of the ball on the target screen, and the direction of the ball motion. These parameters can be used to predict the shooting distance and the landing position of the ball. It also provides enough information to predict the trajectory of the ball which can be displayed on a video screen which communicates with the sensors through a wireless transceiver. | 12-06-2012 |
20130009965 | ANIMATION DISPLAY DEVICE - A converter | 01-10-2013 |
20130033500 | DYNAMIC COLLISION AVOIDANCE FOR CROWD SIMULATION OVER STRUCTURED PATHS THAT INTERSECT AT WAYPOINTS - One embodiment of the invention sets forth a technique for identifying and avoiding impending collisions between moving objects in an animation. Paths traversed by the moving objects intersect at pre-determined intersection points. As a moving object approaches an intersection point, a collision avoidance module determines whether the object is on course to collide with another moving object also approaching the intersection point. If a collision is detected, then the collision avoidance module modifies the speed of the moving object to avoid the collision. | 02-07-2013 |
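The speed-adjustment scheme in the abstract above can be sketched as follows. The `Agent` fields, the `safety_window` parameter, and the slow-the-later-arrival rule are illustrative assumptions, not details taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    distance_to_waypoint: float  # remaining distance along its path
    speed: float                 # distance units per frame

def avoid_collision(a: Agent, b: Agent, safety_window: float = 2.0) -> None:
    """If both agents would reach the shared waypoint within
    `safety_window` frames of each other, slow the later arrival
    so the arrival gap widens to at least the safety window."""
    ta = a.distance_to_waypoint / a.speed
    tb = b.distance_to_waypoint / b.speed
    if abs(ta - tb) < safety_window:
        later = a if ta >= tb else b
        sooner_time = min(ta, tb)
        later.speed = later.distance_to_waypoint / (sooner_time + safety_window)

a = Agent(distance_to_waypoint=10.0, speed=1.0)   # arrives at t = 10
b = Agent(distance_to_waypoint=11.0, speed=1.0)   # arrives at t = 11 -> too close
avoid_collision(a, b)                              # b is slowed, a is untouched
```

Because only speed is modified, each agent stays on its pre-determined path, which matches the abstract's decoupling of path planning from coordination.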
20130033501 | SYSTEM AND METHOD FOR ANIMATING COLLISION-FREE SEQUENCES OF MOTIONS FOR OBJECTS PLACED ACROSS A SURFACE - Embodiments of the invention set forth a technique for animating objects placed across a surface of a graphics object. A CAD application receives a set of motions and initially applies a different motion in the set of motions to each object placed across the surface of the graphics object. The CAD application calculates bounding areas of each object according to the current motion applied thereto, which are subsequently used by the CAD application to identify collisions that are occurring or will occur between the objects. Identified collisions are cured by identifying valid motions in the set of motions that can be applied to a colliding object and then calculating bounding areas for the valid motions to select a valid motion that, when applied to the object, does not cause the object to collide with any other objects. | 02-07-2013 |
20130057556 | Avatars in Social Interactive Television - Virtual environments are presented on displays along with multimedia programs to permit viewers to participate in a social interactive television environment. The virtual environments include avatars that are created and maintained in part using continually updated animation data that may be captured from cameras that monitor viewing areas in a plurality of sites. User input from the viewers may be processed in determining which viewers are presented in instances of the virtual environment. Continually updating the animation data results in avatars accurately depicting a viewer's facial expressions and other characteristics. Presence data may be collected and used to determine when to capture background images from a viewing area that may later be subtracted during the capture of animation data. Speech recognition technology may be employed to provide callouts within a virtual environment. | 03-07-2013 |
20130083037 | MOVING A DISPLAY OBJECT WITHIN A DISPLAY FRAME USING A DISCRETE GESTURE - A method, system, and computer program product for moving objects such as a display window about a display frame by combining classical mechanics of motion. A window nudging method commences by receiving a discrete user interface gesture from a human interface device such as a mouse click or a keystroke, and based on the discrete user interface gesture, instantaneously accelerating the window object to an initial velocity. Once the window is in motion, then the method applies a first animation to animate the window object using realistic motion changes. Such realistic motion changes comprise a friction model that combines sliding friction with fluid friction to determine frame-by-frame changes in velocity. The friction model that combines sliding friction with fluid friction can be applied to any object in the display frame. Collisions between one object and another object or between one object and its environment are modeled using a critically-damped spring model. | 04-04-2013 |
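A combined sliding-plus-fluid friction update of the kind the abstract above describes can be sketched per frame. The constants `mu` (constant deceleration, sliding friction) and `k` (speed-proportional decay, fluid friction) are assumed values for illustration; the patent does not publish its coefficients:

```python
def step_velocity(v: float, dt: float, mu: float = 0.5, k: float = 0.5) -> float:
    """One frame of a combined friction model: sliding friction removes a
    constant amount of speed per second, fluid friction removes an amount
    proportional to the current speed. Speed never goes below zero."""
    sign = 1.0 if v >= 0 else -1.0
    speed = abs(v)
    speed = max(0.0, speed - (mu + k * speed) * dt)
    return sign * speed

v = 10.0
for _ in range(60):              # simulate one second at 60 fps
    v = step_velocity(v, dt=1.0 / 60.0)
```

At a high speed the fluid term dominates (exponential-looking decay); near rest the sliding term dominates, so the object actually stops instead of creeping forever, which is the usual reason for mixing the two models.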
20130093775 | System For Creating A Visual Animation Of Objects - A system for creating visual animation of objects which can be experienced by a passenger located within a moving vehicle is provided. The system includes: a plurality of objects being placed along a movement path of the vehicle; a plurality of sensors being assigned to the plurality of objects and being arranged along the movement path such that the vehicle actuates the sensors when moving along the movement path; and a plurality of highlighting devices being coupled to the plurality of sensors and being controlled by the sensors such that, in accordance with sensor actuations triggered by the movement of the vehicle, a) only one of the plurality of objects is highlighted by the highlighting devices to the passenger at one time, and b) the objects are highlighted to the passenger in such a sequence that the passenger visually experiences an animation of the objects. | 04-18-2013 |
20130113808 | METHOD AND APPARATUS FOR CONTROLLING PLAYBACK SPEED OF ANIMATION MESSAGE IN MOBILE TERMINAL - A method and apparatus for controlling a playback speed of an animation message in a mobile terminal is provided. The method includes recognizing at least one object to be displayed included in the received animation message; determining the playback speed of the received animation message with respect to each object to be displayed according to the recognized feature of each object; and displaying the animation message according to the determined playback speed. | 05-09-2013 |
20130120404 | Animation Keyframing Using Physics - An animation-authoring environment includes a graphical user interface usable by a user to define an initial key frame, including one or more scene entities with one or more respective physics properties. The authoring environment generates a sequence of extrapolated frames from the initial key frame by using a physics simulation to extrapolate respective motion paths for scene entities in the key frame and configuring each frame in the generated sequence to depict each such scene entity at a successive location along its respective extrapolated motion path. The authoring environment may then produce a movie comprising the sequence of frames. | 05-16-2013 |
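The extrapolation step in the abstract above — turning one key frame into a sequence of frames via a physics simulation — can be sketched with simple ballistic integration. The function name, the Euler integrator, and the gravity constant are assumptions standing in for whatever simulation the authoring environment actually uses:

```python
def extrapolate_frames(pos, vel, n_frames, dt=1.0 / 30.0, gravity=(0.0, -9.8)):
    """Generate n_frames positions for one scene entity by integrating
    its velocity under gravity from the initial key frame's state."""
    frames = []
    x, y = pos
    vx, vy = vel
    gx, gy = gravity
    for _ in range(n_frames):
        vx += gx * dt            # update velocity from the force field
        vy += gy * dt
        x += vx * dt             # advance along the extrapolated motion path
        y += vy * dt
        frames.append((x, y))
    return frames

# One second of motion for an entity tossed up and to the right.
path = extrapolate_frames(pos=(0.0, 0.0), vel=(1.0, 3.0), n_frames=30)
```

Each entity in the key frame would get its own such path, and frame *i* of the movie depicts every entity at index *i* of its path.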
20130120405 | ANIMATION CREATION AND MANAGEMENT IN PRESENTATION APPLICATION PROGRAMS - An animation timeline is analyzed to determine one or more discrete states. Each discrete state includes one or more animation effects. The discrete states represent scenes of a slide in a slide presentation. The concepts of scenes allows user to view a timeline of scenes, open a scene, and direct manipulate objects in the scene to author animations. The animations can include motion path animation effects, which can be directly manipulated utilizing a motion path tweening method. To aid in direct manipulation of a motion path of an object, a ghost version of the object can be shown to communicate to a user the position of the object after a motion path animation effect that includes the motion path is performed. The ghost version may also be used to show a start position when a start point is manipulated. | 05-16-2013 |
20130127877 | Parameterizing Animation Timelines - Methods and systems for parameterizing animation timelines are disclosed. In some embodiments, a method includes displaying a representation of a timeline configured to animate a first image in a graphical user interface, where the timeline includes a data structure having one or more commands configured to operate upon a first property of the first image. The method also includes creating a parameterized timeline by replacing a reference to the first image within the timeline with a placeholder. The method includes, in response to a request to animate a second image, storing an entry in a dictionary of key and value pairs. The method further includes animating the second image by replacing the placeholder in the parameterized timeline with the reference to the second image during execution of the parameterized timeline. | 05-23-2013 |
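The placeholder substitution described in the abstract above can be sketched with string-based timeline commands. Representing commands as strings and the `$IMAGE` placeholder token are simplifying assumptions; a real timeline would hold structured command objects:

```python
def parameterize(timeline, image_ref, placeholder="$IMAGE"):
    """Replace a concrete image reference inside the timeline's commands
    with a placeholder, yielding a reusable parameterized timeline."""
    return [cmd.replace(image_ref, placeholder) for cmd in timeline]

def animate(parameterized, image_ref, placeholder="$IMAGE"):
    """Bind a new image to the placeholder at execution time via a
    dictionary of key/value pairs, as the abstract describes."""
    bindings = {placeholder: image_ref}
    return [cmd.replace(placeholder, bindings[placeholder])
            for cmd in parameterized]

timeline = ["move img1 to (10,0)", "fade img1 to 0.5"]
ptl = parameterize(timeline, "img1")      # reusable for any image
reused = animate(ptl, "img2")             # same animation, second image
```

The point of the indirection is that one authored timeline can animate arbitrarily many images without being copied or re-authored.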
20130127878 | PHYSICS RULES BASED ANIMATION ENGINE - At an animation authoring component, an inputted movement of an object displayed in a graphical user interface is received. Further, at a physics animation rule engine, a physics generated movement of the object that results from a set of physics animation rules is applied to the inputted movement. In addition, at the graphical user interface, the inputted movement of the object is displayed in addition to the physics generated movement of the object. At the animation authoring component, the physics generated movement of the object in addition to the inputted movement of the object is recorded. | 05-23-2013 |
20130135316 | ANIMATION AUTHORING SYSTEM AND METHOD FOR AUTHORING ANIMATION - This invention relates to an animation authoring system and an animation authoring method, to enable beginners to produce a three-dimensional animation easily and to solve the input ambiguity problem in the three-dimensional environment. The animation authoring method according to the invention comprises the steps of: (a) receiving a plane route of an object on a predetermined reference plane from a user; (b) creating a motion window formed along the plane route and having a predetermined angle to the reference plane to receive motion information of the object on the motion window from the user; and (c) implementing an animation according to the received motion information. | 05-30-2013 |
20130162654 | System and method for hiding latency in computer software - A system and method hides latency in the display of a subsequent user interface by animating the exit of the current user interface and animating the entrance of the subsequent user interface, causing continuity in the display of the two user interfaces. During either or both animations, information used to produce the user interface, animation of the entrance of the subsequent user interface, or both may be retrieved or processed or other actions may be performed. | 06-27-2013 |
20130162655 | Systems and Methods for Creating, Displaying, and Using Hierarchical Objects with Nested Components - Methods involving the creation and use of nested components with hierarchical objects are disclosed. One exemplary method comprises displaying a container symbol and defining a movement for the container symbol. The method further comprises defining a nested object within the container symbol, i.e. on a coordinate space associated with the container symbol rather than the general canvas area, and defining a movement for the nested object. Either or both of the movements may involve an inverse kinematics procedure based movement of a hierarchical object, e.g., movement of a bone that causes a shape or rigid body to move. For example, a container symbol could display a car and include a nested hierarchical object that is used to define a person within the car. The movement of the car and the movement of the person can be defined separately by a developer. | 06-27-2013 |
20130176316 | PANNING ANIMATIONS - Panning animation techniques are described. In one or more implementations, an input is recognized by a computing device as corresponding to a panning animation. A distance is calculated that is to be traveled by the panning animation in a user interface output by the computing device, the distance limited by a predefined maximum distance. The panning animation is output by the computing device to travel the calculated distance. | 07-11-2013 |
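The distance calculation in the abstract above reduces to computing an inertial travel distance and clamping it. The kinematic decay formula (distance = v²/2a) and the constants are illustrative assumptions; the patent only specifies that the distance is limited by a predefined maximum:

```python
def panning_distance(velocity: float, friction: float = 5.0,
                     max_distance: float = 400.0) -> float:
    """Distance an inertial pan travels before a constant deceleration
    stops it, clamped to the predefined maximum distance."""
    raw = (velocity ** 2) / (2.0 * friction)   # v^2 / (2a) stopping distance
    return min(raw, max_distance)
```

A slow flick (`panning_distance(10.0)`) travels its natural stopping distance, while a violent flick is capped at `max_distance` so the view never overshoots wildly.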
20130181996 | VISUAL CONNECTIVITY OF WIDGETS USING EVENT PROPAGATION - A method, system and computer program product receive a set of objects for connection, create a moving object within the set of objects, display visual connection cues on objects in the set of objects, adjust the visual connection cues of the moving object and a target object in the set of objects, identify event propagation precedence, and connect the moving object with the target object. | 07-18-2013 |
20130229418 | CUSTOM ANIMATION APPLICATION TOOLS AND TECHNIQUES - A machine-controlled method can include an electronic device display visually presenting to a user a digital character, multiple vector cutters positioned over corresponding portions of the digital character, and at least one joint option feature positioned within overlapping sub-portions of at least two vector cutters. The method can also include the display visually presenting a movement of the digital character based on the vector cutters and joint option feature. | 09-05-2013 |
20130235046 | METHOD AND SYSTEM FOR CREATING ANIMATION WITH CONTEXTUAL RIGGING - There is described a method for applying a control rig to an animation of a character, the method comprising: receiving a state change for the character being in a first state; determining a second state for the character using the state change; retrieving an animation clip and a control rig both corresponding to the second state, the animation clip comprising a plurality of poses for the character each defining a configuration for a body of the character, the control rig being specific to the second state and corresponding to at least one constraint to be applied on the body of the character; applying the control rig to the animation clip, thereby obtaining a rigged animation clip; and outputting the rigged animation clip. | 09-12-2013 |
20130235047 | METHOD FOR ANIMATING CHARACTERS, WITH COLLISION AVOIDANCE BASED ON TRACING INFORMATION - A method for determining a moving direction or moving velocity for a character in a group comprises reading tracing information from a cell in a terrain map on which the character is located, determining if collision avoidance is needed, and if a collision avoiding manoeuvre is necessary then updating the tracing information in the current terrain cell. | 09-12-2013 |
20130265316 | USER INTERFACE FOR CONTROLLING THREE-DIMENSIONAL ANIMATION OF AN OBJECT - A user can control the animation of an object via an interface that includes a control area and a user-manipulable control element. The control area includes an ellipse. The user-manipulable control element includes a three-dimensional arrow with a straight body, a three-dimensional arrow with a curved body, or a sphere. In one embodiment, the interface includes a virtual trackball that is used to manipulate the user-manipulable control element. | 10-10-2013 |
20130300750 | METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR GENERATING ANIMATED IMAGES - In accordance with an example embodiment a method, apparatus and computer program product are provided. The method comprises facilitating a selection of a region in a multimedia frame and performing an alignment of multimedia frames occurring periodically at a pre-defined interval in a capture order associated with a plurality of multimedia frames based on the multimedia frame comprising the selected region. The method further comprises computing region-match parameters corresponding to the selected region for the aligned multimedia frames. One or more multimedia frames are selected from among the aligned multimedia frames based on the computed region-match parameters and a multimedia frame is identified from among the selected one or more multimedia frames and multimedia frames neighbouring the one or more selected multimedia frames based on the computed region-match parameters. The multimedia frame is identified for configuring a loop sequence for an animated image. | 11-14-2013 |
20130300751 | METHOD FOR GENERATING MOTION SYNTHESIS DATA AND DEVICE FOR GENERATING MOTION SYNTHESIS DATA - A method for generating motion synthesis data from two recorded motion clips comprises transforming the motion frames to standard coordinates, separating HF motion data of the motion frames from LF motion data, determining from different motion clips at least two motion frames whose frame distance is below a threshold, and defining a transition point between the at least two motion frames, interpolating motion data between said determined motion frames separately for HF and LF motion data, and generating a motion path from three segments: one segment is transformed motion data from a first motion clip up to the transition point, one segment is the interpolated motion data, and one segment is transformed motion data from a second motion clip, starting from the transition point. | 11-14-2013 |
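The transition-point search in the abstract above — finding two frames from different clips whose distance falls below a threshold — can be sketched as a brute-force scan. Representing each frame as a flat tuple of joint coordinates (already in standard coordinates) and using Euclidean distance are assumptions; the patent applies this after HF/LF separation:

```python
def find_transition(clip_a, clip_b, threshold):
    """Return (i, j): the closest pair of frames across the two clips
    whose distance is below `threshold`, or None if no pair qualifies."""
    def dist(f, g):
        return sum((p - q) ** 2 for p, q in zip(f, g)) ** 0.5

    best = None
    for i, fa in enumerate(clip_a):
        for j, fb in enumerate(clip_b):
            d = dist(fa, fb)
            if d < threshold and (best is None or d < best[0]):
                best = (d, i, j)
    return None if best is None else (best[1], best[2])

# Toy 2-coordinate "poses"; frame 1 of each clip is nearly identical.
walk = [(0.0, 0.0), (0.5, 0.1), (1.0, 0.0)]
run  = [(2.0, 2.0), (0.6, 0.1), (3.0, 3.0)]
```

The returned indices mark where the output motion path would switch from the first clip's transformed data to the interpolated segment and then into the second clip.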
20140002464 | SUPPORT AND COMPLEMENT DEVICE, SUPPORT AND COMPLEMENT METHOD, AND RECORDING MEDIUM | 01-02-2014 |
20140035933 | OBJECT DISPLAY DEVICE - A polygon of a skirt is beforehand set in association with a skirt bone such that the polygon is at an angle of β to the skirt bone. The movement of the skirt bone is controlled according to the movement of a thigh bone. When a character walks or runs, if the thigh bone is inclined at an angle of α, each of the skirt bone and the polygon is inclined at an angle of α in a direction perpendicular to the ground surface. Therefore, since a polygon and the polygon do not intersect each other, it is possible to prevent a thigh portion of the character from penetrating the skirt unnaturally. | 02-06-2014 |
20140035934 | Avatar Facial Expression Techniques - A method and apparatus for capturing and representing 3D wire-frame, color and shading of facial expressions are provided, wherein the method includes the following steps: storing a plurality of feature data sequences, each of the feature data sequences corresponding to one of the plurality of facial expressions; retrieving one of the feature data sequences based on user facial feature data; and mapping the retrieved feature data sequence to an avatar face. The method may advantageously provide improvements in execution speed and communications bandwidth. | 02-06-2014 |
20140085315 | COMBINING SHAPES FOR ANIMATION - A system includes a computing device that includes a memory for storing instructions. The computing device also includes a processor configured to execute the instructions to perform a method that includes combining, in a nonlinear manner, a first set of vertex displacements that represent the difference between a first animated expression and a neutral animated expression with a second set of vertex displacements that represent the difference between a second animated expression and the neutral animated expression. The number of vertices associated with the first set of vertex displacements of the first animated expression is equivalent to the number of vertices associated with the second set of vertex displacements of the second animated expression. | 03-27-2014 |
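The nonlinear combination of vertex displacements in the abstract above can be sketched for a toy 1-D case. The specific nonlinear rule used here (rescaling the weights whenever their sum exceeds one, so the combined displacement never overshoots) is an assumed example; the patent does not publish a formula:

```python
def combine_shapes(neutral, delta_a, delta_b, wa, wb):
    """Combine two expression deltas (per-vertex displacements from the
    neutral expression) using an assumed nonlinear clamp on the weights.
    All three vertex lists must have the same length, per the abstract."""
    assert len(delta_a) == len(delta_b) == len(neutral)
    total = wa + wb
    scale = 1.0 if total <= 1.0 else 1.0 / total   # the nonlinear step
    return [n + scale * (wa * da + wb * db)
            for n, da, db in zip(neutral, delta_a, delta_b)]

# Two expressions both displacing a single vertex by 1.0 from neutral.
blended = combine_shapes([0.0], [1.0], [1.0], 0.8, 0.8)
```

A purely linear blend at weights 0.8 + 0.8 would displace the vertex by 1.6 (an exaggerated, broken-looking face); the clamp keeps the result inside the range either expression produces alone.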
20140198108 | MULTI-LINEAR DYNAMIC HAIR OR CLOTHING MODEL WITH EFFICIENT COLLISION HANDLING - Systems and methods for modeling hair in real-time with user-interactive controls are presented. One embodiment may take the form of a method of hair motion modeling including representing hair with hair guides, each hair guide comprising a plurality of hair points, and reducing a dimensionality of the hair guides to achieve a reduced sub-space. Additionally, the method includes generating a data tensor for multiple factors related to the hair guides and decomposing the tensor to create a model characterizing the multiple factors in a multi-linear hair framework. The modeled hair may be human hair, animal fur, or clothing fibers. | 07-17-2014 |
20140210831 | COMPUTER GENERATED HEAD - A method of animating a computer generation of a head, the head having a mouth which moves in accordance with speech to be output by the head, | 07-31-2014 |
20140267312 | METHOD AND SYSTEM FOR DIRECTLY MANIPULATING THE CONSTRAINED MODEL OF A COMPUTER-GENERATED CHARACTER - A rail manipulator indicates the possible range(s) of movement of a part of a computer-generated character in a computer animation system. The rail manipulator obtains a model of the computer-generated character. The model may be a skeleton structure of bones connected at joints. The interconnected bones may constrain the movements of one another. When an artist selects one of the bones for movement, the rail manipulator determines the range of movement of the selected bone. The determination may be based on the position and/or the ranges of movements of other bones in the skeleton structure. The range of movement is displayed on-screen to the artist, together with the computer-generated character. In this way, the rail manipulator directly communicates to the artist the degree to which a portion of the computer-generated character can be moved, in response to the artist's selection of the portion of the computer-generated character. | 09-18-2014 |
20140267313 | GENERATING INSTRUCTIONS FOR NONVERBAL MOVEMENTS OF A VIRTUAL CHARACTER - Programs for creating a set of behaviors for lip sync movements and nonverbal communication may include analyzing a character's speaking behavior through the use of acoustic, syntactic, semantic, pragmatic, and rhetorical analyses of the utterance. For example, a non-transitory, tangible, computer-readable storage medium may contain a program of instructions that cause a computer system running the program of instructions to: receive a text specifying words to be spoken by a virtual character; extract metaphoric elements, discourse elements, or both from the text; generate one or more mental state indicators based on the metaphoric elements, the discourse elements, or both; map each of the one or more mental state indicators to a behavior that the virtual character should display with nonverbal movements that convey the mental state indicators; and generate a set of instructions for the nonverbal movements based on the behaviors. | 09-18-2014 |
20140285496 | Data Compression for Real-Time Streaming of Deformable 3D Models for 3D Animation - Systems and methods are described for performing spatial and temporal compression of deformable mesh based representations of 3D character motion allowing the visualization of high-resolution 3D character animations in real time. In a number of embodiments, the deformable mesh based representation of the 3D character motion is used to automatically generate an interconnected graph based representation of the same 3D character motion. The interconnected graph based representation can include an interconnected graph that is used to drive mesh clusters during the rendering of a 3D character animation. The interconnected graph based representation provides spatial compression of the deformable mesh based representation, and further compression can be achieved by applying temporal compression processes to the time-varying behavior of the mesh clusters. Even greater compression can be achieved by eliminating redundant data from the file format containing the interconnected graph based representation of the 3D character motion that would otherwise be repeatedly provided to a game engine during rendering, and by applying loss-less data compression to the data of the file itself. | 09-25-2014 |
20140292770 | DISPLAY WITH ROBOTIC PIXELS - Techniques are disclosed for controlling robot pixels to display a visual representation of an input. The input to the system could be an image of a face, and the robot pixels deploy in a physical arrangement to display a visual representation of the face, and would change their physical arrangement over time to represent changing facial expressions. The robot pixels function as a display device for a given allocation of robot pixels. Techniques are also disclosed for distributed collision avoidance among multiple non-holonomic robots to guarantee smooth and collision-free motions. The collision avoidance technique works for multiple robots by decoupling path planning and coordination. | 10-02-2014 |
20140313207 | Interactive design, synthesis and delivery of 3D motion data through the web - Systems and methods are described for animating 3D characters using synthetic motion data generated by generative models in response to a high level description of a desired sequence of motion provided by an animator. In a number of embodiments, an animation system is accessible via a server system that utilizes the ability of generative models to generate synthetic motion data across a continuum to enable multiple animators to effectively reuse the same set of previously recorded motion capture data to produce a wide variety of desired animation sequences. One embodiment of the invention includes a server system configured to communicate with a database containing motion data including repeated sequences of motion, where the differences between the repeated sequences of motion are described using at least one high level characteristic. In addition, the server system is configured to train a generative model using the motion data, to generate a user interface that is accessible via a communication network, to receive a high level description of a desired sequence of motion via the user interface, to use the generative model to generate synthetic motion data based on the high level description of the desired sequence of motion, and to transmit a stream via the communication network including information that can be used to display a 3D character animated using the synthetic motion data. | 10-23-2014 |
20140313208 | EMOTIVE ENGINE AND METHOD FOR GENERATING A SIMULATED EMOTION FOR AN INFORMATION SYSTEM - Information about a device may be emotively conveyed to a user of the device. Input indicative of an operating state of the device may be received. The input may be transformed into data representing a simulated emotional state. Data representing an avatar that expresses the simulated emotional state may be generated and displayed. A query from the user regarding the simulated emotional state expressed by the avatar may be received. The query may be responded to. | 10-23-2014 |
20140320507 | USER TERMINAL DEVICE FOR PROVIDING ANIMATION EFFECT AND DISPLAY METHOD THEREOF - A user terminal device includes a display which displays a screen including an object drawn by a user, a sensor which senses user manipulation, and a controller which provides animation effects regarding the object when a preset event occurs, and performs a control operation matching the object when the object is selected by user manipulation. | 10-30-2014 |
20140320508 | SYSTEMS AND METHODS FOR APPLYING ANIMATIONS OR MOTIONS TO A CHARACTER - A virtual character such as an on-screen object, an avatar, an on-screen character, or the like may be animated using a live motion of a user and a pre-recorded motion. For example, a live motion of a user may be captured and a pre-recorded motion such as a pre-recorded artist generated motion, a pre-recorded motion of the user, and/or a programmatically controlled transformation may be received. The live motion may then be applied to a first portion of the virtual character and the pre-recorded motion may be applied to a second portion of the virtual character such that the virtual character may be animated with a combination of the live and pre-recorded motions. | 10-30-2014 |
20140368512 | SYSTEMS, METHODS, AND DEVICES FOR ANIMATION ON TILED DISPLAYS - A display system is disclosed for animation of media objects on tiled displays. The display system can include a plurality of discrete display nodes and a control module configured to determine a graphical representation of a current state of a media object. The control module can be configured to determine a graphical representation of a future state of the media object. The control module can also be configured to determine a path area on the display nodes comprising a plurality of graphical representations of the media object during a change from the current state to the future state. The control module also can be configured to cause the display nodes overlapping with at least a portion of the path area to prepare to display the media object. | 12-18-2014 |
20150029197 | Systems and Methods for Visually Creating and Editing Scrolling Actions - Systems and methods for visually creating scroll-triggered animation in a document. Based on input received, a key position is determined that is associated with an element that is to be animated. An indicator may be displayed to visually show the location of the key position on an editing canvas. A scroll-triggered animation is defined for the element based on the specified key position. The scroll-triggered animation defines attributes of the element during scroll of the document in the end use environment. For example, the animation may specify that the element has a particular location when the scroll is at the specified key position. The scroll-triggered animation may additionally or alternatively comprise a before-effect and an after-effect, performing one animation before the scroll reaches the key position and another animation after the scroll reaches the key position. | 01-29-2015 |
20150029198 | MOTION CONTROL OF ACTIVE DEFORMABLE OBJECTS - Techniques are proposed for animating a deformable object. A geometric mesh comprising a plurality of vertices is retrieved, where the geometric mesh is related to a first rest state configuration corresponding to the deformable object. A motion goal associated with the deformable object is then retrieved. The motion goal is translated into a function of one or more state variables associated with the deformable object. A second rest state configuration corresponding to the deformable object is computed by adjusting the position of at least one vertex in the plurality of vertices based at least in part on the function. | 01-29-2015 |
20150042663 | SYSTEM AND METHOD FOR CREATING AVATARS OR ANIMATED SEQUENCES USING HUMAN BODY FEATURES EXTRACTED FROM A STILL IMAGE - A user may create an avatar and/or animated sequence illustrating a particular object or living being performing a certain activity, using images of portions of the object or living being extracted from a still image or set of still images of the object or living being. A mathematical model used to represent the avatar may be animated according to user-selected motion information and may be modified according to various parameters including explicit end-user adjustments and information representative of a human emotion, mood, or feeling that may be derived from an image of the user or information from a news source or social network. | 02-12-2015 |
20150145870 | ANIMATING SKETCHES VIA KINETIC TEXTURES - A sketch-based interface within an animation engine provides an end-user with tools for creating emitter textures and oscillator textures. The end-user may create an emitter texture by sketching one or more patch elements and then sketching an emitter. The animation engine animates the sketch by generating a stream of patch elements that emanate from the emitter. The end-user may create an oscillator texture by sketching a patch that includes one or more patch elements, and then sketching a brush skeleton and an oscillation skeleton. The animation engine replicates the patch along the brush skeleton, and then interpolates the replicated patches between the brush skeleton and the oscillation skeleton, thereby causing those replicated patches to periodically oscillate between the two skeletons. | 05-28-2015 |
20150356766 | FLYING EFFECTS CHOREOGRAPHY SYSTEM - A flying effects choreography system provides visualizations of flying effects within a virtual environment. The system allows choreographers to define a sequence of waypoints that identify a path of motion. A physics engine of the system may then calculate position data for a performer or other element attached to a free-swinging pendulum cable, as the performer and pendulum cable move along the path of motion. In this manner, the position data describes the motion of the performer, including the pendulum effect or swing of the performer on the pendulum cable. The position data may be used to generate one or more visualizations that show the performer's motion, including the pendulum effect. The choreographer may review the visualizations and make modifications any number of times, until a desired flying effect is produced, without having to physically implement the flying effect in the real world. | 12-10-2015 |
20150356912 | Hybrid Messaging System - Example apparatus provide a messaging and crowd coordination system. An example apparatus may include an optical display that provides first light that encodes first information that is independent of a position from which the first light is viewed. The example apparatus may include an optical projector that projects second light that encodes second information that is dependent on a position from which the second light is viewed. The example apparatus may also include a circuit that coordinates simultaneous presentation of the first light and the second light to produce a hybrid real-time message. The message facilitates coordinating independent actions of members of a plurality of people located at different viewing positions. The independent actions may be selected from a set of actions described by the first information and prescribed by the second information. | 12-10-2015 |
20150358543 | MODULAR MOTION CAPTURE SYSTEM - A motion capture apparatus includes a base unit and a motion capture accessory such as a finger IMU assembly and/or a joystick. The base unit includes an inertial measurement unit (IMU), a microprocessor in data communication with the IMU, and a plurality of IMU connectors connected to the microprocessor. The base unit further includes a communications module adapted for wired communications, a transceiver for wireless communications, and an accessory connector receptacle for mechanical connection of the motion capture accessory. The finger IMU assembly includes a housing base and a housing cap, an IMU in a void formed between the base and cap, and a flexible cable assembly electrically connected to the IMU in the finger housing assembly. The cable assembly slidably engages the housing base and includes a connector plug adapted for connection to any one of the plurality of IMU connectors on the base unit. | 12-10-2015 |
20150371425 | ELECTRONIC DEVICE AND METHOD OF PROVIDING HANDWRITING ANIMATION - An electronic device and a method for providing handwriting animation are provided. The electronic device includes an input and output interface configured to receive a text selection signal; and a management module configured to use vector data for stroke data on text to generate at least one piece of masking data, mask the stroke data with the masking data, and sequentially remove the masking data. The method includes checking vector data for stroke data on selected text; generating at least one piece of masking data from the vector data; masking the stroke data with the masking data; and sequentially removing the masking data. | 12-24-2015 |
20150379753 | MOVEMENT PROCESSING APPARATUS, MOVEMENT PROCESSING METHOD, AND COMPUTER-READABLE MEDIUM - In order to allow main parts of a face to move more naturally, a movement processing apparatus includes a face main part detection unit configured to detect a main part forming a face from an acquired face image, a shape specifying unit configured to specify a shape type of the detected main part, and a movement condition setting unit configured to set a control condition for moving the main part, based on the specified shape type of the main part. | 12-31-2015 |
20150379754 | IMAGE PROCESSING APPARATUS, ANIMATION CREATION METHOD, AND COMPUTER-READABLE MEDIUM - In an image processing apparatus, a control unit creates an animation that sets the positions of parts of an image, such as the head, lips, and eyelids, in the first frame as their initial positions, moves the parts of the image from those initial positions, and does so in such a manner that the positions of the parts of the image in the last frame coincide with the set initial positions. | 12-31-2015 |
20160005206 | AVATAR ANIMATION, SOCIAL NETWORKING AND TOUCH SCREEN APPLICATIONS - Systems and methods may provide for detecting a condition with respect to one or more frames of a video signal associated with a set of facial motion data and modifying, in response to the condition, the set of facial motion data to indicate that the one or more frames lack facial motion data. Additionally, an avatar animation may be initiated based on the modified set of facial motion data. In one example, the condition is one or more of a buffer overflow condition and a tracking failure condition. | 01-07-2016 |
20160019434 | GENERATING AND USING A PREDICTIVE VIRTUAL PERSONIFICATION - A system for generating a predictive virtual personification includes an AV data source, a data store, and a saliency recognition engine, wherein the AV data source is configured to transmit one or more AV data sets to the saliency recognition engine, each AV data set includes a graphical representation of a donor subject, and the saliency recognition engine is configured to receive the AV data set and one or more identified trigger stimulus events, locate a set of saliency regions of interest (SROI) within the graphical representation of the donor subject, generate a set of SROI-specific saliency maps, and store, in the data store, a set of correlated SROI-specific saliency maps generated by correlating each SROI-specific saliency map with a corresponding trigger event. | 01-21-2016 |
20160048994 | Method and system for making natural movement in displayed 3D environment - Techniques for rendering the motions of a selected object as naturally as possible in a 3D environment are disclosed. According to one aspect of the techniques, relative changes in position of a controller in the physical world are used to control the motion of a selected (target) object in a virtual world by imparting inertia into the selected object in relation to the changes in the speed and duration of the controller's movement. As a result, the movements of the object are rendered naturally in a displayed scene in accordance with the changes in motion or position of the controller. | 02-18-2016 |
20160048995 | WEATHER DISPLAYING METHOD AND DEVICE - The present disclosure discloses a method and a device for displaying weather. The method includes: acquiring weather information and orientation information of a terminal device; generating a weather animation according to the weather information and the orientation information of the terminal device; and displaying the weather animation on the terminal device. Accordingly, a plurality of weather conditions are comprehensively and dynamically presented via integrated motion pictures, and the weather conditions are displayed more accurately, realistically, and intuitively. | 02-18-2016 |
20160071301 | Method for Scripting Inter-scene Transitions - A method for authoring and displaying a virtual tour of a three-dimensional space which employs transitional effects simulating motion. An authoring tool is provided for interactively defining a series of locations in the space for which two-dimensional images, e.g., panoramas, photographs, etc., are available. A user identifies one or more view directions for a first-person perspective viewer for each location. For pairs of locations in the series, transitional effects are identified to simulate smooth motion between the pair of locations. The authoring tool stores data corresponding to the locations, view directions and transitional effects for playback on a display. When the stored data is accessed, a virtual tour of the space is created that includes transitional effects simulating motion between locations. The virtual tour created can allow a viewer to experience the three-dimensional space in a realistic manner. | 03-10-2016 |
20160078635 | AVATAR MOTION MODIFICATION - A method, system, and computer program for modifying avatar motion. The method includes receiving an input motion sequence, determining an input motion model for the input motion sequence, and modifying an avatar motion model associated with the stored avatar to approximate the input motion model for the input motion sequence when the avatar motion model does not approximate the input motion model. The stored avatar is presented after the avatar motion model associated with the stored avatar is modified to approximate the input motion model for the input motion sequence. | 03-17-2016 |
20160086365 | SYSTEMS AND METHODS FOR THE CONVERSION OF IMAGES INTO PERSONALIZED ANIMATIONS - Systems and methods for converting an image into an animated image or video, including: an algorithm for receiving the image from a user via an electronic device; an algorithm for applying a selected template to the image, wherein the selected template imparts selected portions of the image with motion or overlays selected objects on the image, thereby providing an animated image or video; and an algorithm for displaying the animated image or video to the user via the electronic device. Applying the selected template to the image is performed by software resident on the electronic device or remote from the electronic device. | 03-24-2016 |
20160086367 | AVATAR EYE CONTROL IN A MULTI-USER ANIMATION ENVIRONMENT - In a multi-participant modeled virtual reality environment, avatars are modeled beings that include moveable eyes creating the impression of an apparent gaze direction. Control of eye movement may be performed autonomously using software to select and prioritize targets in a visual field. Sequence and duration of apparent gaze may then be controlled using automatically determined priorities. Optionally, user preferences for object characteristics may be factored into determining priority of apparent gaze. Resulting modeled avatars are rendered on client displays to provide more lifelike and interesting avatar depictions with shifting gaze directions. | 03-24-2016 |
20160093086 | Vehicle Display Device - A vehicle display device is configured to be mounted on a vehicle to provide information to a person on the vehicle by displaying images on a liquid crystal display. The vehicle display device includes a storage unit in which image data including a line drawing is stored, and a display control unit configured to display a plurality of mask images on the liquid crystal display such that the mask images are set with respect to the image data stored in the storage unit to mask a plurality of regions into which the line drawing in the image data is divided, and to execute a stepwise unmasking processing to remove the mask images sequentially along a drawing direction of the line drawing. | 03-31-2016 |
20160140747 | Interactive Design System for Character Crafting - There are provided an interactive design system and method for character crafting. An example system includes a memory storing a machine software application and a processor configured to execute the machine software application to receive a plurality of components for a character, the plurality of components including at least a first component and a second component, receive a movement for the character, the movement including a first pose for the character and a second pose for the character, calculate a linkage for the first component and the second component based on the movement, and generate an updated character by connecting the second component to the first component using the linkage. The linkage may include at least one of a connector, a trimmer, and a propagation mechanism. | 05-19-2016 |
20160375340 | Sports Entertainment System for Sports Spectators - The present invention is an entertainment system that helps multitudes of sports spectators follow the movements of players and play objects on the play areas of sports venues in real time during sporting events, and helps with game nuances such as fouling and scoring decisions. The invention uses generic surveillance sources to capture instantaneous, real-time, continuous data streams of the geographic coordinates of the players and their play objects during their movements, and displays the data as scaled animated tracks of the players and the play object on the screens of the spectators' personal smart mobile devices (including smart devices and mobile devices) under the control of a mobile app. The tracks are displayed on a backdrop that is an animated rendering of the play area. The tracks and the backdrop are calculated to have the same scale to ensure their alignment on the screen. | 12-29-2016 |
20220139018 | USER EXPRESSIONS IN VIRTUAL ENVIRONMENTS - Methods, systems, and computer-readable storage media providing banner representations in a computer system that provides a virtual environment, including: accessing an avatar record, where the avatar record indicates an avatar representation that includes data to provide a visual representation of an avatar; receiving a selection of a banner representation made by a user; accessing a representation record, wherein the representation record indicates a banner representation that includes data to provide a visual representation of a banner; associating the banner representation with the avatar representation; receiving avatar movement input that indicates movement of the avatar within the virtual environment; and generating visual data representing the movement of the avatar and banner in the virtual environment using the avatar representation and the banner representation, where the banner is placed in the virtual environment following the avatar as the avatar moves in the virtual environment. | 05-05-2022 |