125 results for "Animation system" patented technology

The Computer Animation Production System (CAPS) was a digital ink and paint system used in animated feature films, and the first such system at a major studio. It was designed to replace the expensive process of transferring animated drawings to cels with India ink or xerographic technology and painting the reverse sides of the cels with gouache.

System and a Method for Motion Tracking Using a Calibration Unit

The invention relates to a motion tracking system (10) for tracking the movement of an object (P) in three-dimensional space, the object being composed of object portions that have individual dimensions and mutual proportions and are sequentially interconnected by joints. The system comprises orientation measurement units (S1, S3, . . . SN) for measuring data related to at least the orientation of the object portions, wherein the orientation measurement units are arranged in positional and orientational relationships with respective object portions and have at least orientational parameters; a processor (3, 5) for receiving data from the orientation measurement units, the processor comprising a module for deriving orientation and/or position information of the object portions from the received data; and a calibration unit (7) arranged to calculate calibration values for determining at least the mutual proportions of the object portions and the orientational parameters of the orientation measurement units, based on the received data, pre-determined constraints, and additional input data. The invention further relates to a method for tracking the movement of an object, a medical rehabilitation system, and an animation system.
Owner:XSENS HLDG BV
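
To make the mechanics concrete, here is a minimal, hypothetical Python sketch of the forward-kinematics step such a system performs: given per-segment orientations from the measurement units and segment lengths estimated by the calibration unit (the "mutual proportions"), it places the sequentially connected object portions in 3D space. The quaternion convention, the segment axis, and all names are illustrative assumptions, not details taken from the patent.

import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    r = np.array([x, y, z])
    return v + 2.0 * np.cross(r, np.cross(r, v) + w * v)

def chain_positions(orientations, segment_lengths, root=np.zeros(3)):
    """Place sequentially interconnected segments in 3D space.

    orientations: one unit quaternion per segment, as measured by the
    orientation units; segment_lengths: calibrated lengths (assumed
    output of the calibration unit)."""
    positions = [root]
    for q, length in zip(orientations, segment_lengths):
        # Each segment extends along its local x-axis, rotated into the
        # world frame by the measured orientation.
        offset = quat_rotate(q, np.array([length, 0.0, 0.0]))
        positions.append(positions[-1] + offset)
    return positions

# Example: two limb segments, the second bent 90 degrees at the joint.
quats = [np.array([1.0, 0.0, 0.0, 0.0]),                          # identity
         np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)])]  # 90 deg about z
print(chain_positions(quats, [0.3, 0.25]))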

System and method for tracking facial muscle and eye motion for computer graphics animation

A motion tracking system enables faithful capture of subtle facial and eye motion, using surface electromyography (EMG) to detect muscle movements and electrooculography (EOG) to detect eye movements. Signals corresponding to the detected muscle and eye movements are used to make an animated character exhibit the same movements performed by a performer. One embodiment of the motion tracking animation system comprises a plurality of pairs of EMG electrodes adapted to be affixed to the skin surface of a performer at locations corresponding to respective muscles, and a processor operatively coupled to the EMG electrode pairs. The processor includes programming instructions for acquiring EMG data from the electrode pairs, the EMG data comprising electrical signals corresponding to the performer's muscle movements during a performance. The programming instructions further process the EMG data to produce a digital model of the muscle movements and map the digital model onto an animated character. In another embodiment of the invention, a plurality of pairs of EOG electrodes are adapted to be affixed to the skin surface of the performer at locations adjacent to the performer's eyes. The processor is operatively coupled to the EOG electrode pairs and further includes programming instructions for acquiring EOG data, processing it, and mapping the processed EOG data onto the animated character. As a result, the animated character exhibits the same muscle and eye movements as the performer.
Owner:SONY CORP +1
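
The patent does not specify how the EMG signals become a digital model, but one plausible pipeline is to rectify and smooth each channel into an activation envelope and map it to a 0..1 blendshape weight on the character's face. The Python sketch below illustrates that reading; the filtering choices, thresholds, and the 'brow_raise' mapping are hypothetical.

import numpy as np

def emg_envelope(signal, fs, window_ms=50):
    """Full-wave rectify raw EMG and smooth it with a moving average to
    obtain an activation envelope (an assumed, simple stand-in for the
    patent's unspecified EMG processing)."""
    rectified = np.abs(signal - np.mean(signal))   # remove DC offset, rectify
    win = max(1, int(fs * window_ms / 1000))
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")

def envelope_to_blendshape(envelope, rest_level, max_level):
    """Map muscle activation to a 0..1 blendshape weight for the
    corresponding facial expression on the animated character."""
    w = (envelope - rest_level) / (max_level - rest_level)
    return np.clip(w, 0.0, 1.0)

# Example: a synthetic 1 kHz EMG burst driving a 'brow_raise' blendshape.
fs = 1000
t = np.arange(fs) / fs
burst = np.random.randn(fs) * (0.1 + 0.9 * ((t > 0.4) & (t < 0.7)))
env = emg_envelope(burst, fs)
weights = envelope_to_blendshape(env, rest_level=0.05, max_level=0.6)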

Interactive design, synthesis and delivery of 3D character motion data through the web

Systems and methods are described for animating 3D characters using synthetic motion data generated by generative models in response to a high-level description of a desired sequence of motion provided by an animator. In a number of embodiments, an animation system is accessible via a server system that uses the ability of generative models to generate synthetic motion data across a continuum, enabling multiple animators to reuse the same set of previously recorded motion capture data to produce a wide variety of animation sequences. In several embodiments, an animator can upload a custom model of a 3D character, and the synthetic motion data generated by the generative model is retargeted to animate the custom character. One embodiment of the invention includes a server system configured to communicate with a database containing motion data with repeated sequences of motion, where the differences between the repeated sequences are described by at least one high-level characteristic. The server system is connected to a communication network and is configured to: train a generative model using the motion data; generate a user interface accessible via the communication network; receive a high-level description of a desired sequence of motion via the user interface; use the generative model to generate synthetic motion data based on that description; and transmit a stream over the communication network containing information that can be used to display a 3D character animated with the synthetic motion data.
Owner:ADOBE INC
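
As a toy illustration of generating motion "across a continuum" from repeated takes, the Python sketch below fits a per-frame linear map from a single high-level characteristic (say, walking speed) to the pose vector and synthesizes a clip at an unseen value. A real generative model would be far richer; the class name, data shapes, and linear form are assumptions made purely for illustration.

import numpy as np

class MotionContinuumModel:
    """Toy generative model over repeated motion sequences that differ in
    one high-level characteristic, in the spirit of the patent's continuum
    of synthetic motion (assumed form, not the patented method)."""

    def fit(self, sequences, characteristics):
        # sequences: (n_takes, n_frames, n_dofs); characteristics: (n_takes,)
        X = np.column_stack([np.ones(len(characteristics)), characteristics])
        seqs = np.asarray(sequences)
        n_takes, n_frames, n_dofs = seqs.shape
        flat = seqs.reshape(n_takes, -1)
        # Least-squares fit of flat ~ X @ coeffs, one linear map per frame/DOF.
        self.coeffs, *_ = np.linalg.lstsq(X, flat, rcond=None)
        self.shape = (n_frames, n_dofs)
        return self

    def generate(self, characteristic):
        """Synthesize a sequence for a new value of the characteristic."""
        x = np.array([1.0, characteristic])
        return (x @ self.coeffs).reshape(self.shape)

# Example: three captured takes at speeds 1.0, 1.5, 2.0; synthesize at 1.75.
takes = np.random.randn(3, 120, 60)           # placeholder mocap data
model = MotionContinuumModel().fit(takes, [1.0, 1.5, 2.0])
new_clip = model.generate(1.75)               # (120, 60) synthetic motion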

Collaborative filtering-based real-time voice-driven human face and lip synchronous animation system

The invention discloses a real-time, voice-driven face and lip synchronization animation system based on collaborative filtering. Given voice input in real time, a human head model produces lip animation synchronized with the input speech. The system comprises an audio/video coding module, a collaborative filtering module, and an animation module. The audio/video coding module encodes the captured voice as Mel-frequency cepstral coefficients and the captured 3D facial feature-point motion as MPEG-4 (Moving Picture Experts Group) facial animation parameters, yielding a multimodal library of synchronized Mel-frequency cepstral coefficients and facial animation parameters. The collaborative filtering module combines the Mel-frequency cepstral coding of newly input voice with this library to solve for facial animation parameters synchronized with the voice. The animation module then drives the face model with the resulting facial animation parameters. The system offers improved realism, real-time performance, and a wider range of application environments.
Owner:INST OF AUTOMATION CHINESE ACAD OF SCI
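
One simple reading of the collaborative filtering step is a similarity-weighted lookup in the multimodal library: find the stored MFCC frames closest to the incoming voice frame and blend their synchronized facial animation parameters (FAPs). The Python sketch below implements that reading; the distance metric, k, and the dimensions are assumptions, not details from the patent.

import numpy as np

def predict_faps(mfcc_frame, mfcc_library, fap_library, k=5):
    """Collaborative-filtering-style lookup: find the k library voice
    frames most similar to the input MFCC frame and blend their stored
    facial animation parameters, weighted by similarity.

    mfcc_library: (n, d_mfcc) MFCC codes from the multimodal sync library;
    fap_library: (n, d_fap) MPEG-4 FAPs recorded in sync with them."""
    dists = np.linalg.norm(mfcc_library - mfcc_frame, axis=1)
    nearest = np.argsort(dists)[:k]
    # Inverse-distance weights; epsilon guards against an exact match.
    w = 1.0 / (dists[nearest] + 1e-8)
    w /= w.sum()
    return w @ fap_library[nearest]

# Example: 13-dim MFCC frames mapped to 68 MPEG-4 FAPs (assumed sizes).
rng = np.random.default_rng(0)
mfccs, faps = rng.normal(size=(500, 13)), rng.normal(size=(500, 68))
frame = rng.normal(size=13)
synced_fap = predict_faps(frame, mfccs, faps)  # drives the face model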