Viggle
Powered by JST-1, the first video-3D foundation model with genuine physics understanding, Viggle lets you make any character move however you want. Animate a static character with a text motion prompt, meme anyone, dance like a pro, star in your favorite movie scenes, or swap in your own characters, all through Viggle's controllable video generation. Bring your creative scenarios to life and share the fun with loved ones. To get started, upload a character image of any size, select a motion template from Viggle's library, and generate your video; within minutes you'll see yourself or your friends blended into captivating scenes. For more control, upload both an image and a video: the character will mimic the movements from your video, which is ideal for custom content and for transforming friends and family into meme-worthy animations.
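Viggle is an app-based product and the text above describes two generation modes rather than an API; purely as an illustration, those modes (image plus a library motion template, or image plus a driving video) can be modeled as below. Every name here (`animate`, `MotionSource`, the file names) is hypothetical and does not come from any real Viggle interface.

```python
from dataclasses import dataclass

# Hypothetical data model for Viggle's two generation modes described above.
# None of these names correspond to a real Viggle API.

@dataclass
class MotionSource:
    kind: str       # "template" (from the motion library) or "video" (user upload)
    reference: str  # template name, or path to the driving video

def animate(character_image: str, motion: MotionSource) -> str:
    """Describe the generation request implied by the chosen mode."""
    if motion.kind == "template":
        return f"animate {character_image} with library template '{motion.reference}'"
    if motion.kind == "video":
        return f"make {character_image} mimic movements from {motion.reference}"
    raise ValueError(f"unknown motion source: {motion.kind}")

# Mode 1: static character image + motion template from the library
print(animate("friend.png", MotionSource("template", "pro-dance")))
# Mode 2: static character image + driving video for custom motion transfer
print(animate("friend.png", MotionSource("video", "my_clip.mp4")))
```

The point of the sketch is simply that both modes share one character input and differ only in where the motion comes from.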
Learn more
Wan2.2-Animate
Wan2.2-Animate is a specialized module within the Wan video generation framework, designed for high-fidelity character animation and character replacement. It lets users transform static images into dynamic videos, or swap subjects within existing footage, while preserving realism and motion consistency. It takes two primary inputs: a reference image that defines the character’s appearance, and a reference video that provides motion, expressions, and scene context. From this combination it can animate a still character by replicating the body movements, gestures, and facial expressions in the source video, or replace the original subject in a video while maintaining the original lighting, camera movement, and environment for seamless integration. To reproduce motion and expressions accurately, it relies on techniques such as spatially aligned skeleton signals and implicit facial feature extraction.
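The two-input, two-mode control flow described above can be sketched as follows. This is a structural illustration only: the real Wan2.2-Animate is a diffusion-based video model, and every name here (`build_conditioning`, `generate`, the mode strings) is a hypothetical stand-in, not its actual interface.

```python
from dataclasses import dataclass

# Illustrative sketch of Wan2.2-Animate's control flow as described above.
# All names are hypothetical; the real system is a diffusion video model.

@dataclass
class Conditioning:
    skeleton: str     # spatially aligned skeleton signal from the reference video
    face: str         # implicit facial feature embedding
    environment: str  # lighting / camera / background context

def build_conditioning(reference_video: str) -> Conditioning:
    # In the real system these would come from pose estimation and a face encoder.
    return Conditioning(
        skeleton=f"skeleton({reference_video})",
        face=f"face_features({reference_video})",
        environment=f"scene({reference_video})",
    )

def generate(reference_image: str, reference_video: str, mode: str) -> str:
    cond = build_conditioning(reference_video)
    if mode == "animate":
        # Animation mode: the still character performs the video's motion.
        return f"{reference_image} driven by {cond.skeleton} and {cond.face}"
    if mode == "replace":
        # Replacement mode: swap the subject but keep the original scene intact.
        return (f"{reference_image} composited into {cond.environment}, "
                f"driven by {cond.skeleton} and {cond.face}")
    raise ValueError(f"unknown mode: {mode}")

print(generate("hero.png", "dance.mp4", mode="animate"))
print(generate("hero.png", "dance.mp4", mode="replace"))
```

Note how both modes share the same motion conditioning; only replacement mode additionally reuses the source video's scene context.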
Learn more
CrazyTalk Animator
CrazyTalk Animator 3 (CTA3) is an animation solution that enables users of all levels to create professional animations and presentations with minimal effort. With CTA3, anyone can instantly bring an image, logo, or prop to life by applying bouncy, elastic motion effects. For characters, CTA3 ships with 2D character templates, extensive motion libraries, a powerful 2D bone-rig editor, facial puppets, and audio lip-syncing tools, giving users fine-grained control when animating 2D talking characters for videos, web, games, apps, and presentations. Key features include: animating 2D characters with 3D motions; elastic and bouncy curve editing; facial puppeteering and audio lip-syncing; 2D facial free-form deformation; a 3D camera system with motion-path and timeline editing; motion-curve and render-style controls; 2D character creation, rigging, and bone tools; and character templates for humans, animals, and more.
Learn more
Act-Two
Act-Two animates any character by transferring movements, expressions, and speech from a driving performance video onto a static image or reference video of your character. After selecting the Gen‑4 Video model and then the Act‑Two icon in Runway’s web interface, you supply two inputs: a performance video of an actor enacting your desired scene, and a character input (either a single image or a video clip); you can optionally enable gesture control to map hand and body movements onto character images. Act‑Two automatically adds environmental and camera motion to still images; supports a range of angles, non‑human subjects, and artistic styles; and retains the original scene dynamics when the character input is a video (though with facial rather than full‑body gesture mapping). Users can adjust facial expressiveness on a sliding scale to balance natural motion with character consistency, preview results in real time, and generate high‑resolution clips up to 30 seconds long.
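The controls described above can be summarized as a small request object. This is not Runway's API; the field names, and the assumption of a 1–5 expressiveness scale, are illustrative guesses. The one rule it encodes comes straight from the text: gesture control maps body movement onto character images, while character videos get facial mapping only.

```python
from dataclasses import dataclass

# Hypothetical request object mirroring the Act-Two controls described above.
# Field names and the 1-5 expressiveness scale are assumptions, not Runway's API.

@dataclass
class ActTwoRequest:
    performance_video: str    # actor enacting the desired scene
    character: str            # single image or video clip of the character
    character_is_video: bool  # video inputs keep scene dynamics, facial mapping only
    gesture_control: bool = False  # map hand/body movements (character images only)
    expressiveness: int = 3        # facial expressiveness slider (assumed 1-5 scale)

    def validate(self) -> None:
        if self.character_is_video and self.gesture_control:
            # Per the description, full-body gesture mapping applies to character
            # images; character videos receive facial mapping only.
            raise ValueError("gesture control applies to character images only")

# Valid: a character image with gesture control enabled
ActTwoRequest("scene.mp4", "hero.png", character_is_video=False,
              gesture_control=True).validate()
```

Modeling the constraint in `validate` rather than at construction keeps the sketch close to how a UI would surface the rule: the combination is representable but rejected before generation.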
Learn more