Showing 20 open source projects for "text to video"

  • 1
    Video Diffusion - Pytorch

    Implementation of Video Diffusion Models

    ...Any new developments for text-to-video synthesis will be centralized at Imagen-pytorch. For conditioning on text, they derived text embeddings by first passing the tokenized text through BERT-large. You can also directly pass in the descriptions of the video as strings, if you plan on using BERT-base for text conditioning. This repository also contains a handy Trainer class for training on a folder of gifs.
    Downloads: 0 This Week
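
    A minimal training sketch, modeled on the repository's README-style usage; the argument names and values shown here are assumptions and may differ between versions:

        import torch
        from video_diffusion_pytorch import Unet3D, GaussianDiffusion, Trainer

        # 3D U-Net with BERT-based text conditioning enabled
        model = Unet3D(dim=64, use_bert_text_cond=True, dim_mults=(1, 2, 4, 8))

        diffusion = GaussianDiffusion(
            model,
            image_size=32,   # frame height and width
            num_frames=5,    # frames per clip
            timesteps=1000,  # diffusion steps
        )

        # toy batch of videos: (batch, channels, frames, height, width)
        videos = torch.randn(2, 3, 5, 32, 32)

        # condition directly on raw strings; text embeddings are derived internally via BERT
        loss = diffusion(videos, cond=['a whale breaching from afar',
                                       'fireworks with blue and green sparkles'])
        loss.backward()

        # the bundled Trainer class trains on a folder of .gif clips
        trainer = Trainer(diffusion, './data', train_batch_size=4, train_lr=1e-4)
        trainer.train()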
  • 2
    Make-A-Video - Pytorch (wip)

    Implementation of Make-A-Video, a new SOTA text-to-video generator

    Implementation of Make-A-Video, a new SOTA text-to-video generator from Meta AI, in Pytorch. It combines pseudo-3D convolutions (axial convolutions) with temporal attention and shows much better temporal fusion. Pseudo-3D convolutions are not a new concept; they have been explored before in other contexts, for example for protein contact prediction as "dimensional hybrid residual networks".
    Downloads: 0 This Week
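
    The factorized (pseudo-3D) convolution idea can be illustrated with a small stand-alone module; this is a conceptual sketch in plain PyTorch, not the package's own API:

        import torch
        from torch import nn

        class PseudoConv3d(nn.Module):
            """Factorized 3D convolution: a 2D conv over space followed by a 1D conv over time."""
            def __init__(self, dim, kernel_size=3):
                super().__init__()
                pad = kernel_size // 2
                self.spatial = nn.Conv2d(dim, dim, kernel_size, padding=pad)
                # temporal conv initialized to the identity, so pretrained image weights are preserved
                self.temporal = nn.Conv1d(dim, dim, kernel_size, padding=pad)
                nn.init.dirac_(self.temporal.weight)
                nn.init.zeros_(self.temporal.bias)

            def forward(self, x):                      # x: (batch, channels, frames, height, width)
                b, c, f, h, w = x.shape
                x = x.permute(0, 2, 1, 3, 4).reshape(b * f, c, h, w)
                x = self.spatial(x)                    # convolve each frame spatially
                x = x.reshape(b, f, c, h, w).permute(0, 3, 4, 2, 1).reshape(b * h * w, c, f)
                x = self.temporal(x)                   # convolve each pixel location across time
                return x.reshape(b, h, w, c, f).permute(0, 3, 4, 1, 2)

        video = torch.randn(1, 8, 4, 16, 16)           # (batch, channels, frames, height, width)
        assert PseudoConv3d(8)(video).shape == video.shape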
  • 3
    Wan2.2

    Wan2.2: Open and Advanced Large-Scale Video Generative Model

    ...The model is trained on significantly larger datasets than its predecessor, greatly enhancing motion complexity, semantic understanding, and aesthetic diversity. Wan2.2 also open-sources a 5-billion parameter high-compression VAE-based hybrid text-image-to-video (TI2V) model that supports 720P video generation at 24fps on consumer-grade GPUs like the RTX 4090. It supports multiple video generation tasks including text-to-video.
    Downloads: 131 This Week
  • 4
    Wan2.1

    Wan2.1: Open and Advanced Large-Scale Video Generative Model

    Wan2.1 is a foundational open-source large-scale video generative model developed by the Wan team, providing high-quality video generation from text and images. It employs advanced diffusion-based architectures to produce coherent, temporally consistent videos with realistic motion and visual fidelity. Wan2.1 focuses on efficient video synthesis while maintaining rich semantic and aesthetic detail, enabling applications in content creation, entertainment, and research. ...
    Downloads: 82 This Week
  • 5
    CogVideo

    Text and image to video generation: CogVideoX and CogVideo

    CogVideo is an open-source family of advanced video generation models that can create videos from text, images, or existing video inputs. Built on large-scale Transformer and diffusion architectures, it enables multimodal generation across text-to-video, image-to-video, and video continuation tasks. The latest CogVideoX models offer higher resolution outputs, longer video durations, and improved controllability through prompt engineering. ...
    Downloads: 24 This Week
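
    For orientation, CogVideoX checkpoints can also be driven through the Hugging Face diffusers integration; a hedged sketch in which the model ID, frame count, and fps are assumptions that depend on the checkpoint used:

        import torch
        from diffusers import CogVideoXPipeline
        from diffusers.utils import export_to_video

        # load a CogVideoX checkpoint (model ID assumed; larger variants exist)
        pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-2b", torch_dtype=torch.float16).to("cuda")

        video = pipe(
            prompt="a panda playing guitar in a sunny meadow, cinematic lighting",
            num_inference_steps=50,
            num_frames=49,
        ).frames[0]

        export_to_video(video, "cogvideox_sample.mp4", fps=8)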
  • 6
    Phenaki - Pytorch

    Implementation of Phenaki Video, which uses Mask GIT

    ...This repository will also endeavor to allow the researcher to train on text-to-image and then text-to-video. Similarly, for unconditional training, the researcher should be able to first train on images and then fine-tune on video.
    Downloads: 1 This Week
  • 7
    LTX-2.3

    Official Python inference and LoRA trainer package

    LTX-2.3 is an open-source multimodal artificial intelligence foundation model developed by Lightricks for generating synchronized video and audio from prompts or other inputs. Unlike most earlier video generation systems that only produced silent clips, LTX-2 combines video and audio generation in a unified architecture capable of producing coherent audiovisual scenes. The model uses a diffusion-transformer-based architecture designed to generate high-fidelity visual frames while...
    Downloads: 189 This Week
  • 8
    Open-Sora

    Open-Sora: Democratizing Efficient Video Production for All

    Open-Sora is an open-source initiative aimed at democratizing high-quality video production. It offers a user-friendly platform that simplifies the complexities of video generation, making advanced video techniques accessible to everyone. The project embraces open-source principles, fostering creativity and innovation in content creation. Open-Sora provides tools, models, and resources to create high-quality videos, aiming to lower the entry barrier for video production and support diverse...
    Downloads: 24 This Week
  • 9
    HunyuanCustom

    Multimodal-Driven Architecture for Customized Video Generation

    HunyuanCustom is a multimodal video customization framework by Tencent Hunyuan, aimed at generating customized videos featuring particular subjects (people, characters) under flexible conditions, while maintaining subject/identity consistency. It supports conditioning via image, audio, video, and text, and can perform subject replacement in videos, generate avatars speaking given audio, or combine multiple subject images.
    Downloads: 0 This Week
  • 10
    OpenMontage

    World's first open-source, agentic video production system

    OpenMontage is an open-source, agent-driven video production system that transforms AI coding assistants into fully automated multimedia creation pipelines. Instead of focusing on a single capability such as text-to-video generation, it treats video production as a structured, multi-stage workflow that mirrors how a real production team operates, including research, scripting, asset generation, editing, and final rendering.
    Downloads: 53 This Week
  • 11
    HunyuanVideo

    HunyuanVideo: A Systematic Framework For Large Video Generation Model

    HunyuanVideo is a cutting-edge framework designed for large-scale video generation, leveraging advanced AI techniques to synthesize videos from various inputs. It is implemented in PyTorch, providing pre-trained model weights and inference code for efficient deployment. The framework aims to push the boundaries of video generation quality, incorporating multiple innovative approaches to improve the realism and coherence of the generated content. Release of FP8 model weights to reduce GPU...
    Downloads: 5 This Week
  • 12
    Vidi2

    Large Multimodal Models for Video Understanding and Editing

    Vidi is a family of large multimodal models developed for deep video understanding and editing tasks, integrating vision, audio, and language to allow sophisticated querying and manipulation of video content. It’s designed to process long-form, real-world videos and answer complex queries such as “when in this clip does X happen?” or “where in the frame is object Y during that moment?” — offering temporal retrieval, spatio-temporal grounding (i.e. locating objects over time + space), and...
    Downloads: 0 This Week
  • 13
    AI YouTube Shorts Generator

    A python tool that uses GPT-4, FFmpeg, and OpenCV

    AI-YouTube-Shorts-Generator is a Python-based tool that automates the creation of short-form vertical video clips (“shorts”) from longer source videos — ideal for adapting content for platforms like YouTube Shorts, Instagram Reels, or TikTok. It analyzes input video (whether a local file or a YouTube URL), transcribes audio (with optional GPU-accelerated speech-to-text), uses an AI model to identify the most compelling or engaging segments, and then crops/resizes the video and applies subtitle overlays, producing a polished short video without manual editing. ...
    Downloads: 6 This Week
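
    The cut-and-reframe step of such a pipeline can be sketched with ffmpeg; an illustrative helper (file names, timestamps, and the centered 9:16 crop are assumptions, not the tool's actual invocation):

        import subprocess

        def cut_vertical_clip(src, start, end, out="short.mp4"):
            """Cut [start, end] seconds from src and center-crop it to a 9:16 vertical frame.
            Illustrative only; the real tool picks segments with an LLM and burns in subtitles."""
            subprocess.run([
                "ffmpeg", "-y",
                "-ss", str(start), "-to", str(end), "-i", src,
                # keep the centre of the frame, cropped to a 9:16 aspect ratio, then scale to 1080x1920
                "-vf", "crop=ih*9/16:ih,scale=1080:1920",
                "-c:a", "copy",
                out,
            ], check=True)

        cut_vertical_clip("lecture.mp4", start=72.0, end=131.5)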
  • 14
    VideoCrafter2

    Overcoming Data Limitations for High-Quality Video Diffusion Models

    VideoCrafter is an open-source video generation and editing toolbox designed to create high-quality video content. It features models for both text-to-video and image-to-video generation. The system is optimized for generating videos from textual descriptions or still images, leveraging advanced diffusion models. VideoCrafter2, an upgraded version, improves on its predecessor by enhancing motion dynamics and concept combinations, especially in low-data scenarios. ...
    Downloads: 17 This Week
  • 15
    BWR Ai watermark remover

    AI-powered tool to quickly remove watermarks from videos flawlessly

    Blue Wave Remover is an advanced AI-driven video watermark removal software designed to effortlessly eliminate logos, text, timestamps, and watermarks from video content. Utilizing cutting-edge computer vision and generative AI algorithms, it accurately detects and removes both static and moving watermarks while preserving the original video's quality, colors, and clarity. The program supports popular video formats and offers batch processing for fast and efficient removal on multiple files. ...
    Downloads: 17 This Week
  • 16
    Aphantasia

    CLIP + FFT/DWT/RGB = text to image/video

    This is a collection of text-to-image tools, evolved from the artwork of the same name. It is based on the CLIP model and the Lucent library, with FFT/DWT/RGB parameterizations (no-GAN generation). Illustrip (text-to-video with motion and depth) and DWT (wavelet) parameterization have been added. Check also the colabs below, with VQGAN and SIREN+FFM generators. Tested on Python 3.7 with PyTorch 1.7.1 or 1.8.
    Downloads: 0 This Week
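
    The core CLIP-guided, FFT-parameterized optimization loop can be sketched as follows; a conceptual illustration (spectrum shape, normalization, learning rate, and step count are assumptions), not the project's actual code:

        import torch
        import clip  # OpenAI CLIP package

        device = "cuda" if torch.cuda.is_available() else "cpu"
        model, _ = clip.load("ViT-B/32", device=device)
        for p in model.parameters():
            p.requires_grad_(False)

        with torch.no_grad():
            text_feat = model.encode_text(clip.tokenize(["a coral reef made of glass"]).to(device))
            text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

        # the image is parameterized by its 2D Fourier spectrum rather than by raw pixels
        size = 224
        spectrum = (0.01 * torch.randn(1, 3, size, size // 2 + 1, 2, device=device)).requires_grad_(True)
        opt = torch.optim.Adam([spectrum], lr=0.05)

        for step in range(200):
            img = torch.fft.irfft2(torch.view_as_complex(spectrum), s=(size, size))
            img = torch.sigmoid(img)                          # squash pixels to [0, 1]
            img_feat = model.encode_image((img - 0.5) / 0.5)  # crude normalization for brevity
            img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
            loss = -(img_feat * text_feat).sum()              # maximize CLIP similarity to the text
            opt.zero_grad()
            loss.backward()
            opt.step()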
  • 17
    StoryTeller

    Multimodal AI Story Teller, built with Stable Diffusion, GPT, etc.

    A multimodal AI story teller, built with Stable Diffusion, GPT, and neural text-to-speech (TTS). Given a prompt as an opening line of a story, GPT writes the rest of the plot; Stable Diffusion draws an image for each sentence; a TTS model narrates each line, resulting in a fully animated video of a short story, replete with audio and visuals. To develop locally, install dev dependencies and install pre-commit hooks.
    Downloads: 1 This Week
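
    The final assembly step (each still image shown for the duration of its narration, concatenated into one video) can be sketched with the moviepy 1.x API; an illustrative helper, not the project's own implementation, and the file names are assumptions:

        from moviepy.editor import AudioFileClip, ImageClip, concatenate_videoclips

        def assemble(pairs, out="story.mp4"):
            """pairs: list of (image_path, narration_wav) produced per sentence upstream."""
            clips = []
            for image_path, wav_path in pairs:
                audio = AudioFileClip(wav_path)
                # show each still image for exactly as long as its narration lasts
                clips.append(ImageClip(image_path).set_duration(audio.duration).set_audio(audio))
            concatenate_videoclips(clips, method="compose").write_videofile(out, fps=24)

        assemble([("sent_0.png", "sent_0.wav"), ("sent_1.png", "sent_1.wav")])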
  • 18
    Amiga Memories

    A walk along memory lane

    ...The spoken text will be read by a voice synthesizer (Text To Speech, or TTS), while the written text is simply drawn on the image as subtitles. Here, in addition to the spoken and written narration, the script controls the camera movements as well as the LED activity of the computer. Amiga Memories' video images are computed by the GameStart 3D engine (pre-HARFANG 3D).
    Downloads: 0 This Week
  • 19
    NÜWA - Pytorch

    Implementation of NÜWA, attention network for text to video synthesis

    Implementation of NÜWA, state of the art attention network for text-to-video synthesis, in Pytorch. It also contains an extension into video and audio generation, using a dual decoder approach. It seems as though a diffusion-based method has taken the new throne for SOTA. However, I will continue on with NUWA, extending it to use multi-headed codes + hierarchical causal transformer. I think that direction is untapped for improving on this line of work.
    Downloads: 0 This Week
  • 20
    Text2Video

    Software tool that converts text to video for a more engaging experience

    Text2Video is a software tool that converts text to video for a more engaging learning experience. I started this project because during this semester I was given many reading assignments, and I felt frustrated reading long texts. For me, it was very time- and energy-consuming to learn something through reading. So I imagined, "What if there was a tool that turned text into something more engaging, such as a video? Wouldn't it improve my learning experience?" ...
    Downloads: 1 This Week