GhostPackDemo (BETA) – Next-Gen AI Video Generator
Beta Demo – Try experimental AI video creation directly on your GPU!
What is GhostPackDemo? (Beta)
GhostPackDemo is an open-source AI video generator for your local GPU. This BETA combines HunyuanVideo and FramePack into a single Python pipeline + live Gradio UI.
- Fast: 40% faster with Teacache, Sage-Attention, and CUDA tweaks.
- Efficient: 50% less VRAM (context packing, tcmalloc, memory cache).
- Laptop-Ready: 6GB VRAM & 8GB RAM minimum (GTX 1650 / RTX 3050+).
- Runs Local: All processing on your own hardware; no cloud!
- Live Preview: See every frame build in real time, with full workflow control.
Beta Notice: Expect rough edges, limited samplers, and edge-case bugs! This demo build prioritizes clarity and reproducibility for AI devs.
Features (Beta)
- Veo 3-Level AI: Next-frame prediction for ultra-realistic video motion.
- Phantom Speed: Teacache + Sage-Attention, 12–15 s/frame (RTX 3060).
- Config Control: Batch size, frame window, prompt, and CRF, all adjustable live.
- Open Source: Apache 2.0 β fork, remix, contribute!
- Export: High-quality MP4 (CRF 0–100).
Math Sorcery & Hardware
- Speed: Teacache ~40% (T_total ≈ 0.6 × T_base), Sage-Attention +10%, CUDA tweaks −15% latency
- Memory: Context packing −50% VRAM, tcmalloc −20% overhead, memory cache −25%
- Compute: Dynamic batching +50% throughput
- Efficiency: Power save −30% idle, thread tuning +15% CPU
- VRAM Needs:
- GTX 1650 (6GB): 18–25 s/frame
- RTX 3050 (8GB): 15–20 s/frame
- RTX 3060 (12GB): 10–15 s/frame
- RTX 4090 (24GB): 1.5–2.5 s/frame
Minimum: NVIDIA GPU (6GB+), 8GB RAM, 30GB+ disk, Python 3.10+, CUDA 12.6+
Recommended: RTX 3060 or better
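As a back-of-envelope check on the numbers above, here is a small sketch (illustrative only, not part of GhostPack's code) that estimates total render time per GPU. It uses the midpoints of the seconds-per-frame ranges in the table and assumes those ranges are pre-Teacache baselines, so the Speed line's T_total ≈ 0.6 × T_base factor applies on top; both assumptions are ours.

```python
# Rough render-time estimator based on the per-frame timings above.
# SECONDS_PER_FRAME values are midpoints of the ranges in the VRAM table
# (an illustrative assumption, not measured data).
SECONDS_PER_FRAME = {
    "GTX 1650": 21.5,
    "RTX 3050": 17.5,
    "RTX 3060": 12.5,
    "RTX 4090": 2.0,
}

def estimate_render_seconds(gpu: str, num_frames: int, teacache: bool = True) -> float:
    """Estimate wall-clock seconds to generate `num_frames` frames."""
    base = SECONDS_PER_FRAME[gpu] * num_frames
    # Teacache speedup from the Speed line: T_total ≈ 0.6 × T_base
    return 0.6 * base if teacache else base
```

For example, 10 frames on an RTX 3060 works out to roughly 125 s without Teacache and 75 s with it.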
.env Setup (IMPORTANT)
You must add your Hugging Face token to .env to download models and samplers!
Create a file called .env in the project root with:
HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
- Get your token from: https://huggingface.co/settings/tokens
- If the token is missing, the model download and pipeline will fail.
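For reference, this is roughly how such a .env file can be read with the standard library alone (the real pipeline may instead rely on python-dotenv or huggingface_hub's own login handling; `load_env` and its parsing rules are an illustrative sketch, not GhostPack's actual loader):

```python
import os

def load_env(path: str = ".env") -> dict:
    """Minimal .env reader: KEY=VALUE lines; blanks and '#' comments ignored."""
    env = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    # Export so downstream tools (e.g. huggingface_hub) can see HF_TOKEN.
    os.environ.update(env)
    return env
```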
Demo & Screenshots
More Demo Clips
Installation (Beta)
30GB+ free disk space needed. Models are downloaded on first run.
Ubuntu
git clone https://github.com/ghostai1/GhostPack
cd GhostPack
chmod +x install_ubuntu.sh
./install_ubuntu.sh
Windows
git clone https://github.com/ghostai1/GhostPack
cd GhostPack
install.bat
macOS
git clone https://github.com/ghostai1/GhostPack
cd GhostPack
chmod +x install_macos.sh
./install_macos.sh
Quick Start
source ~/ghostpack_venv/bin/activate
cd ~/ghostpack_venv
python ghostgradio.py --port 5666 --server 0.0.0.0
- Upload an image
- Enter a prompt (e.g. "A graceful dance movement")
- Enable Teacache; set video seconds, steps, and CRF
- See the live frame preview + logs
- Export MP4 instantly
- Monitor GPU:
nvidia-smi
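GhostPack's exact export path isn't documented here, but MP4 export with a CRF knob is typically an ffmpeg invocation under the hood. The hypothetical helper below only builds such a command (it doesn't run it); note that libx264's native CRF scale is 0–51, so a 0–100 UI slider would need remapping, which is an assumption on our part:

```python
def ffmpeg_export_cmd(frames_glob: str, out_path: str, crf: int = 23, fps: int = 30) -> list[str]:
    """Build (but don't run) an ffmpeg command muxing image frames to MP4."""
    if not 0 <= crf <= 51:  # libx264's actual CRF range
        raise ValueError("CRF must be in 0-51 for libx264")
    return [
        "ffmpeg", "-y",
        "-framerate", str(fps),
        "-pattern_type", "glob", "-i", frames_glob,
        "-c:v", "libx264", "-crf", str(crf),
        "-pix_fmt", "yuv420p",  # widest player compatibility
        out_path,
    ]
```

Pass the resulting list to `subprocess.run` to perform the actual encode.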
Roadmap
- AI Voice: Add local voiceover/narration
- AI Start Images: Next-level starting frames
- VR Support: Immersive AI video output
- More samplers and models coming soon
Community & Help
- Discord – Feedback & support
- GitHub Issues – Bugs & requests
- Hugging Face Space – Try the live demo
Contributing
- Fork, branch, and PR! Beta testers wanted!
- See CONTRIBUTING.md
License
Apache 2.0
GhostPack by ghostai1 · Hugging Face
BETA build · Created June 11, 2025