# 👻🚧 GhostPackDemo (BETA) – Next-Gen AI Video Generator
**Beta Demo – Try experimental AI video creation, directly on your GPU!**
[![Python](https://img.shields.io/badge/python-3.8%2B-blue?logo=python)](https://python.org)
[![License](https://img.shields.io/badge/license-Apache%202.0-green)](LICENSE)
[![Live Demo](https://img.shields.io/badge/🤗%20Spaces-Live%20Demo-orange)](https://huggingface.co/spaces/ghostai1/GhostPackDemo)
<br>
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6421b1c68adc8881b974a89d/2RH49llUgKsmWY7Hu8yBD.gif"
alt="GhostPack Animated Banner"
style="width: 920px; height: 260px; max-width: 100%; border-radius: 18px; box-shadow: 0 0 48px #00ffcc; margin-bottom: 36px; display: block;">
</div>
---
## 🚀 What is GhostPackDemo? (Beta)
**GhostPackDemo** is an open-source AI video generator for your local GPU. This BETA combines **HunyuanVideo** and **FramePack** into a single Python pipeline + live Gradio UI.
- 🦾 **Fast:** 40% faster with Teacache, Sage-Attention, CUDA tweaks.
- 🧠 **Efficient:** 50% less VRAM (context packing, tcmalloc, memory cache).
- 💻 **Laptop-Ready:** 6GB VRAM & 8GB RAM minimum (GTX 1650/RTX 3050+).
- 🔒 **Runs Local:** All processing on your own hardware – no cloud!
- 🎛️ **Live Preview:** See every frame build in real time, full workflow control.
> **Beta Notice:** Expect rough edges, limited samplers, and edge-case bugs! This demo build prioritizes clarity and reproducibility for AI devs.
---
## ✨ Features (Beta)
- **Veo 3-Level AI:** Next-frame prediction for ultra-realistic video motion.
- **Phantom Speed:** Teacache + Sage-Attention, 12–15s/frame (RTX 3060).
- **Config Control:** Batch size, frame window, prompt, CRF, see everything live.
- **Open Source:** Apache 2.0 – fork, remix, contribute!
- **Export:** High quality MP4 (CRF 0–100).
---
## 🧙 Math Sorcery & Hardware
- <span style="color:#00ffa2">**Speed:**</span> Teacache ~40% (`T_total ≈ 0.6 × T_base`), Sage-Attention +10%, CUDA tweaks –15% latency
- <span style="color:#a200ff">**Memory:**</span> Context packing –50% VRAM, tcmalloc –20% overhead, memory cache –25%
- <span style="color:#00c3ff">**Compute:**</span> Dynamic batching +50% throughput
- <span style="color:#ffff00">**Efficiency:**</span> Power save –30% idle, Thread tuning +15% CPU
- <span style="color:#ff5e57">**GPU Benchmarks (time per frame):**</span>
- GTX 1650 (6GB): 18–25s/frame
- RTX 3050 (8GB): 15–20s/frame
- RTX 3060 (12GB): 10–15s/frame
- RTX 4090 (24GB): 1.5–2.5s/frame
**Minimum:** NVIDIA GPU (6GB+), 8GB RAM, 30GB+ disk, Python 3.10+, CUDA 12.6+
**Recommended:** RTX 3060 or better
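The per-frame benchmarks above translate directly into total render times. A minimal sketch of that arithmetic follows; the 30 fps clip rate is an illustrative assumption, not a documented GhostPack default:

```python
# Rough render-time estimator from the per-GPU benchmark ranges above.
# The 30 fps clip rate is an illustrative assumption, not a GhostPack default.
SECONDS_PER_FRAME = {
    "GTX 1650 (6GB)": (18.0, 25.0),
    "RTX 3050 (8GB)": (15.0, 20.0),
    "RTX 3060 (12GB)": (10.0, 15.0),
    "RTX 4090 (24GB)": (1.5, 2.5),
}

def estimate_minutes(gpu: str, clip_seconds: float, fps: int = 30) -> tuple[float, float]:
    """Return (best-case, worst-case) total render time in minutes."""
    low, high = SECONDS_PER_FRAME[gpu]
    frames = clip_seconds * fps
    return frames * low / 60, frames * high / 60

best, worst = estimate_minutes("RTX 3060 (12GB)", clip_seconds=5)
print(f"5s clip on an RTX 3060: roughly {best:.0f}-{worst:.0f} minutes")
```

In other words, even a short clip is a multi-minute job on mid-range cards, which is why the live frame preview matters.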
---
## 📑 .env Setup (IMPORTANT)
> **You must add your Hugging Face token to `.env` to download models and samplers!**
Create a file called `.env` in the project root with:
```
HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```
- Get your token from: https://huggingface.co/settings/tokens
- If missing, model download and pipeline will fail.
---
## 📺 Demo & Screenshots
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6421b1c68adc8881b974a89d/k8pgUlg4OvdUZpbMNTcp5.gif" alt="GhostPack Demo GIF" style="width: 470px; height: auto; border-radius: 18px; box-shadow: 0 0 32px #ff00ff; margin-bottom: 28px;">
<div style="display: flex; flex-direction: row; justify-content: center; gap: 28px;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6421b1c68adc8881b974a89d/7ABE2lOA4LOUtPfh1mhxP.png" alt="Main Interface" style="width: 320px; height: auto; border-radius: 12px; box-shadow: 0 0 18px #00ffcc;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6421b1c68adc8881b974a89d/9qNVRX2eM2iCY8xQKcOwW.png" alt="Advanced Settings" style="width: 320px; height: auto; border-radius: 12px; box-shadow: 0 0 18px #00ffcc;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6421b1c68adc8881b974a89d/--fIS9ITg4-VqN22ySoa2.png" alt="Logs Display" style="width: 320px; height: auto; border-radius: 12px; box-shadow: 0 0 18px #00ffcc;">
</div>
<sub>
<strong>Main Interface</strong> &nbsp; β€’ &nbsp; <strong>Advanced Settings</strong> &nbsp; β€’ &nbsp; <strong>Logs</strong>
</sub>
</div>
> 🎬 [More Demo Clips](https://github.com/ghostai1/GhostPack/blob/main/demo_videos)
---
## βš™οΈ Installation (Beta)
> πŸ’Ύ *30GB+ free space needed. Installs models on first run.*
**Ubuntu**
```bash
git clone https://github.com/ghostai1/GhostPack
cd GhostPack
chmod +x install_ubuntu.sh
./install_ubuntu.sh
```
**Windows**
```bat
git clone https://github.com/ghostai1/GhostPack
cd GhostPack
install.bat
```
**macOS**
```bash
git clone https://github.com/ghostai1/GhostPack
cd GhostPack
chmod +x install_macos.sh
./install_macos.sh
```
---
## ⚡ Quick Start
```bash
source ~/ghostpack_venv/bin/activate
cd ~/ghostpack_venv
python ghostgradio.py --port 5666 --server 0.0.0.0
```
- πŸ–ΌοΈ Upload an image
- πŸ’¬ Enter a prompt (e.g. β€œA graceful dance movement”)
- πŸ”„ Enable Teacache, set video seconds, steps, CRF
- πŸ‘€ See live frame preview + logs
- πŸ’Ύ Export MP4 instantly
- πŸ–₯️ Monitor GPU: `nvidia-smi`
---
## πŸ—ΊοΈ Roadmap
- πŸ—£οΈ **AI Voice**: Add local voiceover/narration
- πŸ–ΌοΈ **AI Start Images**: Next-level starting frames
- πŸ•ΆοΈ **VR Support**: Immersive AI video output
- πŸ”’ **More Samplers/Models** coming soon
---
## 💬 Community & Help
- [Discord](https://discord.gg/ghostpack) – Feedback & support
- [GitHub Issues](https://github.com/ghostai1/GhostPack/issues) – Bugs & requests
- [Hugging Face Space](https://huggingface.co/spaces/ghostai1/GhostPackDemo) – Try the live demo
---
## πŸ‘¨β€πŸ’» Contributing
- Fork, branch, and PR! Beta testers wanted!
- See [CONTRIBUTING.md](https://github.com/ghostai1/GhostPack/blob/main/CONTRIBUTING.md)
---
## 🪪 License
Apache 2.0
**GhostPack by [ghostai1](https://github.com/ghostai1/GhostPack) · [Hugging Face](https://huggingface.co/ghostai1)**
*BETA build · Created June 11, 2025*
---