# FramePack - Image-to-Video Generation

An AI application that converts static images into dynamic videos. Upload a character image, add a motion description, and generate smooth videos!

## How to Use

1. Upload a character image
2. Enter a prompt describing the desired motion (e.g., "The girl dances gracefully")
3. Adjust the video length and other optional parameters
4. Click the "Start Generation" button
5. Wait for the video to generate (generation is progressive: the video is extended section by section)

## Example Prompts

- "The girl dances gracefully, with clear movements, full of charm."
- "The man dances energetically, leaping mid-air with fluid arm swings and quick footwork."
- "A character doing some simple body movements."

## Technical Features

- Based on Hunyuan Video and the FramePack architecture
- Supports operation on low-memory GPUs
- Can generate videos up to 120 seconds long
- Uses TeaCache technology to accelerate the generation process
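The core idea behind TeaCache-style acceleration is to skip recomputing the expensive denoising step when consecutive diffusion steps would change the output very little, reusing a cached result instead. A toy sketch of that caching pattern (this is an illustration of the concept only, not FramePack's actual implementation; the threshold and "change estimate" values are made up):

```python
def run_with_cache(compute_step, change_estimates, threshold=0.1):
    """Run a sequence of denoising steps, reusing the cached output
    whenever the accumulated estimated change since the last full
    compute stays below `threshold`.

    compute_step(i)     -> output of step i (the expensive model call)
    change_estimates[i] -> cheap estimate of how much step i differs
                           from the previous step
    """
    outputs = []
    cached = None
    accumulated = 0.0
    full_computes = 0
    for i, estimate in enumerate(change_estimates):
        accumulated += estimate
        if cached is None or accumulated >= threshold:
            cached = compute_step(i)  # expensive: actually run the model
            full_computes += 1
            accumulated = 0.0
        outputs.append(cached)        # cheap: reuse the cached result
    return outputs, full_computes


# With 10 steps that each change little, only a few full computes run:
outs, n_computes = run_with_cache(lambda i: f"out{i}", [0.02] * 10)
```

The speedup comes from trading a small accuracy loss (reused outputs) for skipped model evaluations, which is why TeaCache can be toggled off when maximum fidelity matters.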
## Notes

- Video generation runs in reverse order: the ending motion is generated before the starting motion
- On first use the app downloads the model (approximately 30 GB), so please be patient
- If you encounter an out-of-memory error, increase the value of "GPU inference reserved memory"
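The reverse-order note above can be pictured with a small sketch: each newly generated section is prepended to the clip, so the ending is produced first and the video grows toward its beginning (a toy illustration of the ordering only, not the actual sampling code):

```python
def generate_backwards(num_sections, frames_per_section):
    """Toy illustration of reverse-order extension: section 0 is the
    *end* of the video and is generated first; each later section is
    prepended, extending the clip backwards in time."""
    video = []
    for section in range(num_sections):
        frames = [f"sec{section}_frame{i}" for i in range(frames_per_section)]
        video = frames + video  # prepend: the clip grows toward its start
    return video


# Three sections of two frames each: the last-generated section
# ends up at the front of the finished clip.
clip = generate_backwards(3, 2)
```

This ordering is why a partially finished preview already shows the final motion: the beginning of the clip is what gets filled in last.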
---

Original Project: [FramePack GitHub](https://github.com/lllyasviel/FramePack)