---
license: apache-2.0
base_model:
- lllyasviel/FramePackI2V_HY
pipeline_tag: text-to-image
---
# Animetic Light
[<img src="./assets/images/main_image.webp" alt="main image" width="400">](./assets/images/main_image.webp)
[Japanese README](README_ja.md) | [English README](README.md)
An experimental model for background generation and relighting, targeting anime-style images.
It is provided as a LoRA compatible with FramePack's 1-frame inference.
For photographic relighting, IC-Light V2 is recommended.
- [IC-Light V2 (Flux-based IC-Light models) · lllyasviel IC-Light · Discussion #98](https://github.com/lllyasviel/IC-Light/discussions/98)
- [IC-Light V2-Vary · lllyasviel IC-Light · Discussion #109](https://github.com/lllyasviel/IC-Light/discussions/109)
IC-Light V2-Vary is available on Hugging Face Spaces, while IC-Light V2 can be used via API on platforms like fal.ai.
- [fal-ai/iclight-v2](https://fal.ai/models/fal-ai/iclight-v2)
## Features
- Generates backgrounds based on prompts and performs relighting while preserving the character region.
## Generation Examples
[<img src="./assets/images/sample.webp" alt="animatic light sample" width="500">](./assets/images/sample.webp)
## How to Use
- The recommended image resolution is roughly 1 megapixel in total (around 1024x1024).
- Prepare an image with a simple, flat background.
Even if the image already has a background, roughly filling that background area with a single flat color may still work (a minimal preprocessing sketch follows this list).
[<img src="./assets/images/erased_bg_sample.webp" alt="erase bg sample" width="300">](./assets/images/erased_bg_sample.webp)
- We recommend generating prompts with Gemini Flash 2.5.
A prompt length of 700 characters or more (including spaces and line breaks) is recommended.
A helper prompt for Gemini Flash 2.5 that assists with prompt generation is available [here](./prompt.txt).
Given an image and a brief description, it outputs a complete prompt.
  - Example 1: \<Image of a dragon\> "A dragon flying while breathing blue fire. The background is a volcano."
  - Example 2: \<Image of a dragon\> "Volcano"
- Add the following tags to the beginning of your prompt:
`illustration style`, (optional: `ambient lighting`, `dim lighting`)
- Note: Unsupported image styles (e.g., figure-like) may result in noise or residual backgrounds.
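Below is a minimal, hypothetical preprocessing sketch in Python (using Pillow) of the preparation described above: composite a transparent character cutout over a single flat color, scale the result to roughly one megapixel, and prepend the recommended leading tags to the Gemini-generated prompt. The file names, the flat color, and the helper functions are illustrative assumptions, not part of this repository's tooling.

```python
# Hypothetical preprocessing sketch; assumes Pillow is installed and you already
# have a character cutout with transparency (e.g. "character_rgba.png").
from PIL import Image

TARGET_PIXELS = 1024 * 1024  # aim for roughly 1 megapixel, per the recommendation above

def resize_to_about_one_megapixel(img: Image.Image) -> Image.Image:
    """Scale the image so width * height is close to TARGET_PIXELS, keeping the aspect ratio."""
    scale = (TARGET_PIXELS / (img.width * img.height)) ** 0.5
    new_size = (max(8, round(img.width * scale)), max(8, round(img.height * scale)))
    return img.resize(new_size, Image.LANCZOS)

def flatten_background(rgba: Image.Image, color=(200, 200, 200)) -> Image.Image:
    """Composite the character cutout over a single flat color ("simple, flat background")."""
    background = Image.new("RGB", rgba.size, color)
    background.paste(rgba, mask=rgba.getchannel("A"))  # alpha channel keeps only the character
    return background

character = Image.open("character_rgba.png").convert("RGBA")  # hypothetical input file
prepared = resize_to_about_one_megapixel(flatten_background(character))
prepared.save("input_for_framepack.png")

# Prepend the recommended leading tags to the prompt generated by Gemini Flash 2.5.
generated_prompt = "..."  # paste the 700+ character generated prompt here
final_prompt = "illustration style, ambient lighting\n" + generated_prompt
```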
### ComfyUI Workflow
The workflow file is available [here](./workflows/workflow_animetic_light.json).
A sample image is available [here](./sample_images/dragon.webp).
Please use [xhiroga/ComfyUI-FramePackWrapper_PlusOne](https://github.com/xhiroga/ComfyUI-FramePackWrapper_PlusOne).
For model installation instructions, please refer to the documentation for your inference environment.
Also, please place `animetic_light.safetensors` in the `ComfyUI/models/loras` directory.
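If it helps, here is a small hypothetical sketch of copying the LoRA into place; the ComfyUI install path is an assumption, so adjust it to your environment.

```python
# Hypothetical helper: copy the LoRA into ComfyUI's loras directory.
# COMFYUI_DIR is an assumption; point it at your actual ComfyUI installation.
import shutil
from pathlib import Path

COMFYUI_DIR = Path("~/ComfyUI").expanduser()
lora_dir = COMFYUI_DIR / "models" / "loras"
lora_dir.mkdir(parents=True, exist_ok=True)

shutil.copy2("animetic_light.safetensors", lora_dir / "animetic_light.safetensors")
print(f"Copied LoRA to {lora_dir}")
```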
## Resources
### FramePack
- [GitHub - lllyasviel/FramePack: Lets make video diffusion practical!](https://github.com/lllyasviel/FramePack)
### Inference Environments
- [GitHub - xhiroga/ComfyUI-FramePackWrapper_PlusOne](https://github.com/xhiroga/ComfyUI-FramePackWrapper_PlusOne)
- [GitHub - kijai/ComfyUI-FramePackWrapper](https://github.com/kijai/ComfyUI-FramePackWrapper)
- [GitHub - git-ai-code/FramePack-eichi](https://github.com/git-ai-code/FramePack-eichi)
### Training Environments
- [GitHub - kohya-ss/musubi-tuner](https://github.com/kohya-ss/musubi-tuner)
- [musubi-tuner/docs/framepack_1f.md at main 路 kohya-ss/musubi-tuner](https://github.com/kohya-ss/musubi-tuner/blob/main/docs/framepack_1f.md)
---
[X (Twitter): @ippanorc](https://x.com/ippanorc)