# πŸ’» How to Run Inference & Test Metrics (FID, CLIP Score, GenEval, DPG-Bench, etc.)

This toolkit automatically runs inference with your model and logs the metric results to wandb as charts for easier comparison. We currently support:

- [x] [FID](https://github.com/mseitzer/pytorch-fid) & [CLIP-Score](https://github.com/openai/CLIP)
- [x] [GenEval](https://github.com/djghosh13/geneval)
- [x] [DPG-Bench](https://github.com/TencentQQGYLab/ELLA)
- [x] [ImageReward](https://github.com/THUDM/ImageReward/tree/main)

### 0. Install the corresponding envs for GenEval and DPG-Bench

Make sure you can activate the following envs:

- `conda activate geneval` ([GenEval](https://github.com/djghosh13/geneval))
- `conda activate dpg` ([DPG-Bench](https://github.com/TencentQQGYLab/ELLA))

### 0.1 Prepare data

We measure FID & CLIP-Score on [MJHQ-30K](https://huggingface.co/datasets/playgroundai/MJHQ-30K). Download the dataset first:

```python
from huggingface_hub import hf_hub_download

hf_hub_download(
  repo_id="playgroundai/MJHQ-30K",
  filename="mjhq30k_imgs.zip",
  local_dir="data/test/PG-eval-data/MJHQ-30K/",
  repo_type="dataset"
)
```

Unzip `mjhq30k_imgs.zip` into its per-category folder structure:
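If you prefer doing this from Python, here is a minimal sketch using the standard-library `zipfile`. The paths follow the download step above; the exact destination may need adjusting to match the archive's internal layout.

```python
import zipfile
from pathlib import Path

# Paths from the download step above; adjust if you saved the zip elsewhere.
ZIP_PATH = Path("data/test/PG-eval-data/MJHQ-30K/mjhq30k_imgs.zip")
OUT_DIR = Path("data/test/PG-eval-data/MJHQ-30K/imgs")

def extract_archive(zip_path=ZIP_PATH, out_dir=OUT_DIR):
    """Extract the MJHQ-30K image archive; returns True on success."""
    zip_path, out_dir = Path(zip_path), Path(out_dir)
    if not zip_path.exists():
        print(f"{zip_path} not found -- run the download step first")
        return False
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(out_dir)
    return True

if __name__ == "__main__":
    extract_archive()
```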

```
data/test/PG-eval-data/MJHQ-30K/imgs/
β”œβ”€β”€ animals
β”œβ”€β”€ art
β”œβ”€β”€ fashion
β”œβ”€β”€ food
β”œβ”€β”€ indoor
β”œβ”€β”€ landscape
β”œβ”€β”€ logo
β”œβ”€β”€ people
β”œβ”€β”€ plants
└── vehicles
```
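After extraction, you can sanity-check that all ten category folders are in place. This helper is our own sketch (the function name is not part of the toolkit); it simply compares the directory against the tree above.

```python
from pathlib import Path

# The ten MJHQ-30K category folders shown in the tree above
EXPECTED = ["animals", "art", "fashion", "food", "indoor",
            "landscape", "logo", "people", "plants", "vehicles"]

def missing_categories(root="data/test/PG-eval-data/MJHQ-30K/imgs"):
    """Return the expected category folders that are missing under root."""
    root = Path(root)
    return [c for c in EXPECTED if not (root / c).is_dir()]

if __name__ == "__main__":
    missing = missing_categories()
    print("missing categories:", missing or "none -- layout looks correct")
```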

### 0.2 Prepare checkpoints

```bash
huggingface-cli download Efficient-Large-Model/Sana_1600M_1024px --repo-type model --local-dir ./output/Sana_1600M_1024px --local-dir-use-symlinks False
```

### 1. Directly run \[Inference and Metric\] on a .pth file

```bash
# We provide four scripts for evaluating metrics:
fid_clipscore_launch=scripts/bash_run_inference_metric.sh
geneval_launch=scripts/bash_run_inference_metric_geneval.sh
dpg_launch=scripts/bash_run_inference_metric_dpg.sh
image_reward_launch=scripts/bash_run_inference_metric_imagereward.sh

# Use the following format to evaluate your models:
# bash $corresponding_metric_launch $your_config_file_path $your_relative_pth_file_path

# example
bash $geneval_launch \
    configs/sana_config/1024ms/Sana_1600M_img1024.yaml \
    output/Sana_1600M_1024px/checkpoints/Sana_1600M_1024px.pth
```

### 2. Run \[Inference and Metric\] on a list of .pth files via a txt file

You can also list all of a job's .pth files in one txt file, e.g. [model_paths.txt](../model_paths.txt).
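The txt file is just one relative checkpoint path per line. A minimal sketch that writes such a file (the checkpoint paths below are hypothetical placeholders; substitute your own job's files):

```python
from pathlib import Path

# Hypothetical checkpoint paths -- substitute your own job's .pth files
paths = [
    "output/your_job_name/checkpoints/epoch_1_step_6666.pth",
    "output/your_job_name/checkpoints/epoch_1_step_8888.pth",
]

out = Path("asset/model_paths.txt")
out.parent.mkdir(parents=True, exist_ok=True)
out.write_text("\n".join(paths) + "\n")
```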

```bash
# Use the following format to evaluate models gathered in a txt file:
# bash $corresponding_metric_launch $your_config_file_path $your_txt_file_path_containing_pth_path

# We suggest following the file tree structure in our project for robust experiments
# example
bash scripts/bash_run_inference_metric.sh \
    configs/sana_config/1024ms/Sana_1600M_img1024.yaml \
    asset/model_paths.txt
```

### 3. You will get the following data tree

```
output
β”œβ”€β”€your_job_name/  (everything will be saved here)
β”‚  β”œβ”€β”€config.yaml
β”‚  β”œβ”€β”€train_log.log

β”‚  β”œβ”€β”€checkpoints    (all checkpoints)
β”‚  β”‚  β”œβ”€β”€epoch_1_step_6666.pth
β”‚  β”‚  β”œβ”€β”€epoch_1_step_8888.pth
β”‚  β”‚  β”œβ”€β”€......

β”‚  β”œβ”€β”€vis    (all visualization result dirs)
β”‚  β”‚  β”œβ”€β”€visualization_file_name
β”‚  β”‚  β”‚  β”œβ”€β”€xxxxxxx.jpg
β”‚  β”‚  β”‚  β”œβ”€β”€......
β”‚  β”‚  β”œβ”€β”€visualization_file_name2
β”‚  β”‚  β”‚  β”œβ”€β”€xxxxxxx.jpg
β”‚  β”‚  β”‚  β”œβ”€β”€......
β”‚  β”œβ”€β”€......

β”‚  β”œβ”€β”€metrics    (all metrics testing related files)
β”‚  β”‚  β”œβ”€β”€model_paths.txt  Optional(πŸ‘ˆ)(relative path of testing ckpts)
β”‚  β”‚  β”‚  β”œβ”€β”€output/your_job_name/checkpoints/epoch_1_step_6666.pth
β”‚  β”‚  β”‚  β”œβ”€β”€output/your_job_name/checkpoints/epoch_1_step_8888.pth
β”‚  β”‚  β”œβ”€β”€fid_img_paths.txt  Optional(πŸ‘ˆ)(name of testing img_dir in vis)
β”‚  β”‚  β”‚  β”œβ”€β”€visualization_file_name
β”‚  β”‚  β”‚  β”œβ”€β”€visualization_file_name2
β”‚  β”‚  β”œβ”€β”€cached_img_paths.txt  Optional(πŸ‘ˆ)
β”‚  β”‚  β”œβ”€β”€......
```