---
license: cc-by-nc-4.0
task_categories:
- text-to-video
- text-to-image
language:
- en
pretty_name: VideoGrain-dataset
source_datasets:
- original
tags:
- video editing
- Multi-grained Video Editing
- text-to-video
- Pika
- video generation
- Video Generative Model Evaluation
- Text-to-Video Diffusion Model Development
- Text-to-Video Prompt Engineering
- Efficient Video Generation
---
# Summary
This is the dataset proposed in our paper [**VideoGrain: Modulating Space-Time Attention for Multi-Grained Video Editing**](https://arxiv.org/abs/2502.17258) (ICLR 2025).
VideoGrain is a zero-shot method for class-level, instance-level, and part-level video editing.
# Download
### Automatic
First, install the [datasets](https://huggingface.co/docs/datasets/v1.15.1/installation.html) library:
```
pip install datasets
```
Then the dataset can be downloaded automatically with:
```python
from datasets import load_dataset

dataset = load_dataset("XiangpengYang/VideoGrain-dataset")
```
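If you prefer the raw files over the `datasets` loader, the repository can also be fetched with the `huggingface_hub` library. This is a minimal sketch; the `local_dir` path is an arbitrary example and can be changed to suit your setup:

```python
from huggingface_hub import snapshot_download

# Download every file in the dataset repo to a local directory.
# repo_type="dataset" is required because this is a dataset repo, not a model repo.
snapshot_download(
    repo_id="XiangpengYang/VideoGrain-dataset",
    repo_type="dataset",
    local_dir="./VideoGrain-dataset",  # example target path; choose your own
)
```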
# License
This dataset is licensed under the [CC BY-NC 4.0 license](https://creativecommons.org/licenses/by-nc/4.0/deed.en).
# Citation
```
@article{yang2025videograin,
title={VideoGrain: Modulating Space-Time Attention for Multi-grained Video Editing},
author={Yang, Xiangpeng and Zhu, Linchao and Fan, Hehe and Yang, Yi},
journal={arXiv preprint arXiv:2502.17258},
year={2025}
}
```
# Contact
If you have any questions, feel free to contact Xiangpeng Yang (knightyxp@gmail.com).