---
license: cc-by-nc-4.0
task_categories:
- text-to-video
- text-to-image
language:
- en
pretty_name: VideoGrain-dataset
source_datasets:
  - original
tags:
- video editing
- Multi-grained Video Editing
- text-to-video
- Pika
- video generation
- Video Generative Model Evaluation
- Text-to-Video Diffusion Model Development
- Text-to-Video Prompt Engineering
- Efficient Video Generation

---

# VideoGrain: Modulating Space-Time Attention for Multi-Grained Video Editing (ICLR 2025)

[GitHub](https://github.com/knightyxp/VideoGrain) (⭐ star our GitHub repo)

[Project Page](https://knightyxp.github.io/VideoGrain_project_page)

[ArXiv](https://arxiv.org/abs/2502.17258)

[YouTube Video](https://www.youtube.com/watch?v=XEM4Pex7F9E)

[Hugging Face Daily Papers Top 1](https://huggingface.co/papers/2502.17258)

If you find this dataset helpful, please feel free to leave a star ⭐️ and cite our paper.

<p align="center">
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/6486df66373f79a52913e017/ZQnogrOMFhy1mcTuxSQ62.mp4"></video>
</p>

# Summary
This is the dataset proposed in our paper [VideoGrain: Modulating Space-Time Attention for Multi-Grained Video Editing](https://arxiv.org/abs/2502.17258) (ICLR 2025).

VideoGrain is a zero-shot method for class-level, instance-level, and part-level video editing.
- **Multi-grained Video Editing**
  - class-level: editing objects within the same class (previous SOTA methods were limited to this level)
  - instance-level: editing each individual instance into a distinct object
  - part-level: adding new objects or modifying existing attributes at the part level
- **Training-Free**
  - requires no training or fine-tuning
- **One-Prompt Multi-region Control & a Deep Investigation of Cross-/Self-Attention**
  - modulating cross-attention for multi-region control (visualizations available)
  - modulating self-attention for feature decoupling (cluster visualizations available)

# Directory
```
data/
├── 2_cars
│   ├── 2_cars            # original video frames
│   └── layout_masks      # layout masks subfolders (e.g., bg, left, right)
├── 2_cats
│   ├── 2_cats           
│   └── layout_masks      
├── 2_monkeys
├── badminton
├── boxer-punching
├── car
├── cat_flower
├── man_text_message
├── run_two_man
├── soap-box
├── spin-ball
├── tennis
└── wolf

```
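Each clip folder pairs the source frames with per-region layout masks. As a quick way to inspect a clip, here is a minimal loading sketch; it assumes frames and masks are stored as ordinary image files (e.g., JPG/PNG), and the `load_clip` helper is purely illustrative:

```python
from pathlib import Path
from PIL import Image

def load_clip(root="data", clip="2_cars"):
    """Load original frames and per-region layout masks for one clip."""
    clip_dir = Path(root) / clip
    # Original frames live in a subfolder named after the clip itself.
    frames = [Image.open(p) for p in sorted((clip_dir / clip).glob("*"))]
    # Each layout_masks subfolder (e.g., bg, left, right) holds one mask per frame.
    masks = {
        region.name: [Image.open(p) for p in sorted(region.glob("*"))]
        for region in sorted((clip_dir / "layout_masks").iterdir())
        if region.is_dir()
    }
    return frames, masks

frames, masks = load_clip(clip="2_cars")
print(len(frames), "frames; regions:", list(masks))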

# Download 

### Automatic
First install the [datasets](https://huggingface.co/docs/datasets/v1.15.1/installation.html) library:
```
pip install datasets
```
Then the dataset can be downloaded automatically:
```python
from datasets import load_dataset

dataset = load_dataset("XiangpengYang/VideoGrain-dataset")
```
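
### Manual
Alternatively, the raw folder tree shown above can be fetched with `snapshot_download` from the `huggingface_hub` library; the `local_dir` target below is just an example:

```python
from huggingface_hub import snapshot_download

# Download the full repo tree (frames + layout masks) into ./data
snapshot_download(
    repo_id="XiangpengYang/VideoGrain-dataset",
    repo_type="dataset",
    local_dir="data",
)
```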

# License

This dataset is licensed under the [CC BY-NC 4.0 license](https://creativecommons.org/licenses/by-nc/4.0/deed.en).


# Citation
```
@article{yang2025videograin,
  title={VideoGrain: Modulating Space-Time Attention for Multi-grained Video Editing},
  author={Yang, Xiangpeng and Zhu, Linchao and Fan, Hehe and Yang, Yi},
  journal={arXiv preprint arXiv:2502.17258},
  year={2025}
}
```

# Contact

If you have any questions, feel free to contact Xiangpeng Yang (knightyxp@gmail.com).