CircleRadon committed
Commit 7e3d630 · verified · 1 parent: 7215ac2

Update README.md

Files changed (1):
  1. README.md (+4 -0)
README.md CHANGED
@@ -9,6 +9,8 @@ pipeline_tag: visual-question-answering
 tags:
 - multimodal large language model
 - large video-language model
+base_model:
+- DAMO-NLP-SG/VideoLLaMA3-2B-Image
 ---
 
 
@@ -27,11 +29,13 @@ VideoRefer Suite: Advancing Spatial-Temporal Object Understanding with Video LLM
 
 <div style="display: flex; justify-content: center; margin-top: 10px;">
 <a href="https://arxiv.org/pdf/2501.00599"><img src="https://img.shields.io/badge/Arxiv-2501.00599-ECA8A7" style="margin-right: 5px;"></a>
+<a href="https://huggingface.co/spaces/lixin4ever/VideoRefer-VideoLLaMA3"><img src='https://img.shields.io/badge/HuggingFace-Demo-96D03A' style="margin-right: 5px;"></a>
 <a href="https://github.com/DAMO-NLP-SG/VideoRefer"><img src='https://img.shields.io/badge/Github-VideoRefer-F7C97E' style="margin-right: 5px;"></a>
 <a href="https://github.com/DAMO-NLP-SG/VideoLLaMA3"><img src='https://img.shields.io/badge/Github-VideoLLaMA3-9DC3E6' style="margin-right: 5px;"></a>
 </div>
 
 ## 📰 News
+* **[2025.6.19]** 🔥We release the [demo](https://huggingface.co/spaces/lixin4ever/VideoRefer-VideoLLaMA3) of VideoRefer-VideoLLaMA3, hosted on HuggingFace. Feel free to try it!
 * **[2025.6.18]** 🔥We release a new version of VideoRefer ([VideoRefer-VideoLLaMA3-7B](https://huggingface.co/DAMO-NLP-SG/VideoRefer-VideoLLaMA3-7B) and [VideoRefer-VideoLLaMA3-2B](https://huggingface.co/DAMO-NLP-SG/VideoRefer-VideoLLaMA3-2B)), which are trained based on [VideoLLaMA3](https://github.com/DAMO-NLP-SG/VideoLLaMA3).
 * **[2025.4.22]** 🔥Our VideoRefer-Bench has been adopted in [Describe Anything Model](https://arxiv.org/pdf/2504.16072) (NVIDIA & UC Berkeley).
 * **[2025.2.27]** 🔥VideoRefer Suite has been accepted to CVPR2025!
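For reference, a sketch of the model-card YAML front matter as it reads after this commit, reconstructed only from the first hunk above; any fields outside the shown diff context are omitted here, so the actual front matter may contain more entries.

```yaml
---
# Metadata shown in the first hunk; additional fields may exist outside the diff context.
pipeline_tag: visual-question-answering
tags:
- multimodal large language model
- large video-language model
# Added by this commit: links the checkpoint to its base model on the Hub.
base_model:
- DAMO-NLP-SG/VideoLLaMA3-2B-Image
---
```

Declaring `base_model` lets the Hub display the model's lineage and list this checkpoint among the fine-tunes of DAMO-NLP-SG/VideoLLaMA3-2B-Image.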