VLMs (non-profit)

Activity Feed


merve posted an update about 19 hours ago
we're all sleeping on this OCR model rednote-hilab/dots.ocr 🔥

dots.ocr is a new 3B model with SOTA performance, support for 100 languages, and a license that allows commercial use! 🤯

a single end-to-end model that extracts text from images and converts tables, formulas, and more into Markdown 📝
try it here: MohamedRashad/Dots-OCR
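
If you'd rather poke at the demo from Python than the browser, here's a rough sketch using gradio_client; the Space's endpoint names and inputs aren't documented in this post, so the snippet only inspects the API and leaves the actual call as a hypothetical to fill in from view_api().

```python
# Hedged sketch: query the linked demo Space (MohamedRashad/Dots-OCR) from Python.
# The Space's endpoint names/parameters are assumptions, so inspect the API first.
from gradio_client import Client, handle_file

client = Client("MohamedRashad/Dots-OCR")
client.view_api()  # prints the endpoints the Space exposes and their signatures

# After checking view_api(), a call would look roughly like this (hypothetical endpoint):
# result = client.predict(handle_file("page.png"), api_name="/predict")
# print(result)
```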
merve posted an update 1 day ago
massive releases and tons of FLUX.1 Krea LoRAs this past week!
here are some of the picks; find more models in the collection 🫡 merve/releases-august-2-6890c14248203522b7d0267f

LLMs 💬
> Tencent dropped tencent/Hunyuan-7B-Instruct
> Qwen released Qwen/Qwen3-Coder-30B-A3B-Instruct, a 30B MoE with 3B active params for coding (OS)

vision/multimodal
> RedNote released rednote-hilab/dots.ocr - 3B OCR model (OS)
> Cohere released CohereLabs/command-a-vision-07-2025 - 112B (dense!) VLM for 6 languages
> StepFun-AI shipped stepfun-ai/step3 - 321B MoE VLM (OS)
> Skywork shipped Skywork/Skywork-UniPic-1.5B - new any-to-any model (image+text → image+text) (OS)
merve posted an update 7 days ago
the past week in open AI was insane 🔥 here are some of the picks; find more here merve/releases-july-25-688768ca47fe3693407e02d1

💬 LLMs & VLMs
> Qwen/Qwen3-235B-A22B-Thinking-2507 had a new update (OS)
> Qwen/Qwen3-Coder-480B-A35B-Instruct is out with 480B total / 35B active params 🤯 (OS)
> AllenAI dropped an update to allenai/olmOCR-7B-0725 📝
> InternLM released internlm/Intern-S1 - 235B Qwen3 MoE + 6B InternViT encoder (OS)
> OmniSVG/OmniSVG is a new SVG generation VLM (OS)

🖼️ image/video/3D generation
> WanAI released Wan2.2 series - both T2V and I2V 14B models for high-quality video generation (OS) multimodalart/wan-22-688767e313337b434ed55112
> Tencent dropped tencent/HunyuanWorld-1 - image-to-3D scene generation model
merve posted an update 9 days ago
🀯 241B VLM with apache-2.0 license internlm/Intern-S1

internlm released Intern-S1: multimodal reasoning model based on 235B MoE Qwen3 and 6B InternViT 😍

benchmarks look great (πŸ‘‘ best model βœ… best open model)
andito posted an update 14 days ago
Many VLMs claim to process hours of video. But can they follow the story? 🤔
Today, we introduce TimeScope: the benchmark that separates true temporal understanding from marketing hype. Let's see how much VLMs really understand! ⏳

We test three skills that matter for real-world use:
🔎 Localized Retrieval: Find a specific action.
🧩 Information Synthesis: Piece together scattered clues.
🏃 Fine-Grained Perception: Analyze detailed motion (e.g., count how many times a person swings an axe).

The results are in, and they're revealing. Only Gemini 2.5 Pro handles 1-hour-long videos.
Performance drops sharply with duration, proving that long video understanding is still challenging. We've found the breaking points; now the community can start fixing them. 📈

Want to learn more? TimeScope is 100% open-source. Benchmark your model and help us build the next generation of video AI.

📖 Blog:
https://huggingface.co/blog/timescope-video-lmm-benchmark
👩‍💻 Leaderboard & Demo: Apollo-LMMs/TimeScope
📊 Dataset: Apollo-LMMs/TimeScope (loading sketch below)
⚙️ Eval Code: https://github.com/EvolvingLMMs-Lab/lmms-eval
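
If you just want a first look at the benchmark data before wiring up a full evaluation, here is a minimal sketch with the datasets library; it assumes the repo loads via load_dataset, and config and split names should be checked on the dataset card.

```python
# Peek at the TimeScope benchmark data with the datasets library.
# Assumption: we simply take the first config the repo advertises; check the
# dataset card for the config/split you actually want (downloads may be large).
from datasets import get_dataset_config_names, load_dataset

configs = get_dataset_config_names("Apollo-LMMs/TimeScope")
print(configs)                      # available configs (e.g., per task or duration)

ds = load_dataset("Apollo-LMMs/TimeScope", configs[0])
print(ds)                           # splits and features
first_split = next(iter(ds))
print(ds[first_split][0])           # one example from the first split
```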
merve posted an update 14 days ago
so many open LLMs and image LoRAs dropped this past week; here are some picks for you 🫡 merve/releases-july-18-687e3fbd2ab9b39c51f9238b

LLMs
> ByteDance released a bunch of translation models called Seed-X-RM (7B) ByteDance-Seed/Seed-X-RM-7B
> NVIDIA released reasoning models, of which the 32B surpasses the giant Qwen3-235B, under a CC-BY-4.0 license 👏 nvidia/openreasoning-nemotron-687730dae0170059860f1f01
> LG released a new EXAONE model (32B) LGAI-EXAONE/EXAONE-4.0-32B

VLMs/any-to-any
> vidore/colqwen-omni-v0.1 is a new any-to-any retriever (MIT)
> HiDream-ai/HiDream-E1-1 is an image+text in, image+text out model (MIT)

LoRAs
> There's a bunch of LoRAs based on FLUX.1 Kontext, gotta check out the collection 🤠
merve posted an update 21 days ago
Fine-tune Gemma3n on videos with audio inside, on a Colab A100 🔥
Just dropped the notebook where you can learn how to fine-tune Gemma3n on images+audio+text at the same time!

keep in mind, it's made for educational purposes 🫡 we do LoRA, audio resampling & video downsampling to be able to train in <40 GB VRAM (see the sketch below)

stretch modalities and unfreeze layers as you wish! 🙏🏻 merve/smol-vision
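
For reference, a minimal LoRA setup sketch in the spirit of the notebook; the Auto classes, target modules, and hyperparameters here are illustrative assumptions, and the linked smol-vision notebook remains the actual recipe.

```python
# Minimal LoRA sketch for Gemma3n fine-tuning (assumptions: Auto class, target
# modules and ranks are illustrative -- see the smol-vision notebook for the real recipe).
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText
from peft import LoraConfig, get_peft_model

model_id = "google/gemma-3n-E4B-it"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# LoRA freezes the base weights and trains small adapter matrices, which is what
# keeps the multimodal fine-tune (plus resampled audio / downsampled video) under ~40 GB VRAM.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter parameters are trainable
```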
merve posted an update 23 days ago
the past week had huuuge releases 💗
here are our picks 🔥 find more models, datasets, and demos here merve/releases-july-11-68750452c358c98b0fa663f7

> moonshotai/Kimi-K2-Instruct is the new SOTA LLM with 1T total / 32B active parameters 🤯

> HuggingFaceTB/SmolLM3-3B is the new best LM for its size; it offers a thinking mode 💭 as well as the dataset HuggingFaceTB/smoltalk2

> Alibaba-NLP/WebSailor-3B is the new agentic LLM for complex browsing

> Google DeepMind released medical vision LMs with an agentic doctor-patient app google/medgemma-release-680aade845f90bec6a3f60c4

> fal released a LoRA to improve details on face images fal/Realism-Detailer-Kontext-Dev-LoRA
merve posted an update 28 days ago
GitHub has refused to render notebooks for a long time now 💔

so smol-vision now lives in a Hugging Face model repository 🤗 merve/smol-vision
merve posted an update 29 days ago
ByteDance released Tar 1.5B and 7B: image-text in, image-text out models, fully open-source 👏 ByteDance-Seed/tar-6864cf0d9fe59a3b91cc4260

They have an image tokenizer unified with text, and they de-tokenize using either of two models (an LLM or a diffusion model)
The model itself is a full LLM (Qwen2); the tokenizer handles the image tokens 🤯
merve posted an update 30 days ago
Huge drops in open AI this past week!
Find more models, datasets, demos here merve/releases-july-4-686bcc54ed7c45c341fbf654
Some of our picks 🫡
⏯️ BAAI/MTVCraft is a new Veo3-like text-to-video model, demo is here BAAI/MTVCraft
🧑🏻‍💻 apple/diffucoder-6868139f56672ae046fe04e8 is a new family of diffusion LLMs (7B base and instruct) for coding
🗣️ kyutai/tts-1.6b-en_fr is a new small TTS model for English and French
👀 aharley/alltracker is a new pixel tracking model by Stanford, demo is here aharley/alltracker
📖 racineai/OGC_MEGA_MultiDomain_DocRetrieval is a new large visual document retrieval dataset
andito posted an update about 1 month ago
🧠👁️ Can AI visualize solutions?

Humans often solve visual problems by sketching ideas in their minds. What if Vision-Language Models (VLMs) could do something similar, not by generating full images, but by using internal "mental sketches"?

That's the idea behind Mirage, a new framework that empowers VLMs to reason using latent visual tokens. Instead of just thinking in words, Mirage mixes in abstract visual representations that help the model solve complex tasks.

These aren't photorealistic images. They're compact, internal representations optimized purely to support reasoning.

🔧 Mirage is trained in two phases:

1) Grounding: It learns to produce latent tokens anchored in real images.
2) Refinement: The model drops the images and learns to generate visual tokens on its own.

📈 And yes, it works!
On challenging benchmarks like Visual Spatial Planning, Jigsaw puzzles, and Spatial Attention Tasks, Mirage clearly outperforms GPT-4o and other strong baselines.
Smart sketches > empty words.

By mimicking the way humans visualize solutions, Mirage gives AI a new kind of imagination, one that's faster, more efficient, and more human-like.
Kudos to the teams at UMass Amherst and MIT behind this exciting work.
Check the paper: Machine Mental Imagery: Empower Multimodal Reasoning with Latent Visual Tokens (2506.17218)
merve posted an update about 1 month ago
SOOOO MANY MODEL RELEASES 😍
Here are some picks from the past week 🤗

> ByteDance/XVerse is a new identity-preserving image generation model 🖼️
> google/gemma-3n-E4B-it, an any-to-text model supported in transformers 🤗
> nvidia/llama-nemoretriever-colembed-3b-v1, two new state-of-the-art visual document retrievers 📑
> New version of Dia TTS model is up nari-labs/Dia-1.6B-0626
> Black Forest Labs releases Kontext benchmark black-forest-labs/kontext-bench

Find more here merve/releases-june-27-6864e8eb17f7e3a8b444083c
merve posted an update about 1 month ago
visual reasoning is now in transformers 🔥
https://huggingface.co/THUDM/GLM-4.1V-9B-Thinking was just released and merged into transformers; we gave it a vibe test run 🤠

it's very good, and it comes with a 64k context length and an MIT license 😍
it supports 4k image tokens and any aspect ratio as well!
Notebook: http://colab.research.google.com/drive/1atODIiV57hOZLv16Bjzwd6fwx0yoTorj?usp=sharing
Demo: https://huggingface.co/spaces/THUDM/GLM-4.1V-9B-Thinking-Demo
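
To reproduce the vibe test outside the notebook, here is a rough transformers sketch; the Auto classes and chat-template call follow the usual transformers VLM pattern, but the image URL and prompt are placeholders, and the notebook above is the reference.

```python
# Rough vibe-test sketch for THUDM/GLM-4.1V-9B-Thinking via transformers.
# Assumption: the checkpoint resolves through the image-text-to-text Auto classes.
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "THUDM/GLM-4.1V-9B-Thinking"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/chart.png"},  # placeholder image
        {"type": "text", "text": "What does this chart show? Reason step by step."},
    ],
}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)

generated = model.generate(**inputs, max_new_tokens=512)
new_tokens = generated[0][inputs["input_ids"].shape[-1]:]  # drop the prompt tokens
print(processor.decode(new_tokens, skip_special_tokens=True))
```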
merve posted an update about 1 month ago
Dataset Viewer for PDFs just landed on Hugging Face 📖🤗 you can now preview all the PDFs more easily than before!

on top of this, there's the PdfFolder format to load PDF datasets quicker 💨
> to use it, your dataset should follow a directory format like folder/train/doc1.pdf, folder/train/doc2.pdf
> if you want to include bounding boxes, labels, etc., you can keep them in a metadata.csv file in the same folder 🤝

read the document dataset docs: https://huggingface.co/docs/datasets/main/en/document_dataset
check all the document datasets here: https://huggingface.co/datasets?modality=modality:document&sort=trending 📖
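
A small sketch of what loading such a layout could look like; the "pdffolder" builder name mirrors imagefolder/audiofolder and should be double-checked against the docs linked above.

```python
# Sketch: load a PDF dataset organized as folder/train/doc1.pdf, folder/train/doc2.pdf, ...
# Assumptions: the builder is selected with the "pdffolder" name and an extra PDF
# dependency may be required -- confirm both against the document dataset docs.
from datasets import load_dataset

ds = load_dataset("pdffolder", data_dir="folder")
print(ds)              # splits inferred from the directory layout (e.g. "train")
print(ds["train"][0])  # one row: the PDF plus any columns coming from metadata.csv
```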
merve posted an update about 1 month ago
we've merged the LightGlue keypoint matcher into Hugging Face transformers! it allows commercial use when paired with an open-source keypoint detector 🙏🏻

it works very well, try it yourself: ETH-CVG/LightGlue

here's an in-the-wild test with two images of the same place ⬇️
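
For reference, here is a rough sketch of matching two images through the transformers integration; the checkpoint id and post-processing helper are assumptions carried over from the SuperGlue-style API, so check the LightGlue model docs for the exact names.

```python
# Rough sketch: keypoint matching with the LightGlue integration in transformers.
# Assumptions: the "ETH-CVG/lightglue_superpoint" checkpoint id and the
# post_process_keypoint_matching helper mirror the SuperGlue API -- verify in the docs.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

checkpoint = "ETH-CVG/lightglue_superpoint"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

# Two photos of the same place (placeholder file names).
image1 = Image.open("place_view_1.jpg")
image2 = Image.open("place_view_2.jpg")

inputs = processor([image1, image2], return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw outputs into matched keypoint pairs for the image pair.
image_sizes = [[(img.height, img.width) for img in (image1, image2)]]
matches = processor.post_process_keypoint_matching(outputs, image_sizes, threshold=0.2)
print(matches[0].keys())  # matched keypoints and matching scores
```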