Abstract
MERA Multi is an open multimodal evaluation framework for Russian-speaking models, addressing the lack of such benchmarks with 18 newly constructed tasks and a methodology for preventing benchmark leakage.
Multimodal large language models (MLLMs) are currently at the center of research attention, showing rapid progress in scale and capabilities, yet their intelligence, limitations, and risks remain insufficiently understood. To address these issues, particularly in the context of the Russian language, where no multimodal benchmarks currently exist, we introduce MERA Multi, an open multimodal evaluation framework for Russian-speaking models. The benchmark is instruction-based and covers the text, image, audio, and video modalities, comprising 18 newly constructed evaluation tasks for both general-purpose models and modality-specific architectures (image-to-text, video-to-text, and audio-to-text). Our contributions include: (i) a universal taxonomy of multimodal abilities; (ii) 18 datasets created entirely from scratch with attention to Russian cultural and linguistic specificity, unified prompts, and metrics; (iii) baseline results for both closed-source and open-source models; (iv) a methodology for preventing benchmark leakage, including watermarking and licenses for private sets. While our current focus is on Russian, the proposed benchmark provides a replicable methodology for constructing multimodal benchmarks in typologically diverse languages, particularly within the Slavic language family.
Community
This work introduces MERA Multi, the first large-scale multimodal benchmark for Russian, encompassing 18 newly developed tasks across text, image, audio, and video, with a unified skill taxonomy, robust leakage protection (utilizing watermarking and membership inference), and a public leaderboard and codebase for evaluating both open and closed MLLMs.
Good paper!
What is the most effective way to ensure that MLLMs for Russian achieve robust performance across image, audio, and video modalities, given the current performance gaps and cultural-linguistic specificity required?
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Evaluating Arabic Large Language Models: A Survey of Benchmarks, Methods, and Gaps (2025)
- ChineseVideoBench: Benchmarking Multi-modal Large Models for Chinese Video Question Answering (2025)
- VCB Bench: An Evaluation Benchmark for Audio-Grounded Large Language Model Conversational Agents (2025)
- ThaiOCRBench: A Task-Diverse Benchmark for Vision-Language Understanding in Thai (2025)
- PISA-Bench: The PISA Index as a Multilingual and Multimodal Metric for the Evaluation of Vision-Language Models (2025)
- HinTel-AlignBench: A Framework and Benchmark for Hindi-Telugu with English-Aligned Samples (2025)
- InteractiveOmni: A Unified Omni-modal Model for Audio-Visual Multi-turn Dialogue (2025)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space.
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend
