CompeteSMoE -- Statistically Guaranteed Mixture of Experts Training via Competition · arXiv 2505.13380 · Published May 19, 2025
SemViQA: A Semantic Question Answering System for Vietnamese Information Fact-Checking · arXiv 2503.00955 · Published Mar 2, 2025
LIBMoE: A Library for comprehensive benchmarking Mixture of Experts in Large Language Models · arXiv 2411.00918 · Published Nov 1, 2024
CodeMMLU: A Multi-Task Benchmark for Assessing Code Understanding Capabilities of CodeLLMs · arXiv 2410.01999 · Published Oct 2, 2024
Make Me a BNN: A Simple Strategy for Estimating Bayesian Uncertainty from Pre-trained Models · arXiv 2312.15297 · Published Dec 23, 2023