Instella (Collection, 13 items): a series of 3-billion-parameter language models developed by AMD, trained from scratch on 128 Instinct MI300X GPUs.
DL-QAT: Weight-Decomposed Low-Rank Quantization-Aware Training for Large Language Models • arXiv:2504.09223 • Published Apr 12, 2025
TIPS: Topologically Important Path Sampling for Anytime Neural Networks • arXiv:2305.08021 • Published May 13, 2023
Machine Unlearning for Image-to-Image Generative Models • arXiv:2402.00351 • Published Feb 1, 2024
A2Q: Accumulator-Aware Quantization with Guaranteed Overflow Avoidance • arXiv:2308.13504 • Published Aug 25, 2023