arxiv:2506.16500

SparseLoRA: Accelerating LLM Fine-Tuning with Contextual Sparsity

Published on Jun 19 · Submitted by Skhaki on Jul 1

Abstract

AI-generated summary: SparseLoRA reduces computational cost and speeds up fine-tuning of LLMs by dynamically selecting a sparse subset of weights for loss and gradient computation.

Fine-tuning LLMs is both computationally and memory-intensive. While parameter-efficient fine-tuning methods such as QLoRA and DoRA reduce the number of trainable parameters and lower memory usage, they do not decrease computational cost; in some cases, they may even slow down fine-tuning. In this paper, we introduce SparseLoRA, a method that accelerates LLM fine-tuning through contextual sparsity. We propose a lightweight, training-free SVD sparsity estimator that dynamically selects a sparse subset of weights for loss and gradient computation. We also systematically analyze and address sensitivity across layers, tokens, and training steps. Our experimental results show that SparseLoRA reduces computational cost by up to 2.2× and delivers a measured speedup of up to 1.6× while maintaining accuracy across various downstream tasks, including commonsense and arithmetic reasoning, code generation, and instruction following.
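To make the core idea concrete, below is a minimal PyTorch sketch of an SVD-based contextual-sparsity estimator for a single linear layer. The rank, keep ratio, scoring rule, and all function names here are illustrative assumptions, not the paper's exact implementation: a low-rank factorization of the weight is precomputed once (hence training-free), used at each step to cheaply predict which output channels matter for the current input, and only those channels are then computed exactly.

```python
import torch

def build_svd_estimator(weight: torch.Tensor, rank: int = 8):
    """Precompute a rank-`rank` factorization W ~= U_r @ V_r once, offline.
    This is the training-free part: no gradients or calibration pass needed."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]   # (out_features, rank), singular values folded in
    V_r = Vh[:rank, :]             # (rank, in_features)
    return U_r, V_r

@torch.no_grad()
def select_channels(x: torch.Tensor, U_r: torch.Tensor, V_r: torch.Tensor,
                    keep_ratio: float = 0.5) -> torch.Tensor:
    """Score output channels with the cheap low-rank proxy of x @ W.T and
    keep the top `keep_ratio` fraction (hypothetical scoring rule)."""
    proxy = (x @ V_r.T) @ U_r.T                       # (..., out_features)
    flat = proxy.abs().reshape(-1, proxy.shape[-1])   # pool over batch and tokens
    scores = flat.mean(dim=0)
    k = max(1, int(keep_ratio * scores.numel()))
    return scores.topk(k).indices

def sparse_linear(x: torch.Tensor, weight: torch.Tensor,
                  idx: torch.Tensor) -> torch.Tensor:
    """Compute only the selected output channels exactly; the rest stay zero,
    so the forward pass and the matching backward pass touch fewer FLOPs."""
    out = x.new_zeros(*x.shape[:-1], weight.shape[0])
    out[..., idx] = x @ weight[idx].T
    return out

# Illustrative usage on random data:
W = torch.randn(1024, 1024)
U_r, V_r = build_svd_estimator(W, rank=8)
x = torch.randn(2, 16, 1024)                          # (batch, tokens, features)
idx = select_channels(x, U_r, V_r, keep_ratio=0.25)   # chosen per input, i.e. contextually
y = sparse_linear(x, W, idx)
```

The fixed `keep_ratio` above is a deliberate simplification: per the abstract, the paper additionally analyzes and addresses sensitivity across layers, tokens, and training steps rather than applying one uniform sparsity level.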

Community

Paper author and submitter · edited Jul 1

This paper introduces SparseLoRA, a method that uses contextual sparsity to accelerate LLM fine-tuning, cutting compute by up to 2.2× and runtime by 1.6× while maintaining model accuracy on reasoning, coding, and instruction-following tasks.

Learn more at z-lab.ai/projects/sparselora


Models citing this paper 1

Datasets citing this paper 0

No datasets link this paper.

Spaces citing this paper 0

No Spaces link this paper.

Collections including this paper 1