FLEX: Continuous Agent Evolution via Forward Learning from Experience
Abstract
FLEX, a gradient-free learning paradigm, enables Large Language Model agents to continuously evolve through accumulated experience, improving performance on tasks such as mathematical reasoning, chemical retrosynthesis, and protein fitness prediction.
Autonomous agents driven by Large Language Models (LLMs) have revolutionized reasoning and problem-solving but remain static after training, unable to grow with experience as intelligent beings do during deployment. We introduce Forward Learning with EXperience (FLEX), a gradient-free learning paradigm that enables LLM agents to continuously evolve through accumulated experience. Specifically, FLEX cultivates scalable and inheritable evolution by constructing a structured experience library through continual reflection on successes and failures during interaction with the environment. FLEX delivers substantial improvements on mathematical reasoning, chemical retrosynthesis, and protein fitness prediction (up to 23% on AIME25, 10% on USPTO50k, and 14% on ProteinGym). We further identify a clear scaling law of experiential growth and the phenomenon of experience inheritance across agents, marking a step toward scalable and inheritable continuous agent evolution. Project Page: https://flex-gensi-thuair.github.io.
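The abstract describes the mechanism only at a high level: a frozen LLM interacts with an environment, reflects on its successes and failures, and distills those reflections into a structured experience library that conditions future forward passes, with no gradient updates. The sketch below is a minimal, hypothetical illustration of such a forward-learning loop; all names (`ExperienceLibrary`, `solve`, `reflect`, the stubbed `llm` call) and the keyword-overlap retrieval are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a gradient-free experience loop in the spirit of the
# abstract: act, reflect on the outcome, store a distilled lesson, and retrieve
# past lessons into future prompts. No model weights are ever updated.

from dataclasses import dataclass, field


def llm(prompt: str) -> str:
    """Stand-in for a forward pass of a frozen LLM (replace with a real API call)."""
    return f"[model output for a prompt of {len(prompt)} chars]"


@dataclass
class Experience:
    task: str
    outcome: str  # "success" or "failure"
    lesson: str   # distilled, reusable insight produced by reflection


@dataclass
class ExperienceLibrary:
    entries: list[Experience] = field(default_factory=list)

    def add(self, exp: Experience) -> None:
        self.entries.append(exp)

    def retrieve(self, task: str, k: int = 3) -> list[Experience]:
        # Naive keyword-overlap retrieval; a real system would likely use embeddings.
        scored = sorted(
            self.entries,
            key=lambda e: len(set(task.lower().split()) & set(e.task.lower().split())),
            reverse=True,
        )
        return scored[:k]


def solve(task: str, library: ExperienceLibrary) -> str:
    """Answer a task, conditioning the frozen LLM on retrieved past lessons."""
    lessons = "\n".join(f"- {e.lesson}" for e in library.retrieve(task))
    prompt = f"Relevant lessons from past attempts:\n{lessons}\n\nTask: {task}\nAnswer:"
    return llm(prompt)


def reflect(task: str, answer: str, correct: bool) -> Experience:
    """Turn one interaction into a structured experience via a reflection call."""
    lesson = llm(
        f"Task: {task}\nAnswer: {answer}\nCorrect: {correct}\n"
        "State one reusable lesson for similar tasks:"
    )
    return Experience(task=task, outcome="success" if correct else "failure", lesson=lesson)


if __name__ == "__main__":
    library = ExperienceLibrary()
    tasks = [("Compute 17 * 23", "391"), ("Compute 19 * 21", "399")]
    for task, reference in tasks:
        answer = solve(task, library)
        library.add(reflect(task, answer, correct=(answer.strip() == reference)))
    print(f"Library now holds {len(library.entries)} experiences.")
```

The point of the sketch is the division of labor: all learning happens in the growing library and the retrieval step, while the LLM itself is only ever run forward.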
Community
Welcome to FLEX, a novel learning paradigm that relies entirely on the forward-inference ability of LLMs, taking a step toward Continuous Agent Evolution!
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- EvolveR: Self-Evolving LLM Agents through an Experience-Driven Lifecycle (2025)
- Learning on the Job: An Experience-Driven Self-Evolving Agent for Long-Horizon Tasks (2025)
- Training-Free Group Relative Policy Optimization (2025)
- Scaling Agent Learning via Experience Synthesis (2025)
- ReasoningBank: Scaling Agent Self-Evolving with Reasoning Memory (2025)
- Continual Learning, Not Training: Online Adaptation For Agents (2025)
- Alita-G: Self-Evolving Generative Agent for Agent Generation (2025)