
Researchers from multiple universities have introduced a new framework, Memento-Skills, that aims to make autonomous AI agents more adaptable without retraining the underlying large language models (LLMs).
The core problem they target is well known in industry: once an LLM is deployed, its parameters are essentially frozen. That means the system is largely limited to what it knew at training time, plus whatever can be squeezed into a context window. Updating or extending those capabilities usually requires costly fine-tuning or extensive manual engineering of new tools and skills.
Memento-Skills proposes a different route. Instead of changing the model itself, it gives agents an external structure for learning and refining skills over time, based on feedback from their environment.
According to the paper, Memento-Skills acts as an evolving external memory layer wrapped around existing language models. Rather than modifying model weights, the framework organizes and stores skills that agents can draw on and update as they operate.
These skills are not fixed prompts built once and forgotten. Instead, they form a library that can be revised and expanded as the agent encounters new situations and receives feedback. In effect, the agent’s capabilities can grow while the core LLM remains unchanged.
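To make the idea concrete, the pattern the paper describes can be sketched roughly as an external store of skills that the agent queries before acting and updates after receiving feedback. The sketch below is illustrative only: the class and method names (`SkillLibrary`, `retrieve`, `record_feedback`) and the keyword-overlap retrieval are assumptions for demonstration, not the paper's actual interface.

```python
# Hedged sketch of an external skill memory, as described conceptually in the
# article: the LLM's weights stay frozen; only this library evolves.
# All names and the scoring/pruning rules here are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Skill:
    name: str
    instructions: str  # text the agent injects into the LLM's context
    score: float = 0.0  # running usefulness estimate from environment feedback
    uses: int = 0


class SkillLibrary:
    """External memory of skills that grows and is refined after deployment."""

    def __init__(self):
        self._skills = {}

    def add(self, skill: Skill) -> None:
        self._skills[skill.name] = skill

    def retrieve(self, query: str, top_k: int = 1) -> list:
        # Toy relevance: keyword overlap plus past score.
        # A real system would likely use embedding similarity.
        def relevance(s: Skill):
            overlap = len(
                set(query.lower().split()) & set(s.instructions.lower().split())
            )
            return (overlap, s.score)

        ranked = sorted(self._skills.values(), key=relevance, reverse=True)
        return ranked[:top_k]

    def record_feedback(self, name: str, success: bool) -> None:
        # Feedback refines the skill library, not the model's parameters.
        s = self._skills[name]
        s.uses += 1
        s.score += 1.0 if success else -1.0
        if s.score < -3:  # prune skills that keep failing
            del self._skills[name]
```

An agent loop would then call `retrieve` to pick a skill for the current task, execute it via the frozen LLM, and call `record_feedback` with the outcome, so the library improves with use while the model itself never changes.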
Jun Wang, a co-author of the paper, told VentureBeat that Memento-Skills adds continual learning capabilities to current market offerings such as OpenClaw and Claude Code. The emphasis is on enabling agents to develop their own skills, rather than relying solely on predefined behaviours.
For enterprise teams running AI agents in production, that design choice speaks directly to operational pain points. The conventional alternatives are:
- Fine-tuning the model’s parameters, which demands large curated datasets, significant compute resources and lengthy iteration cycles.
- Manually building and maintaining skill libraries or toolchains, which quickly becomes a heavy engineering and maintenance burden.
By treating skills as an adaptable external memory, Memento-Skills aims to sidestep both the cost of retraining and the overhead of hand-crafted skill design.
The framework is positioned in response to a broader challenge: how to build self-evolving agents on top of “frozen” language models. Once an LLM is shipped into production, its training is effectively complete. The knowledge it encodes is locked in, and the system’s performance is constrained by that static snapshot of the world.
External memory scaffolding offers a way around that limitation. By giving an agent somewhere to store, organise and refine what it learns after deployment, developers can avoid retraining loops while still allowing behaviour to improve over time.
Existing approaches to this problem often rely on skills designed by humans to cover new or changing tasks. Some automatic methods do exist, but according to the researchers, many of them end up producing text-only guides that mainly act as prompt optimizations. Others focus on logging single-task trajectories without turning those experiences into reusable, general skills.
Memento-Skills, by contrast, is presented as a way to systematically manage an evolving set of skills that the agent itself can adapt. This positions it as a continual learning layer that can sit alongside tools already deployed in the market, rather than a replacement for the underlying LLM infrastructure.