<!--Copyright 2024 The HuggingFace Team. All rights reserved.



Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0



Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on

an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the

specific language governing permissions and limitations under the License.



โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be

rendered properly in your Markdown viewer.



-->

# Agents - Guided tour



[[open-in-colab]]



In this guided tour, you will learn how to build an agent, how to run it, and how to customize it to make it work better for your use case.



### Building your agent



To initialize a minimal agent, you need at least these two arguments:



- `model`, a text-generation model to power your agent. The agent is not a simple LLM: it is a system that uses an LLM as its engine. You can use any of these options:

    - [`TransformersModel`] takes a pre-initialized `transformers` pipeline to run inference on your local machine using `transformers`.

    - [`HfApiModel`] leverages a `huggingface_hub.InferenceClient` under the hood and supports all Inference Providers on the Hub.

    - [`LiteLLMModel`] similarly lets you call 100+ different models and providers through [LiteLLM](https://docs.litellm.ai/)!

    - [`AzureOpenAIServerModel`] allows you to use OpenAI models deployed in [Azure](https://azure.microsoft.com/en-us/products/ai-services/openai-service).

    - [`MLXModel`] creates a [mlx-lm](https://pypi.org/project/mlx-lm/) pipeline to run inference on your local machine.



- `tools`, a list of `Tools` that the agent can use to solve the task. It can be an empty list. You can also add the default toolbox on top of your `tools` list by defining the optional argument `add_base_tools=True`.



Once you have these two arguments, `tools` and `model`, you can create an agent and run it. You can use any LLM you'd like, either through [Inference Providers](https://huggingface.co/blog/inference-providers), [transformers](https://github.com/huggingface/transformers/), [ollama](https://ollama.com/), [LiteLLM](https://www.litellm.ai/), [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service), or [mlx-lm](https://pypi.org/project/mlx-lm/).



<hfoptions id="Pick a LLM">

<hfoption id="HF Inference API">



The HF Inference API is free to use without a token, but it is then rate-limited.



To access gated models or raise your rate limits with a PRO account, set the environment variable `HF_TOKEN` or pass the `token` argument upon initialization of `HfApiModel`. You can get your token from your [settings page](https://huggingface.co/settings/tokens).



```python
from smolagents import CodeAgent, HfApiModel

model_id = "meta-llama/Llama-3.3-70B-Instruct"

model = HfApiModel(model_id=model_id, token="<YOUR_HUGGINGFACEHUB_API_TOKEN>") # You can choose to not pass any model_id to HfApiModel to use a default free model
# you can also specify a particular provider e.g. provider="together" or provider="sambanova"
agent = CodeAgent(tools=[], model=model, add_base_tools=True)

agent.run(
    "Could you give me the 118th number in the Fibonacci sequence?",
)
```

</hfoption>

<hfoption id="Local Transformers Model">



```python
# !pip install smolagents[transformers]
from smolagents import CodeAgent, TransformersModel

model_id = "meta-llama/Llama-3.2-3B-Instruct"

model = TransformersModel(model_id=model_id)
agent = CodeAgent(tools=[], model=model, add_base_tools=True)

agent.run(
    "Could you give me the 118th number in the Fibonacci sequence?",
)
```

</hfoption>

<hfoption id="OpenAI or Anthropic API">



To use `LiteLLMModel`, set the environment variable `ANTHROPIC_API_KEY` or `OPENAI_API_KEY`, or pass the `api_key` argument upon initialization.



```python
# !pip install smolagents[litellm]
from smolagents import CodeAgent, LiteLLMModel

model = LiteLLMModel(model_id="anthropic/claude-3-5-sonnet-latest", api_key="YOUR_ANTHROPIC_API_KEY") # Could use 'gpt-4o'
agent = CodeAgent(tools=[], model=model, add_base_tools=True)

agent.run(
    "Could you give me the 118th number in the Fibonacci sequence?",
)
```

</hfoption>

<hfoption id="Ollama">



```python
# !pip install smolagents[litellm]
from smolagents import CodeAgent, LiteLLMModel

model = LiteLLMModel(
    model_id="ollama_chat/llama3.2", # This model is a bit weak for agentic behaviours though
    api_base="http://localhost:11434", # replace with 127.0.0.1:11434 or a remote OpenAI-compatible server if necessary
    api_key="YOUR_API_KEY", # replace with API key if necessary
    num_ctx=8192, # ollama default is 2048, which will fail horribly. 8192 works for easy tasks; more is better. Check https://huggingface.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator to calculate how much VRAM this will need for the selected model.
)

agent = CodeAgent(tools=[], model=model, add_base_tools=True)

agent.run(
    "Could you give me the 118th number in the Fibonacci sequence?",
)
```

</hfoption>

<hfoption id="Azure OpenAI">



To connect to Azure OpenAI, you can either use `AzureOpenAIServerModel` directly, or use `LiteLLMModel` and configure it accordingly.



To initialize an instance of `AzureOpenAIServerModel`, you need to pass your model deployment name and then either pass the `azure_endpoint`, `api_key`, and `api_version` arguments, or set the environment variables `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_API_KEY`, and `OPENAI_API_VERSION`.



```python
# !pip install smolagents[openai]
from smolagents import CodeAgent, AzureOpenAIServerModel

model = AzureOpenAIServerModel(model_id="gpt-4o-mini")
agent = CodeAgent(tools=[], model=model, add_base_tools=True)

agent.run(
    "Could you give me the 118th number in the Fibonacci sequence?",
)
```



Similarly, you can configure `LiteLLMModel` to connect to Azure OpenAI as follows:



- pass your model deployment name as `model_id`, and make sure to prefix it with `azure/`

- make sure to set the environment variable `AZURE_API_VERSION`

- either pass the `api_base` and `api_key` arguments, or set the environment variables `AZURE_API_KEY` and `AZURE_API_BASE`



```python
import os
from smolagents import CodeAgent, LiteLLMModel

AZURE_OPENAI_CHAT_DEPLOYMENT_NAME = "gpt-35-turbo-16k-deployment" # example of deployment name

os.environ["AZURE_API_KEY"] = "" # api_key
os.environ["AZURE_API_BASE"] = "" # "https://example-endpoint.openai.azure.com"
os.environ["AZURE_API_VERSION"] = "" # "2024-10-01-preview"

model = LiteLLMModel(model_id="azure/" + AZURE_OPENAI_CHAT_DEPLOYMENT_NAME)
agent = CodeAgent(tools=[], model=model, add_base_tools=True)

agent.run(
    "Could you give me the 118th number in the Fibonacci sequence?",
)
```



</hfoption>

<hfoption id="mlx-lm">



```python
# !pip install smolagents[mlx-lm]
from smolagents import CodeAgent, MLXModel

mlx_model = MLXModel("mlx-community/Qwen2.5-Coder-32B-Instruct-4bit")
agent = CodeAgent(model=mlx_model, tools=[], add_base_tools=True)

agent.run("Could you give me the 118th number in the Fibonacci sequence?")
```



</hfoption>

</hfoptions>



#### CodeAgent and ToolCallingAgent



The [`CodeAgent`] is our default agent. It writes and executes Python code snippets at each step.



By default, the execution is done in your local environment.
This should be safe because the only functions that can be called are the tools you provided (especially if they are only tools from Hugging Face) and a set of predefined safe functions like `print` or functions from the `math` module, so you're already limited in what can be executed.



The Python interpreter also doesn't allow imports by default outside of a safe list, so all the most obvious attacks shouldn't be an issue.

You can authorize additional imports by passing the authorized modules as a list of strings in argument `additional_authorized_imports` upon initialization of your [`CodeAgent`]:



```py
model = HfApiModel()
agent = CodeAgent(tools=[], model=model, additional_authorized_imports=['requests', 'bs4'])
agent.run("Could you get me the title of the page at url 'https://huggingface.co/blog'?")
```



> [!WARNING]

> The LLM can generate arbitrary code that will then be executed: do not add any unsafe imports!



Execution will stop on any code trying to perform an illegal operation, or if there is a regular Python error in the code generated by the agent.



You can also use [E2B code executor](https://e2b.dev/docs#what-is-e2-b) or Docker instead of a local Python interpreter. For E2B, first [set the `E2B_API_KEY` environment variable](https://e2b.dev/dashboard?tab=keys) and then pass `executor_type="e2b"` upon agent initialization. For Docker, pass `executor_type="docker"` during initialization.
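
As a minimal sketch (assuming the `E2B_API_KEY` environment variable is already set, or that a local Docker daemon is running), switching executors is a single-argument change:

```py
from smolagents import CodeAgent, HfApiModel

model = HfApiModel()
# Run the agent's generated code in an E2B sandbox instead of locally;
# use executor_type="docker" to target a local Docker container instead.
agent = CodeAgent(tools=[], model=model, executor_type="e2b")
agent.run("Could you give me the 118th number in the Fibonacci sequence?")
```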





> [!TIP]

> Learn more about code execution [in this tutorial](tutorials/secure_code_execution).



We also support the widely used way of writing actions as JSON-like blobs: the [`ToolCallingAgent`]. It works in much the same way as [`CodeAgent`], but without `additional_authorized_imports`, since it doesn't execute code:



```py
from smolagents import ToolCallingAgent

agent = ToolCallingAgent(tools=[], model=model)
agent.run("Could you get me the title of the page at url 'https://huggingface.co/blog'?")
```



### Inspecting an agent run



Here are a few useful attributes to inspect what happened after a run:

- `agent.logs` stores the fine-grained logs of the agent. At every step of the agent's run, everything gets stored in a dictionary that is then appended to `agent.logs`.

- Running `agent.write_memory_to_messages()` writes the agent's memory as a list of chat messages for the Model to view. This method goes over each step of the log and only stores what it's interested in as a message: for instance, it will save the system prompt and task in separate messages, then for each step it will store the LLM output as a message, and the tool call output as another message. Use this if you want a higher-level view of what has happened, but not every log will be transcribed by this method (see the sketch just below).
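
For instance, here is a minimal sketch of both views, assuming `agent` has already completed a call to `agent.run(...)` as in the examples above:

```py
# Fine-grained view: one log entry per step of the run.
print(len(agent.logs), "steps logged")

# Higher-level view: the memory replayed as a list of chat messages.
for message in agent.write_memory_to_messages():
    # Messages follow the usual chat format; exact fields may vary by version.
    print(message["role"])
```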



## Tools



A tool is an atomic function to be used by an agent. To be used by an LLM, it also needs a few attributes that constitute its API and will be used to describe to the LLM how to call this tool:

- A name

- A description

- Input types and descriptions

- An output type



You can for instance check the [`PythonInterpreterTool`]: it has a name, a description, input descriptions, an output type, and a `forward` method to perform the action.
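
For a quick illustration, you can instantiate it and inspect those attributes directly (a sketch; the comments describe what each attribute holds rather than guaranteed values):

```python
from smolagents import PythonInterpreterTool

tool = PythonInterpreterTool()
print(tool.name)         # the tool's name
print(tool.description)  # what the tool does, written for the LLM
print(tool.inputs)       # input names, types, and descriptions
print(tool.output_type)  # the declared output type
```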



When the agent is initialized, the tool attributes are used to generate a tool description which is baked into the agent's system prompt. This lets the agent know which tools it can use and why.
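
As a rough check (a sketch, assuming the agent exposes its `system_prompt`; the exact prompt layout may vary by version), you can print the system prompt after initialization and look for your tools' descriptions in it:

```python
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=HfApiModel())
# The tool descriptions generated from the tool attributes are baked in here.
print(agent.system_prompt[:500])
```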



### Default toolbox



`smolagents` comes with a default toolbox for empowering agents that you can add to your agent upon initialization with the argument `add_base_tools=True`:



- **DuckDuckGo web search**: performs a web search using DuckDuckGo.

- **Python code interpreter**: runs your LLM-generated Python code in a secure environment. This tool will only be added to [`ToolCallingAgent`] if you initialize it with `add_base_tools=True`, since a code-based agent can already natively execute Python code.

- **Transcriber**: a speech-to-text pipeline built on Whisper-Turbo that transcribes audio to text.



You can manually use a tool by calling it with its arguments.



```python
from smolagents import DuckDuckGoSearchTool

search_tool = DuckDuckGoSearchTool()
print(search_tool("Who's the current president of Russia?"))
```



### Create a new tool



You can create your own tool for use cases not covered by the default tools from Hugging Face.

For example, let's create a tool that returns the most downloaded model for a given task from the Hub.



You'll start with the code below.



```python
from huggingface_hub import list_models

task = "text-classification"

most_downloaded_model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
print(most_downloaded_model.id)
```



This code can quickly be converted into a tool, just by wrapping it in a function and adding the `tool` decorator.
This is not the only way to build the tool: you can directly define it as a subclass of [`Tool`], which gives you more flexibility, for instance the possibility to initialize heavy class attributes.



Let's see how it works for both options:



<hfoptions id="build-a-tool">

<hfoption id="Decorate a function with @tool">



```py
from huggingface_hub import list_models
from smolagents import tool

@tool
def model_download_tool(task: str) -> str:
    """
    This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub.
    It returns the name of the checkpoint.

    Args:
        task: The task for which to get the download count.
    """
    most_downloaded_model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
    return most_downloaded_model.id
```



The function needs:

- A clear name. The name should be descriptive enough of what this tool does to help the LLM brain powering the agent. Since this tool returns the model with the most downloads for a task, let's name it `model_download_tool`.

- Type hints on both inputs and output

- A description that includes an 'Args:' part where each argument is described (without a type indication this time, it will be pulled from the type hint). Same as for the tool name, this description is an instruction manual for the LLM powering your agent, so do not neglect it.

All these elements will be automatically baked into the agent's system prompt upon initialization: so strive to make them as clear as possible!
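
As a quick sanity check (a sketch, not required), you can print the attributes the decorator extracted; they mirror the attributes of a [`Tool`] subclass shown in the other tab:

```py
# The @tool decorator turns the function into a tool object whose metadata
# was extracted from the function name, type hints, and docstring.
print(model_download_tool.name)         # "model_download_tool"
print(model_download_tool.description)  # pulled from the docstring
print(model_download_tool.inputs)       # built from the type hints and 'Args:' section
```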



> [!TIP]

> This definition format is the same as tool schemas used in `apply_chat_template`, the only difference is the added `tool` decorator: read more on our tool use API [here](https://huggingface.co/blog/unified-tool-use#passing-tools-to-a-chat-template).

</hfoption>

<hfoption id="Subclass Tool">



```py
from huggingface_hub import list_models
from smolagents import Tool

class ModelDownloadTool(Tool):
    name = "model_download_tool"
    description = "This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub. It returns the name of the checkpoint."
    inputs = {"task": {"type": "string", "description": "The task for which to get the download count."}}
    output_type = "string"

    def forward(self, task: str) -> str:
        most_downloaded_model = next(iter(list_models(filter=task, sort="downloads", direction=-1)))
        return most_downloaded_model.id
```



The subclass needs the following attributes:

- A clear `name`. The name should be descriptive enough of what this tool does to help the LLM brain powering the agent. Since this tool returns the model with the most downloads for a task, let's name it `model_download_tool`.

- A `description`. Same as for the `name`, this description is an instruction manual for the LLM powering your agent, so do not neglect it.

- Input types and descriptions

- Output type

All these attributes will be automatically baked into the agent's system prompt upon initialization: so strive to make them as clear as possible!

</hfoption>

</hfoptions>





Then you can directly initialize your agent:

```py
from smolagents import CodeAgent, HfApiModel

agent = CodeAgent(tools=[model_download_tool], model=HfApiModel())
agent.run(
    "Can you give me the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub?"
)
```



You get the following logs:

```text
╭──────────────────────────────────────────── New run ─────────────────────────────────────────────╮
│                                                                                                   │
│ Can you give me the name of the model that has the most downloads in the 'text-to-video'         │
│ task on the Hugging Face Hub?                                                                     │
│                                                                                                   │
╰─ HfApiModel - Qwen/Qwen2.5-Coder-32B-Instruct ───────────────────────────────────────────────────╯
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 0 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
╭─ Executing this code: ────────────────────────────────────────────────────────────────────────────╮
│   1 model_name = model_download_tool(task="text-to-video")                                        │
│   2 print(model_name)                                                                             │
╰───────────────────────────────────────────────────────────────────────────────────────────────────╯
Execution logs:
ByteDance/AnimateDiff-Lightning

Out: None
[Step 0: Duration 0.27 seconds| Input tokens: 2,069 | Output tokens: 60]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 1 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
╭─ Executing this code: ────────────────────────────────────────────────────────────────────────────╮
│   1 final_answer("ByteDance/AnimateDiff-Lightning")                                               │
╰───────────────────────────────────────────────────────────────────────────────────────────────────╯
Out - Final answer: ByteDance/AnimateDiff-Lightning
[Step 1: Duration 0.10 seconds| Input tokens: 4,288 | Output tokens: 148]
Out[20]: 'ByteDance/AnimateDiff-Lightning'
```



> [!TIP]

> Read more on tools in the [dedicated tutorial](./tutorials/tools#what-is-a-tool-and-how-to-build-one).



## Multi-agents



Multi-agent systems have been introduced with Microsoft's framework [Autogen](https://huggingface.co/papers/2308.08155).



In this type of framework, you have several agents working together to solve your task instead of only one.

It empirically yields better performance on most benchmarks. The reason for this better performance is conceptually simple: for many tasks, rather than using a do-it-all system, you would prefer to specialize units on sub-tasks. Here, having agents with separate tool sets and memories enables efficient specialization. For instance, why fill the memory of the code-generating agent with all the content of webpages visited by the web search agent? It's better to keep them separate.



You can easily build hierarchical multi-agent systems with `smolagents`.



To do so, just ensure your agent has `name` and `description` attributes, which will then be embedded in the manager agent's system prompt to let it know how to call this managed agent, as we also do for tools.
Then you can pass this managed agent in the `managed_agents` parameter upon initialization of the manager agent.

Here's an example of making an agent that manages a specific web search agent using our [`DuckDuckGoSearchTool`]:



```py
from smolagents import CodeAgent, HfApiModel, DuckDuckGoSearchTool

model = HfApiModel()

web_agent = CodeAgent(
    tools=[DuckDuckGoSearchTool()],
    model=model,
    name="web_search",
    description="Runs web searches for you. Give it your query as an argument."
)

manager_agent = CodeAgent(
    tools=[], model=model, managed_agents=[web_agent]
)

manager_agent.run("Who is the CEO of Hugging Face?")
```



> [!TIP]

> For an in-depth example of an efficient multi-agent implementation, see [how we pushed our multi-agent system to the top of the GAIA leaderboard](https://huggingface.co/blog/beating-gaia).





## Talk with your agent and visualize its thoughts in a cool Gradio interface



You can use `GradioUI` to interactively submit tasks to your agent and observe its thought and execution process. Here is an example:



```py
from smolagents import (
    load_tool,
    CodeAgent,
    HfApiModel,
    GradioUI
)

# Import tool from Hub
image_generation_tool = load_tool("m-ric/text-to-image", trust_remote_code=True)

model = HfApiModel()  # uses a default free model if no model_id is passed

# Initialize the agent with the image generation tool
agent = CodeAgent(tools=[image_generation_tool], model=model)

GradioUI(agent).launch()
```



Under the hood, when the user types a new message, the agent is launched with `agent.run(user_request, reset=False)`.
The `reset=False` flag means the agent's memory is not flushed before launching this new task, which lets the conversation go on.



You can also use this `reset=False` argument to keep the conversation going in any other agentic application.
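
For instance (a minimal sketch, reusing the `agent` from above), a two-turn exchange could look like this:

```py
# First turn: a fresh run (memory is reset by default).
agent.run("Generate an image of a lighthouse at sunset.")

# Follow-up turn: reset=False keeps the memory from the first turn,
# so the agent can refer back to what it already did.
agent.run("Now make the same image, but at night.", reset=False)
```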



## Next steps



Finally, when you've configured your agent to your needs, you can share it to the Hub!



```py
agent.push_to_hub("m-ric/my_agent")
```



Similarly, to load an agent that has been pushed to the Hub, if you trust the code of its tools, use:
```py
agent = CodeAgent.from_hub("m-ric/my_agent", trust_remote_code=True)
```



For more in-depth usage, you will then want to check out our tutorials:

- [the explanation of how our code agents work](./tutorials/secure_code_execution)

- [this guide on how to build good agents](./tutorials/building_good_agents)

- [the in-depth guide for tool usage](./tutorials/tools)