<!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Secure code execution
[[open-in-colab]]
> [!TIP]
> If you're new to building agents, make sure to first read the [intro to agents](../conceptual_guides/intro_agents) and the [guided tour of smolagents](../guided_tour).
### Code agents
[Multiple](https://huggingface.co/papers/2402.01030) [research](https://huggingface.co/papers/2411.01747) [papers](https://huggingface.co/papers/2401.00812) have shown that having the LLM write its actions (the tool calls) in code works much better than the current standard format for tool calling, which across the industry amounts to different shades of "writing actions as a JSON of tool names and arguments to use".
Why is code better? Well, because we crafted our programming languages specifically to be great at expressing actions performed by a computer. If JSON snippets were a better way, this package would have been written in JSON snippets and the devil would be laughing at us.
Code is just a better way to express actions on a computer. It has better:
- **Composability:** could you nest JSON actions within each other, or define a set of JSON actions to re-use later, the same way you could just define a Python function? (See the sketch after this list.)
- **Object management:** how do you store the output of an action like `generate_image` in JSON?
- **Generality:** code is built to express simply anything you can have a computer do.
- **Representation in LLM training corpus:** why not leverage the fact that plenty of high-quality code actions are already included in LLM training corpora?
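To make the first two points concrete, here is a minimal sketch with stub functions standing in for tools (`search` and `generate_image` are hypothetical names, purely for illustration): a code action can loop over calls, define reusable helpers, and pass one action's output straight into the next, none of which maps cleanly onto a JSON blob of tool names and arguments.
```python
# Stub stand-ins for real tools, purely for illustration.
def search(query: str) -> str:
    return f"results for {query}"

def generate_image(prompt: str) -> str:
    return f"<image generated from: {prompt}>"

# Composability: loop over tool calls and define reusable helpers.
def best_result(queries: list[str]) -> str:
    return max((search(q) for q in queries), key=len)

# Object management: feed one action's output directly into the next action.
image = generate_image(best_result(["solar panels", "wind turbines"]))
print(image)
```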
This is illustrated in the figure below, taken from [Executable Code Actions Elicit Better LLM Agents](https://huggingface.co/papers/2402.01030).
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/code_vs_json_actions.png">
This is why we put emphasis on proposing code agents, in this case Python agents, which meant putting more effort into building secure Python interpreters.
### Local code execution??
By default, the `CodeAgent` runs LLM-generated code in your environment.
This is inherently risky: LLM-generated code could be harmful to your environment.
One could argue that on the [spectrum of agency](../conceptual_guides/intro_agents), code agents give much higher agency to the LLM on your system than other less agentic setups: this goes hand-in-hand with higher risk.
So you need to be mindful of security.
To add a first layer of security, code execution in `smolagents` is not performed by the vanilla Python interpreter.
We have re-built a more secure `LocalPythonExecutor` from the ground up.
To be precise, this interpreter works by loading the Abstract Syntax Tree (AST) of the generated code and executing it operation by operation, making sure to always follow certain rules (a usage sketch follows the list below):
- By default, imports are disallowed unless they have been explicitly added to an authorization list by the user.
- Even so, because some innocuous packages like `re` can give access to potentially harmful packages as in `re.subprocess`, subpackages that match a list of dangerous patterns are not imported.
- The total count of elementary operations processed is capped to prevent infinite loops and resource bloating.
- Any operation that has not been explicitly defined in our custom interpreter will raise an error.
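For example, you can extend the import authorization list when creating a `CodeAgent` through its `additional_authorized_imports` argument. A minimal sketch (the question asked is arbitrary; anything not on the list, like `os`, stays disallowed and raises an error at execution time):
```python
from smolagents import CodeAgent, HfApiModel

# Explicitly authorize extra imports; everything else (e.g. `os`) remains disallowed
# and will raise an error if the generated code tries to import it.
agent = CodeAgent(
    model=HfApiModel(),
    tools=[],
    additional_authorized_imports=["datetime", "math"],
)
agent.run("What weekday will it be 100 days from now?")
```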
As a result, this interpreter is safer. We have used it across a variety of use cases without ever observing any damage to the environment.
However, this solution is certainly not watertight, as no local Python sandbox can really be: one could imagine occasions where LLMs fine-tuned for malicious actions could still hurt your environment.
For instance, if you have allowed an innocuous package like `Pillow` to process images, the LLM could generate thousands of image saves to bloat your hard drive.
Other examples of attacks can be found [here](https://gynvael.coldwind.pl/n/python_sandbox_escape).
Running such targeted malicious code snippets would require a supply chain attack, meaning the LLM you use has been poisoned.
The likelihood of this happening is low when using well-known LLMs from trusted inference providers, but it is still non-zero.
> [!WARNING]
> The only way to run LLM-generated code securely is to isolate the execution from your local environment.
So if you want to exercise caution, you should use a remote execution sandbox.
Here are examples of how to do it.
## Sandbox setup for secure code execution
When working with AI agents that execute code, security is paramount. This guide describes how to set up and use secure sandboxes for your agent applications using either E2B cloud sandboxes or local Docker containers.
### E2B setup
#### Installation
1. Create an E2B account at [e2b.dev](https://e2b.dev)
2. Install the required packages:
```bash
pip install 'smolagents[e2b]'
```
#### Running your agent in E2B: mono agents
We provide a simple way to use an E2B Sandbox: simply add `executor_type="e2b"` to the agent initialization, like:
```py
from smolagents import HfApiModel, CodeAgent
agent = CodeAgent(model=HfApiModel(), tools=[], executor_type="e2b")
agent.run("Can you give me the 100th Fibonacci number?")
```
However, this does not work (yet) with more complicated multi-agent setups.
#### Running your agent in E2B: multi-agents
To use multi-agents in an E2B sandbox, you need to run your agents completely from within E2B.
Here is how to do it:
```python
from e2b_code_interpreter import Sandbox
import os
# Create the sandbox
sandbox = Sandbox()
# Install required packages
sandbox.commands.run("pip install smolagents")
def run_code_raise_errors(sandbox, code: str, verbose: bool = False) -> str:
    execution = sandbox.run_code(
        code,
        envs={'HF_TOKEN': os.getenv('HF_TOKEN')}
    )
    if execution.error:
        execution_logs = "\n".join([str(log) for log in execution.logs.stdout])
        logs = execution_logs
        logs += execution.error.traceback
        raise ValueError(logs)
    return "\n".join([str(log) for log in execution.logs.stdout])

# Define your agent application
agent_code = """
import os
from smolagents import CodeAgent, HfApiModel

# Initialize the agents
agent = CodeAgent(
    model=HfApiModel(token=os.getenv("HF_TOKEN"), provider="together"),
    tools=[],
    name="coder_agent",
    description="This agent takes care of your difficult algorithmic problems using code."
)

manager_agent = CodeAgent(
    model=HfApiModel(token=os.getenv("HF_TOKEN"), provider="together"),
    tools=[],
    managed_agents=[agent],
)

# Run the agent
response = manager_agent.run("What's the 20th Fibonacci number?")
print(response)
"""
# Run the agent code in the sandbox
execution_logs = run_code_raise_errors(sandbox, agent_code)
print(execution_logs)
```
### Docker setup
#### Installation
1. [Install Docker on your system](https://docs.docker.com/get-started/get-docker/)
2. Install the required packages:
```bash
pip install 'smolagents[docker]'
```
#### Setting up the docker sandbox
Create a Dockerfile for your agent environment:
```dockerfile
FROM python:3.10-bullseye
# Install build dependencies
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        build-essential \
        python3-dev && \
    pip install --no-cache-dir --upgrade pip && \
    pip install --no-cache-dir smolagents && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
# Set working directory
WORKDIR /app
# Run with limited privileges
USER nobody
# Default command
CMD ["python", "-c", "print('Container ready')"]
```
Create a sandbox manager to run code:
```python
import docker
import os
from typing import Optional
class DockerSandbox:
    def __init__(self):
        self.client = docker.from_env()
        self.container = None

    def create_container(self):
        try:
            image, build_logs = self.client.images.build(
                path=".",
                tag="agent-sandbox",
                rm=True,
                forcerm=True,
                buildargs={},
                # decode=True
            )
        except docker.errors.BuildError as e:
            print("Build error logs:")
            for log in e.build_log:
                if 'stream' in log:
                    print(log['stream'].strip())
            raise

        # Create container with security constraints and proper logging
        self.container = self.client.containers.run(
            "agent-sandbox",
            command="tail -f /dev/null",  # Keep container running
            detach=True,
            tty=True,
            mem_limit="512m",
            cpu_quota=50000,
            pids_limit=100,
            security_opt=["no-new-privileges"],
            cap_drop=["ALL"],
            environment={
                "HF_TOKEN": os.getenv("HF_TOKEN")
            },
        )

    def run_code(self, code: str) -> Optional[str]:
        if not self.container:
            self.create_container()

        # Execute code in container
        exec_result = self.container.exec_run(
            cmd=["python", "-c", code],
            user="nobody"
        )

        # Collect all output
        return exec_result.output.decode() if exec_result.output else None

    def cleanup(self):
        if self.container:
            try:
                self.container.stop()
            except docker.errors.NotFound:
                # Container already removed, this is expected
                pass
            except Exception as e:
                print(f"Error during cleanup: {e}")
            finally:
                self.container = None  # Clear the reference
# Example usage:
sandbox = DockerSandbox()
try:
    # Define your agent code
    agent_code = """
import os
from smolagents import CodeAgent, HfApiModel

# Initialize the agent
agent = CodeAgent(
    model=HfApiModel(token=os.getenv("HF_TOKEN"), provider="together"),
    tools=[]
)

# Run the agent
response = agent.run("What's the 20th Fibonacci number?")
print(response)
"""

    # Run the code in the sandbox
    output = sandbox.run_code(agent_code)
    print(output)
finally:
    sandbox.cleanup()
```
### Best practices for sandboxes
These key practices apply to both E2B and Docker sandboxes:
- Resource management
  - Set memory and CPU limits
  - Implement execution timeouts (a sketch follows this list)
  - Monitor resource usage
- Security
  - Run with minimal privileges
  - Disable unnecessary network access
  - Use environment variables for secrets
- Environment
  - Keep dependencies minimal
  - Use fixed package versions
  - If you use base images, update them regularly
- Cleanup
  - Always ensure proper cleanup of resources, especially for Docker containers, to avoid dangling containers eating up resources.
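For instance, execution timeouts are not built into the `DockerSandbox` example above. One way to add them is to wrap the command with the `timeout` utility from coreutils (a minimal sketch under the assumption that `timeout` is available in the image, which it is for the Debian-based image used earlier):
```python
from typing import Optional

def run_code_with_timeout(sandbox: DockerSandbox, code: str, timeout_seconds: int = 60) -> Optional[str]:
    """Run code in the sandbox, killing it if it exceeds the time budget."""
    if not sandbox.container:
        sandbox.create_container()
    exec_result = sandbox.container.exec_run(
        # `timeout` terminates the command and exits with code 124 if it runs too long
        cmd=["timeout", str(timeout_seconds), "python", "-c", code],
        user="nobody",
    )
    if exec_result.exit_code == 124:
        return f"Execution timed out after {timeout_seconds} seconds"
    return exec_result.output.decode() if exec_result.output else None
```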
✨ By following these practices and implementing proper cleanup procedures, you can ensure your agent runs safely and efficiently in a sandboxed environment.