---
license: apache-2.0
title: llmOS-Agent
sdk: docker
emoji: π
colorFrom: blue
colorTo: yellow
short_description: An LLM agent that can use a Linux VM to accomplish tasks.
hf_oauth: true
---
# llmOS-Agent
`llmOS-Agent` provides an asynchronous chat interface built around Ollama models. It supports running shell commands in an isolated Linux VM and persists conversations in SQLite.
## Features
- **Persistent chat history** – conversations are stored in `chat.db` per user and session so they can be resumed later.
- **Tool execution** – a built-in `execute_terminal` tool runs commands inside a Docker-based VM using `docker exec -i`. Network access is enabled, and both stdout and stderr are captured (up to 10,000 characters). The VM is reused across chats when `PERSIST_VMS=1`, so installed packages remain available.
- **System prompts** – every request includes a system prompt that guides the assistant to plan tool usage, verify results, and avoid unnecessary jargon.
- **Gradio interface** – a web UI in `gradio_app.py` lets you chat and browse the VM file system. The Files tab allows navigating any directory inside the container.
## Environment Variables
Several settings can be customised via environment variables:
- `DB_PATH` – location of the SQLite database (default: `chat.db` in the project directory).
- `LOG_LEVEL` – logging verbosity (`DEBUG`, `INFO`, etc.).
- `VM_IMAGE` and `VM_STATE_DIR` – control the Docker image and where per-user VM state is stored.
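A minimal sketch of how these settings might be resolved at startup. The variable names match this README; the defaults are assumptions, except the `python:3.11-slim` fallback and the `PERSIST_VMS=1` default, which the VM Configuration section states.

```python
import os


def load_config(env=os.environ):
    """Resolve runtime settings from environment variables.

    Illustrative only; default values other than vm_image and
    persist_vms are assumptions, not taken from the project.
    """
    return {
        "db_path": env.get("DB_PATH", "chat.db"),
        "log_level": env.get("LOG_LEVEL", "INFO"),
        "vm_image": env.get("VM_IMAGE", "python:3.11-slim"),
        "vm_state_dir": env.get("VM_STATE_DIR", "vm_state"),  # assumed default
        "persist_vms": env.get("PERSIST_VMS", "1") == "1",
    }
```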
## Quick Start
```bash
python run.py
```
The script issues a sample command to the model and prints the streamed response. Uploaded files go to `uploads` and are mounted in the VM at `/data`.
### Uploading Documents
```python
async with ChatSession() as chat:
    path = chat.upload_document("path/to/file.pdf")
    async for part in chat.chat_stream(f"Summarize {path}"):
        print(part)
```
## Discord Bot
1. Create a `.env` file with your bot token:
```bash
DISCORD_TOKEN="your-token"
```
2. Start the bot:
```bash
python -m bot
```
Attachments sent to the bot are uploaded automatically and the VM path is returned so they can be referenced in later messages.
## VM Configuration
The shell commands run inside a Docker container. By default the image defined by `VM_IMAGE` is used (falling back to `python:3.11-slim`). When `PERSIST_VMS=1` (default) each user keeps the same container across sessions. Set `VM_STATE_DIR` to choose where per-user data is stored on the host.
To build a more complete environment you can create your own image, for example using `docker/Dockerfile.vm`:
```Dockerfile
FROM ubuntu:22.04
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        python3 \
        python3-pip \
        sudo \
        curl \
        git \
        build-essential \
    && rm -rf /var/lib/apt/lists/*
CMD ["sleep", "infinity"]
```
Build and run with:
```bash
docker build -t llm-vm -f docker/Dockerfile.vm .
export VM_IMAGE=llm-vm
python run.py
```
## REST API
Start the API server either as a module or via `uvicorn`:
```bash
python -m api_app
# or
uvicorn api_app:app --host 0.0.0.0 --port 8000
```
### Endpoints
- `POST /chat/stream` – stream the assistant's response as plain text.
- `POST /upload` – upload a document that can be referenced in chats.
- `GET /sessions/{user}` – list available session names for a user.
- `GET /vm/{user}/list` – list files in a directory under `/data`.
- `GET /vm/{user}/file` – read a file from the VM.
- `POST /vm/{user}/file` – create or overwrite a file in the VM.
- `DELETE /vm/{user}/file` – delete a file or directory from the VM.
Example request:
```bash
curl -N -X POST http://localhost:8000/chat/stream \
-H 'Content-Type: application/json' \
-d '{"user":"demo","session":"default","prompt":"Hello"}'
```
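The same streaming request can be made from Python with the standard library. The helper below mirrors the curl example; the JSON field names come from this README, and the optional `X-API-Key` header is described under Security. `build_chat_request` is a hypothetical helper, not part of the project.

```python
import json
import urllib.request


def build_chat_request(base_url, user, session, prompt, api_key=None):
    """Build a POST request to /chat/stream matching the curl example."""
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["X-API-Key"] = api_key  # only needed when API_KEYS is set
    body = json.dumps({"user": user, "session": session, "prompt": prompt}).encode()
    return urllib.request.Request(
        f"{base_url}/chat/stream", data=body, headers=headers, method="POST"
    )


# Streaming usage (requires a running server):
# req = build_chat_request("http://localhost:8000", "demo", "default", "Hello")
# with urllib.request.urlopen(req) as resp:
#     for chunk in resp:
#         print(chunk.decode(), end="")
```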
### Security
Set one or more API keys in the `API_KEYS` environment variable. When keys are
configured, requests must include the `X-API-Key` header. A simple rate limit is
also enforced per key or client IP, configurable via `RATE_LIMIT`.
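The README does not specify the rate-limiting algorithm; a fixed-window counter keyed by API key or client IP is one minimal way such a limit could work, sketched below. This is an illustration of the idea, not the project's actual implementation.

```python
import time
from collections import defaultdict


class FixedWindowLimiter:
    """Minimal fixed-window rate limiter keyed by API key or client IP.

    Illustrative only; the actual RATE_LIMIT behaviour may differ.
    """

    def __init__(self, limit, window=60.0):
        self.limit = limit          # allowed requests per window
        self.window = window        # window length in seconds
        self._counts = defaultdict(int)
        self._window_start = defaultdict(float)

    def allow(self, key, now=None):
        """Return True if the request identified by `key` is within the limit."""
        now = time.monotonic() if now is None else now
        if now - self._window_start[key] >= self.window:
            # Window expired: start a fresh one for this key.
            self._window_start[key] = now
            self._counts[key] = 0
        if self._counts[key] >= self.limit:
            return False
        self._counts[key] += 1
        return True
```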
## Command Line Interface
Run the interactive CLI on any platform:
```bash
python -m src.cli --user yourname
```
Existing sessions are listed and you can create new ones. Type messages to see streamed replies. Use `exit` or `Ctrl+D` to quit.
### Windows Executable
For a standalone Windows build install `pyinstaller` and run:
```bash
pyinstaller --onefile -n llm-chat cli_app/main.py
```
The resulting `llm-chat.exe` works on Windows 10/11.
## macOS GUI Application
A simple graphical client built with Tkinter lives in the `mac_gui` module. It
provides a text chat interface and supports file uploads via the REST API.
### Run the GUI
```bash
pip install -r requirements.txt
python -m mac_gui
```
Use the fields at the top of the window to configure the API URL, optional API
key, user name and session. Type a message and press **Send** to chat or click
**Upload** to select a document to upload. Responses stream into the main text
area.