---
title: VacAIgent
emoji: 🐨
colorFrom: yellow
colorTo: purple
sdk: streamlit
sdk_version: 1.45.1
app_file: app.py
pinned: false
license: mit
short_description: Let AI agents plan your next vacation!
---

# 🏖️ VacAIgent: Let AI agents plan your next vacation!

VacAIgent leverages the CrewAI agentic framework to automate and enhance the trip-planning experience, integrating a user-friendly Streamlit interface. This project demonstrates how autonomous AI agents can collaborate and execute complex tasks efficiently to plan a vacation. It uses the [Intel® AI for Enterprise Inference](https://github.com/opea-project/Enterprise-Inference) endpoint with an OpenAI-compatible API key.

_Forked and enhanced from the_ [_crewAI examples repository_](https://github.com/joaomdmoura/crewAI-examples/tree/main/trip_planner). You can find the application hosted on Hugging Face Spaces [here](https://huggingface.co/spaces/Intel/vacaigent):

[![](images/hf_vacaigent.png)](https://huggingface.co/spaces/Intel/vacaigent)

**Check out the video below for a code walkthrough; the written steps follow** 👇

<a href="https://youtu.be/nKG_kbQUDDE">
  <img src="https://img.youtube.com/vi/nKG_kbQUDDE/hqdefault.jpg" alt="Watch the video" width="100%">
</a>

(_Trip example originally developed by [@joaomdmoura](https://x.com/joaomdmoura)_)

## Installing and Using the Application

### Prerequisites
1. Get an API key from [ScrapingAnt](https://scrapingant.com/) for HTML web scraping.
2. Get an API key from [Serper](https://serper.dev/) for the Google Search API.
3. Bring your own OpenAI-compatible API key.
4. Bring your model endpoint URL and LLM model ID (a quick sanity check of the endpoint appears after this list).
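
Before wiring everything into the app, you can sanity-check the endpoint and key with a short script. This is a minimal sketch using the `openai` Python client; the base URL and model ID shown match the defaults used later in this README, so substitute your own values:

```python
# sanity_check.py -- confirm the OpenAI-compatible endpoint accepts your key and model ID.
# Assumes `pip install openai`; the values below are placeholders for your own.
from openai import OpenAI

client = OpenAI(
    api_key="your-openai-compatible-api-key",             # your API key
    base_url="https://api.inference.denvrdata.com/v1/",   # your model endpoint URL
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.3-70B-Instruct",            # your LLM model ID
    messages=[{"role": "user", "content": "Suggest one warm destination for December."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```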

### Installation steps

To host the interface locally, first clone the repository:
```sh
git clone https://huggingface.co/spaces/Intel/vacaigent
cd vacaigent
```
Then, install the necessary libraries:
```sh
pip install -r requirements.txt
```
Add Streamlit secrets. Create a `.streamlit/secrets.toml` file and fill in the variables below with your own values:

```toml
SERPER_API_KEY="serper-api-key"
SCRAPINGANT_API_KEY="scrapingant_api_key"
OPENAI_API_KEY="openai_api_key"
MODEL_ID="meta-llama/Llama-3.3-70B-Instruct"
MODEL_BASE_URL="https://api.inference.denvrdata.com/v1/"
```

By default, this project uses the model [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) with an endpoint hosted on Denvr Dataworks, but you can bring your own OpenAI-compatible API key, model ID, and model endpoint URL.

**Note**: You can alternatively add these secrets directly to Hugging Face Spaces Secrets, under the Settings tab, if deploying the Streamlit application directly on Hugging Face.
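
For reference, here is a minimal sketch of how these secrets could be read inside the Streamlit app and turned into an LLM configuration for CrewAI. The actual wiring lives in `app.py` and `trip_agents.py` and may differ; `LLM` is CrewAI's generic model wrapper, and the tool keys are assumed to be passed along as environment variables:

```python
# Sketch only -- the real app.py may organize this differently.
import os
import streamlit as st
from crewai import LLM

# Expose the search/scraping keys to the tools via environment variables.
os.environ["SERPER_API_KEY"] = st.secrets["SERPER_API_KEY"]
os.environ["SCRAPINGANT_API_KEY"] = st.secrets["SCRAPINGANT_API_KEY"]

# Point CrewAI at the OpenAI-compatible endpoint configured in secrets.toml.
llm = LLM(
    model=f"openai/{st.secrets['MODEL_ID']}",  # LiteLLM-style provider prefix
    base_url=st.secrets["MODEL_BASE_URL"],
    api_key=st.secrets["OPENAI_API_KEY"],
)
```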

### Run the application
To run the application locally, execute this command to pull up a Streamlit interface in your web browser:
```sh
streamlit run app.py
```

### Components
  - [trip_tasks.py](trip_tasks.py): Contains task prompts for the agents.
  - [trip_agents.py](trip_agents.py): Manages the creation of agents.
  - [tools](tools) directory: Houses tool classes used by agents.
  - [app.py](app.py): The heart of the frontend Streamlit app.
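
As a rough mental model, these pieces combine along the following lines. This is a hypothetical sketch; the agent roles, task wording, and variable names are illustrative rather than the exact ones used in this repository:

```python
# Hypothetical sketch of how agents, tasks, and the crew fit together.
from crewai import Agent, Task, Crew

city_expert = Agent(
    role="Local Expert",
    goal="Recommend the best destination and sights for the trip",
    backstory="A seasoned travel guide with deep local knowledge.",
    # llm=llm,  # pass the LLM built from secrets (see earlier sketch) to use your endpoint
)

plan_task = Task(
    description="Plan a day-by-day itinerary for the traveler's dates, budget, and interests.",
    expected_output="A detailed itinerary with sights, restaurants, and budget estimates.",
    agent=city_expert,
)

crew = Crew(agents=[city_expert], tasks=[plan_task])
result = crew.kickoff()
print(result)
```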

## Using Local Models with Ollama

For enhanced privacy and customization, you can replace the cloud-hosted models with locally hosted models from [Ollama](https://ollama.com/).
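
For example, with Ollama running locally and a model pulled (e.g. `ollama pull llama3.1`), the secrets can point at Ollama's OpenAI-compatible endpoint instead. This is a sketch under the assumption that only the three values below need to change; Ollama serves its OpenAI-compatible API at `http://localhost:11434/v1` and does not validate the API key, so any placeholder works:

```toml
OPENAI_API_KEY="ollama"                      # Ollama ignores the key; any value works
MODEL_ID="llama3.1"                          # any model you have pulled locally
MODEL_BASE_URL="http://localhost:11434/v1"   # Ollama's OpenAI-compatible endpoint
```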

## License

VacAIgent is open-sourced under the MIT license.

## Follow Up

Connect to LLMs on Intel Gaudi AI accelerators with just an endpoint URL and an OpenAI-compatible API key, using [Intel® AI for Enterprise Inference](https://github.com/opea-project/Enterprise-Inference), powered by OPEA. At the time of writing, the endpoint is available from the cloud provider [Denvr Dataworks](https://www.denvrdata.com/intel).

Chat with 6K+ fellow developers on the [Intel DevHub Discord](https://discord.gg/kfJ3NKEw5t).

Follow [Intel Software on LinkedIn](https://www.linkedin.com/showcase/intel-software/).

For more Intel AI developer resources, see [developer.intel.com/ai](https://developer.intel.com/ai).