fabiochiu committed
Commit 93b922f · 1 Parent(s): 9995b11

clean code and update readme

Files changed (4)
  1. .gitignore +4 -2
  2. README.md +20 -43
  3. app.py +84 -132
  4. requirements.txt +0 -0
.gitignore CHANGED
@@ -1,3 +1,5 @@
- venv
- .env
  .DS_Store
+ .env
+ data/*
+ ~data/.gitkeep
+ venv
README.md CHANGED
@@ -1,59 +1,36 @@
- # Towards AI 🤖: An AI Question-Answering Bot
+ # Starting Point for the Final Project of the "From Beginner to Advanced LLM Developer" course

  ## Overview

- **Towards AI 🤖** is a question-answering bot designed to assist students with queries related to Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL). It leverages Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) techniques to provide insightful answers, utilizing a vector database for efficient retrieval of knowledge.
+ This repository contains the code for the final "Part 4: Building Your Own advanced LLM + RAG Project to receive certification" lesson of the "From Beginner to Advanced LLM Developer" course.

- ## Features
+ Congrats, you are at the last step of the course! In this final project, you'll have the chance to practice all the techniques you've learned and earn your certification.

- - AI, ML, and DL question-answering capabilities.
- - Integration with ChromaDB for persistent storage.
- - Utilizes OpenAI's models for generating responses.
- - Gradio interface for easy interaction.
- - Memory management for maintaining conversation context.
+ If you want, you can use this repository as a starting point for your final project. The code here is the same as in the "Building and Deploying a Gradio UI on Hugging Face Spaces" lesson, so you should already be familiar with it. To use it for your project, fork this repository on GitHub. By doing so, you'll create a copy of the repository in your GitHub account that you can modify as you want.

- ## Requirements
+ ## Setup

- Make sure you have installed the dependencies from requirements.txt file.
+ 1. Create a `.env` file and add your OpenAI API key to it. Its content should look something like:

  ```bash
- pip install -r requirements.txt
+ OPENAI_API_KEY="sk-..."
  ```

- ## Setup
-
- 1. **Clone the Repository**
-
- ```bash
- git clone https://huggingface.co/yourusername/towards-ai
- cd towards-ai
- ```
-
- 2. **Environment Variables**
-
- Create a .env file in the project root and set the following variables:
- ```bash
- OPENAI_API_KEY=
- LOGFIRE_TOKEN=
- COHERE_API_KEY=
- MONGODB_URI=
- DB_NAME=ai_tutor_knowledge
- ```
- 3. **Download the Vector Database**
-
- The bot requires a pre-trained vector database. If it doesn't exist locally, it will automatically download it from Hugging Face Hub when you run the code.
-
- 4. **Usage**
+ 2. Create a local virtual environment, for example using the `venv` module. Then, activate it.

- To start the chatbot, run the following command:
- ```bash
- python app.py
- ```
+ ```bash
+ python -m venv venv
+ source venv/bin/activate
+ ```

- 5. **Gradio Interface**
+ 3. Install the dependencies.

- Once the application is running, you can access the chatbot interface at http://localhost:7860.
+ ```bash
+ pip install -r requirements.txt
+ ```

- 6. **Interacting with the Bot**
+ 4. Launch the Gradio app.

- You can ask the bot any question related to AI, ML, or DL. The bot is designed to provide clear, complete answers based on its knowledge base.
+ ```bash
+ python app.py
+ ```
app.py CHANGED
@@ -4,21 +4,17 @@ import os

  # Third-party Imports
  from dotenv import load_dotenv
- from pymongo.mongo_client import MongoClient
- from pymongo.server_api import ServerApi
  import chromadb
  import gradio as gr
- import logfire
+ from huggingface_hub import snapshot_download

  # LlamaIndex (Formerly GPT Index) Imports
  from llama_index.core import VectorStoreIndex
- from llama_index.core.node_parser import SentenceSplitter
  from llama_index.core.retrievers import VectorIndexRetriever
  from llama_index.vector_stores.chroma import ChromaVectorStore
- from llama_index.postprocessor.cohere_rerank import CohereRerank
  from llama_index.core.llms import MessageRole
  from llama_index.core.memory import ChatSummaryMemoryBuffer
- from llama_index.core.tools import RetrieverTool, ToolMetadata,QueryEngineTool
+ from llama_index.core.tools import RetrieverTool, ToolMetadata
  from llama_index.agent.openai import OpenAIAgent
  from llama_index.embeddings.openai import OpenAIEmbedding
  from llama_index.llms.openai import OpenAI
@@ -29,52 +25,47 @@ load_dotenv()
  logger = logging.getLogger(__name__)
  logging.basicConfig(level=logging.INFO)
  logging.getLogger("httpx").setLevel(logging.WARNING)
- logfire.configure()

- system_message_openai_agent = """You are an AI teacher, answering questions from students of an applied AI course on Large Language Models (LLMs or llm) and Retrieval Augmented Generation (RAG) for LLMs.
+ PROMPT_SYSTEM_MESSAGE = """You are an AI teacher, answering questions from students of an applied AI course on Large Language Models (LLMs or llm) and Retrieval Augmented Generation (RAG) for LLMs.
  Topics covered include training models, fine-tuning models, giving memory to LLMs, prompting tips, hallucinations and bias, vector databases, transformer architectures, embeddings, RAG frameworks such as
  Langchain and LlamaIndex, making LLMs interact with tools, AI agents, reinforcement learning with human feedback (RLHF). Questions should be understood in this context. Your answers are aimed to teach
- students, so they should be complete, clear, and easy to understand. Use the available tools to gather insights pertinent to the field of AI. To answer student questions, always use the all_sources_info
- tool plus another one simultaneously. Decompose the user question into TWO sub questions (you are limited to two sub-questions) one for each tool. Meaning that should be using two tools in total for each user question.
+ students, so they should be complete, clear, and easy to understand. Use the available tools to gather insights pertinent to the field of AI.
+ To find relevant information for answering student questions, always use the "AI_Information_related_resources" tool.

- These are the guidelines to consider if you decide to create sub questions:
- * Be as specific as possible
- * The two sub questions should be relevant to the user question
- * The two sub questions should be answerable by the tools provided
-
- Only some information returned by the tools might be relevant to the question, so ignore the irrelevant part and answer the question with what you have. Your responses are exclusively based on the output provided
- by the tools. Refrain from incorporating information not directly obtained from the tool's responses. When the conversation deepens or shifts focus within a topic, adapt your input to the tools to reflect these nuances.
- This means if a user requests further elaboration on a specific aspect of a previously discussed topic, you should reformulate your input to the tool to capture this new angle or more profound layer of inquiry. Provide
+ Only some information returned by the tool might be relevant to the question, so ignore the irrelevant part and answer the question with what you have. Your responses are exclusively based on the output provided
+ by the tools. Refrain from incorporating information not directly obtained from the tool's responses.
+ If a user requests further elaboration on a specific aspect of a previously discussed topic, you should reformulate your input to the tool to capture this new angle or more profound layer of inquiry. Provide
  comprehensive answers, ideally structured in multiple paragraphs, drawing from the tool's variety of relevant details. The depth and breadth of your responses should align with the scope and specificity of the information retrieved.
- Should the tools repository lack information on the queried topic, politely inform the user that the question transcends the bounds of your current knowledge base, citing the absence of relevant content in the tool's documentation.
- At the end of your answers, always invite the students to ask deeper questions about the topic if they have any. Make sure reformulate the question to the tool to capture this new angle or more profound layer of inquiry.
+ Should the tool response lack information on the queried topic, politely inform the user that the question transcends the bounds of your current knowledge base, citing the absence of relevant content in the tool's documentation.
+ At the end of your answers, always invite the students to ask deeper questions about the topic if they have any.
  Do not refer to the documentation directly, but use the information provided within it to answer questions. If code is provided in the information, share it with the students. It's important to provide complete code blocks so
  they can execute the code when they copy and paste them. Make sure to format your answers in Markdown format, including code blocks and snippets.
  """
+
  TEXT_QA_TEMPLATE = """
- You must answer only related to AI, ML, Deep Learning and related concept queries. You should not
- answer on your own, Should answer from the retrieved chunks. If the query is not relevant to AI, you don't know the answer.
+ You must answer only related to AI, ML, Deep Learning and related concepts queries.
+ Always leverage the retrieved documents to answer the questions, don't answer them on your own.
+ If the query is not relevant to AI, say that you don't know the answer.
  """


- if not os.path.exists("data/ai_tutor_knowledge"):
-     os.makedirs("data/ai_tutor_knowledge")
-     # Download the vector database from the Hugging Face Hub if it doesn't exist locally
-     # https://huggingface.co/datasets/jaiganesan/ai_tutor_knowledge_vector_Store/tree/main
-     logfire.warn(
-         f"Vector database does not exist at 'data/ai_tutor_knowledge', downloading from Hugging Face Hub"
-     )
-     from huggingface_hub import snapshot_download
+ def download_knowledge_base_if_not_exists():
+     """Download the knowledge base from the Hugging Face Hub if it doesn't exist locally"""
+     if not os.path.exists("data/ai_tutor_knowledge"):
+         os.makedirs("data/ai_tutor_knowledge")

-     snapshot_download(
-         repo_id="jaiganesan/ai_tutor_knowledge_vector_Store",
-         local_dir="data",
-         repo_type="dataset",
-     )
-     logfire.info(f"Downloaded vector database to 'data/ai_tutor_knowledge'")
+         logging.warning(
+             f"Vector database does not exist at 'data/', downloading from Hugging Face Hub..."
+         )
+         snapshot_download(
+             repo_id="jaiganesan/ai_tutor_knowledge_vector_Store",
+             local_dir="data/ai_tutor_knowledge",
+             repo_type="dataset",
+         )
+         logging.info(f"Downloaded vector database to 'data/ai_tutor_knowledge'")


- def setup_database(db_collection):
+ def get_tools(db_collection="ai_tutor_knowledge"):
      db = chromadb.PersistentClient(path=f"data/{db_collection}")
      chroma_collection = db.get_or_create_collection(db_collection)
      vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
@@ -91,48 +82,12 @@ def setup_database(db_collection):
          embed_model=Settings.embed_model,
          use_async=True,
      )
-
-     cohere_reranker = CohereRerank(top_n=7, model="embed-english-v3.0")
-
-     index_query_engine = index.as_query_engine(
-         llm=Settings.llm,
-         text_qa_template=TEXT_QA_TEMPLATE,
-         streaming=True,
-         # node_postprocessors=[cohere_reranker],
-     )
-     return index_query_engine, vector_retriever
-
-
- DB_NAME = os.getenv("DB_NAME", "ai_tutor_knowledge")
- DB_PATH = os.getenv("DB_PATH", f"scripts/{DB_NAME}")
-
- query_engine, vector_retriever = setup_database(DB_NAME)
-
- # Constants
- CONCURRENCY_COUNT = int(os.getenv("CONCURRENCY_COUNT", 64))
-
-
- __all__ = [
-     "CONCURRENCY_COUNT",
- ]
-
-
- def update_query_engine_tools(query_engine_, vector_retriever_):
-
      tools = [
-         # QueryEngineTool(
-         #     query_engine=query_engine_,
-         #     metadata=ToolMetadata(
-         #         name="AI_information",
-         #         description="""The 'AI_information' tool serves as a comprehensive repository for insights into
-         #         the field of artificial intelligence.""",
-         #     ),
-         # ),
          RetrieverTool(
-             retriever=vector_retriever_,
+             retriever=vector_retriever,
              metadata=ToolMetadata(
                  name="AI_Information_related_resources",
-                 description="Retriever retrieves AI, ML, DL related information from Vector store collection.",
+                 description="Useful for info related to artificial intelligence, ML, deep learning. It gathers the info from local data.",
              ),
          )
      ]
@@ -140,75 +95,72 @@ def update_query_engine_tools(query_engine_, vector_retriever_):


  def generate_completion(query, history, memory):
-     with logfire.span("Running query"):
-         logfire.info(f"User query: {query}")
-
-         chat_list = memory.get()
-
-         if len(chat_list) != 0:
-             user_index = [i for i, msg in enumerate(chat_list) if msg.role == MessageRole.USER]
-             if len(user_index) > len(history):
-                 user_index_to_remove = user_index[len(history)]
-                 chat_list = chat_list[:user_index_to_remove]
-                 memory.set(chat_list)
-
-         logfire.info(f"chat_history: {len(memory.get())} {memory.get()}")
-         logfire.info(f"gradio_history: {len(history)} {history}")
-
-         llm = OpenAI(temperature=1, model="gpt-4o-mini", max_tokens=None)
-
-         client = llm._get_client()
-         logfire.instrument_openai(client)
-
-         agent_tools = update_query_engine_tools(query_engine, vector_retriever)
-
-         agent = OpenAIAgent.from_tools(
-             llm=Settings.llm,
-             memory=memory,
-             tools=agent_tools,
-             system_prompt=system_message_openai_agent,
-         )
+     logging.info(f"User query: {query}")
+
+     # Manage memory
+     chat_list = memory.get()
+     if len(chat_list) != 0:
+         user_index = [i for i, msg in enumerate(chat_list) if msg.role == MessageRole.USER]
+         if len(user_index) > len(history):
+             user_index_to_remove = user_index[len(history)]
+             chat_list = chat_list[:user_index_to_remove]
+             memory.set(chat_list)
+     logging.info(f"chat_history: {len(memory.get())} {memory.get()}")
+     logging.info(f"gradio_history: {len(history)} {history}")
+
+     # Create agent
+     tools = get_tools(db_collection="ai_tutor_knowledge")
+     agent = OpenAIAgent.from_tools(
+         llm=Settings.llm,
+         memory=memory,
+         tools=tools,
+         system_prompt=PROMPT_SYSTEM_MESSAGE,
+     )

+     # Generate answer
      completion = agent.stream_chat(query)
-
      answer_str = ""
      for token in completion.response_gen:
          answer_str += token
          yield answer_str

- def vote(data: gr.LikeData):
-     pass
- def save_completion(completion, history):
-     pass

- with gr.Blocks(
-     fill_height=True,
-     title="Towards AI 🤖",
-     analytics_enabled=True,
- ) as demo:
+ def launch_ui():
+     with gr.Blocks(
+         fill_height=True,
+         title="AI Tutor 🤖",
+         analytics_enabled=True,
+     ) as demo:

-     memory_state = gr.State(
-         lambda: ChatSummaryMemoryBuffer.from_defaults(
-             token_limit=120000,
+         memory_state = gr.State(
+             lambda: ChatSummaryMemoryBuffer.from_defaults(
+                 token_limit=120000,
+             )
+         )
+         chatbot = gr.Chatbot(
+             scale=1,
+             placeholder="<strong>AI Tutor 🤖: A Question-Answering Bot for anything AI-related</strong><br>",
+             show_label=False,
+             show_copy_button=True,
          )
-     )
-     chatbot = gr.Chatbot(
-         scale=1,
-         placeholder="<strong>Towards AI 🤖: A Question-Answering Bot for anything AI-related</strong><br>",
-         show_label=False,
-         likeable=True,
-         show_copy_button=True,
-     )
-     chatbot.like(vote, None, None)

-     gr.ChatInterface(
-         fn=generate_completion,
-         chatbot=chatbot,
-         additional_inputs=[memory_state],
-     )
+         gr.ChatInterface(
+             fn=generate_completion,
+             chatbot=chatbot,
+             additional_inputs=[memory_state],
+         )
+
+         demo.queue(default_concurrency_limit=64)
+         demo.launch(debug=True, share=False)  # Set share=True to share the app online
+

  if __name__ == "__main__":
+     # Download the knowledge base if it doesn't exist
+     download_knowledge_base_if_not_exists()
+
+     # Set up llm and embedding model
      Settings.llm = OpenAI(temperature=0, model="gpt-4o-mini")
      Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small")
+
+     # launch the UI
+     launch_ui()
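
For quick verification of the refactored setup, here is a minimal sketch (not part of the commit) of how the same Chroma collection that `get_tools()` opens in the new `app.py` can be queried directly, without launching the Gradio UI. It assumes the dependencies from `requirements.txt` are installed, `OPENAI_API_KEY` is set in `.env`, and the knowledge base has already been downloaded to `data/ai_tutor_knowledge`; the top-k value and the test question are arbitrary choices for illustration.

```python
# Minimal sketch (assumes the knowledge base already sits in data/ai_tutor_knowledge
# and OPENAI_API_KEY is set in .env): query the Chroma collection used by app.py
# directly, without the Gradio UI.
import chromadb
from dotenv import load_dotenv
from llama_index.core import Settings, VectorStoreIndex
from llama_index.core.retrievers import VectorIndexRetriever
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.vector_stores.chroma import ChromaVectorStore

load_dotenv()  # make OPENAI_API_KEY visible to the embedding client
Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small")

# Open the persisted collection and wrap it as a LlamaIndex vector store
db = chromadb.PersistentClient(path="data/ai_tutor_knowledge")
chroma_collection = db.get_or_create_collection("ai_tutor_knowledge")
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
index = VectorStoreIndex.from_vector_store(vector_store, embed_model=Settings.embed_model)

# Retrieve the top 3 chunks for a test question and print short previews
retriever = VectorIndexRetriever(index=index, similarity_top_k=3)
for node in retriever.retrieve("What is retrieval-augmented generation?"):
    print(node.score, node.text[:120].replace("\n", " "))
```

If this prints a few relevant chunks, the download step and the Chroma path wiring used by `get_tools()` are working as expected.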
requirements.txt CHANGED
Binary files a/requirements.txt and b/requirements.txt differ