Commit 20808c6
Parent(s): 769b5b1

Update README.md and create realREADME.md with a detailed introduction to the agent, useful links, and a description of the architecture. Modify the interface to include a Markdown block and improve navigation.

Files changed:
- README.md +5 -2
- app.py +5 -8
- realREADME.md +64 -0
README.md
CHANGED

@@ -13,7 +13,10 @@ tags:
 
 # 🧙♂️ LLM Game Master Agent 🐉
 
-##
+
+## 🔗 Useful Links:
 - [Owlbear Rodeo Chat Interface](https://github.com/Agamador/OwlBear-llm-chat)
 - [MCP server](https://huggingface.co/spaces/Agents-MCP-Hackathon/LLM-GameMaster-MPC-Server)
 - [Video Demo](https://www.youtube.com/watch?v=8b1k2g3Z4aY)
@@ -30,7 +33,7 @@ The LLM Game Master Agent utilizes [LangGraph](https://github.com/langchain-ai/l
 
 The implementation uses LangGraph's `create_react_agent` function to create a reactive agent that can maintain conversation state, reason over multiple steps, and make informed decisions based on the complete tools execution trace.
 
-
+
 
 ## 🔌 MCP Client: Integration with External Tools
 
app.py
CHANGED

@@ -353,20 +353,17 @@ api_demo = gr.Interface(
     outputs="text", title="OwlBear Agent - Original API"
 )
 
-with open("
+with open("realREADME.md", "r", encoding="utf-8") as f:
     readme = f.read()
 
-
-
-
-    description="This interface introduces the Agent",
-    api_name=False,
-)
+# Create a Blocks container to hold the Markdown, instead of using Markdown directly
+with gr.Blocks() as intro_demo:
+    gr.Markdown(readme)
 
 # Combined interface with tabs
 combined_demo = gr.TabbedInterface(
     [intro_demo, demo, api_demo],
-    ["Complete View with History", "Original API"],
+    ["README", "Complete View with History", "Original API"],
     title="🧙🏼♂️ LLM Game Master - Agent"
 )
 
realREADME.md
ADDED
@@ -0,0 +1,64 @@
# 🧙♂️ LLM Game Master Agent 🐉

## 🧩 Complete Architecture Overview


## 🔗 Useful Links:
- [Owlbear Rodeo Chat Interface](https://github.com/Agamador/OwlBear-llm-chat)
- [MCP server](https://huggingface.co/spaces/Agents-MCP-Hackathon/LLM-GameMaster-MPC-Server)
- [Video Demo](https://www.youtube.com/watch?v=8b1k2g3Z4aY)

## 🌟 Introduction

The **LLM Game Master Agent** is a sophisticated AI system designed as a Game Master (GM) for solo medieval fantasy role-playing sessions. This cutting-edge application showcases the power of the LangGraph ReAct architecture combined with Model Context Protocol (MCP) technology, creating an immersive and highly adaptive gaming experience.

Unlike conventional chatbots, this intelligent agent generates dynamic and personalized narratives where YOU become the protagonist in epic fantasy stories. The application leverages state-of-the-art language models to deliver a gaming experience comparable to traditional sessions with a human Game Master, but with the added benefits of AI-powered adaptability and endless creative possibilities.

## 🧠 LangGraph ReAct: The System Core

The LLM Game Master Agent utilizes [LangGraph](https://github.com/langchain-ai/langgraph) as the central component of its architecture, implementing the ReAct pattern for complex task management.

The implementation uses LangGraph's `create_react_agent` function to create a reactive agent that can maintain conversation state, reason over multiple steps, and make informed decisions based on the complete tools execution trace.


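As a rough illustration of this pattern (a minimal sketch, not the repository's actual code: the model name, system prompt, and empty tool list are placeholders), a ReAct agent can be assembled in a few lines:

```python
# Minimal sketch, not the project's actual configuration: the model name,
# system prompt, and empty tool list are illustrative placeholders.
from langchain_anthropic import ChatAnthropic
from langgraph.prebuilt import create_react_agent

llm = ChatAnthropic(model="claude-3-opus-20240229")  # hypothetical model choice

# The prebuilt ReAct graph keeps the full message/tool trace in its state,
# so every reasoning step sees the complete execution history.
agent = create_react_agent(llm, tools=[])  # real tools come from the MCP server

result = agent.invoke({
    "messages": [
        ("system", "You are a Game Master running a solo medieval fantasy session."),
        ("user", "I push open the tavern door and look around."),
    ]
})
print(result["messages"][-1].content)
```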

## 🔌 MCP Client: Integration with External Tools

The system implements a [Model Context Protocol (MCP)](https://github.com/microsoft/model-context-protocol) client that connects to an external MCP server. This client-server architecture allows the agent to access specialized gaming tools without implementing them directly in the codebase.

The implementation uses MCP-specific adapters for LangChain that facilitate communication between the agent and the tools server.

This architecture separates the agent logic from the tool implementation, making the system more modular and easier to maintain. The agent can invoke tools as needed through the MCP connection, while focusing on its core narrative generation and decision-making capabilities.
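A hedged sketch of how such a client could be wired up, assuming the `langchain-mcp-adapters` package and a placeholder SSE endpoint (the real agent talks to the MCP server Space linked above):

```python
# Illustrative only: the server URL, connection name, and model are placeholders,
# and the exact adapter API depends on the langchain-mcp-adapters version.
import asyncio

from langchain_anthropic import ChatAnthropic
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent


async def main():
    # Connect to the external MCP server and expose its tools as LangChain tools.
    client = MultiServerMCPClient({
        "game_master": {
            "url": "https://example-mcp-server.hf.space/mcp/sse",  # placeholder URL
            "transport": "sse",
        }
    })
    tools = await client.get_tools()

    # The agent never implements the tools itself; it only calls them over MCP.
    agent = create_react_agent(ChatAnthropic(model="claude-3-opus-20240229"), tools)
    result = await agent.ainvoke(
        {"messages": [("user", "Roll initiative for the goblin ambush.")]}
    )
    print(result["messages"][-1].content)


asyncio.run(main())
```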

## 🤖 Language Model Orchestration

The system uses [LangChain](https://www.langchain.com/) to orchestrate language models, offering compatibility with:

- **Anthropic Claude**: Claude 3 models via API
- **Ollama**: Local deployment of models for self-hosted scenarios

This flexibility allows selecting the most suitable model based on performance requirements and availability.
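A minimal sketch of what that selection could look like (the model names and the environment-variable check are assumptions for illustration, not the app's actual logic):

```python
# Hypothetical backend selection; the app's real configuration may differ.
import os

from langchain_anthropic import ChatAnthropic
from langchain_ollama import ChatOllama


def get_llm():
    """Use Claude via API when a key is present, otherwise a local Ollama model."""
    if os.getenv("ANTHROPIC_API_KEY"):
        return ChatAnthropic(model="claude-3-opus-20240229")
    return ChatOllama(model="llama3.1")  # assumes a locally running Ollama server


llm = get_llm()
print(llm.invoke("Describe the tavern in one sentence.").content)
```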

## 🖥️ Gradio User Interface

The application features a complete web interface built with Gradio, offering two main views:

1. **Complete View with History**: Shows the conversation along with detailed execution tracking, including tool and agent messages.
2. **Original API**: A simpler interface for API access.

The interface includes features for:

- Tracking multiple sessions via tab IDs
- Detailed visualization of tool calls and their results
- Session management controls
- API key configuration

The execution history provides complete transparency into the agent's decision-making process, showing each step of the interaction between the user, agent, and tools.
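The tab wiring mirrors the app.py change shown above; a self-contained sketch (with placeholder `demo` and `api_demo` interfaces standing in for the real chat and API views) looks roughly like this:

```python
# Stripped-down sketch of the tab layout; `demo` and `api_demo` are placeholders
# for the chat-with-history and API interfaces defined elsewhere in app.py.
import gradio as gr

with open("realREADME.md", "r", encoding="utf-8") as f:
    readme = f.read()

# Wrap the Markdown in a Blocks container so it can be mounted as a tab.
with gr.Blocks() as intro_demo:
    gr.Markdown(readme)

demo = gr.Interface(fn=lambda msg: msg, inputs="text", outputs="text")      # placeholder
api_demo = gr.Interface(fn=lambda msg: msg, inputs="text", outputs="text")  # placeholder

combined_demo = gr.TabbedInterface(
    [intro_demo, demo, api_demo],
    ["README", "Complete View with History", "Original API"],
    title="🧙🏼♂️ LLM Game Master - Agent",
)

if __name__ == "__main__":
    combined_demo.launch()
```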

## 🔗 Links & Resources

- [LangGraph Documentation](https://github.com/langchain-ai/langgraph)
- [Model Context Protocol (MCP)](https://github.com/microsoft/model-context-protocol)
- [LangChain](https://www.langchain.com/)
- [Gradio](https://www.gradio.app/)