Javier-Jimenez99 committed on
Commit ad1dc01 · 1 Parent(s): 013a28b

Update the reading of the README.md file and remove the Hugging Face metadata section if present.

Files changed (2)
  1. app.py +6 -1
  2. realREADME.md +0 -69
app.py CHANGED
@@ -353,8 +353,13 @@ api_demo = gr.Interface(
     outputs="text", title="OwlBear Agent - Original API"
 )
 
-with open("realREADME.md", "r", encoding="utf-8") as f:
+with open("README.md", "r", encoding="utf-8") as f:
     readme = f.read()
+# Remove the Hugging Face metadata section if present
+if readme.startswith("---"):
+    parts = readme.split("---", 2)
+    if len(parts) >= 3:
+        readme = parts[2]
 
 # Create a block to hold the Markdown, instead of using Markdown directly
 with gr.Blocks() as intro_demo:
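The Space's README begins with a YAML metadata block delimited by `---` lines, so the added code drops everything up to the closing delimiter before rendering the rest as Markdown. A minimal standalone sketch of the same logic (the `strip_hf_metadata` helper name and the sample string are illustrative, not part of the commit):

```python
def strip_hf_metadata(readme: str) -> str:
    """Drop a leading Hugging Face YAML metadata block, if present."""
    if readme.startswith("---"):
        # split produces: text before the first "---" (empty here),
        # the metadata block itself, and the remainder of the document.
        parts = readme.split("---", 2)
        if len(parts) >= 3:
            return parts[2]
    return readme


sample = "---\ntitle: OwlBear Agent\nsdk: gradio\n---\n# Intro\nBody text."
print(strip_hf_metadata(sample))  # -> "\n# Intro\nBody text."
```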
realREADME.md DELETED
@@ -1,69 +0,0 @@
-# 🧙‍♂️ LLM Game Master Agent 🐉
-
-## 🎥 Video Demo
-<iframe width="560" height="315" src="https://www.youtube.com/embed/SlbW-kjekBg?si=r6x8GeVnKLipriZL" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
-
-Video Demo: [https://www.youtube.com/watch?v=8b1k2g3Z4aY](https://www.youtube.com/watch?v=8b1k2g3Z4aY)
-
-## 🧩 Complete Architecture Overview
-![Architecture Overview](https://huggingface.co/spaces/Agents-MCP-Hackathon/LLM-GameMaster-Agent/resolve/main/media/architecture.png)
-
-## 🔗 Useful Links:
-- [Owlbear Rodeo Chat Interface](https://github.com/Agamador/OwlBear-llm-chat)
-- [MCP server](https://huggingface.co/spaces/Agents-MCP-Hackathon/LLM-GameMaster-MPC-Server)
-- [Video Demo](https://www.youtube.com/watch?v=8b1k2g3Z4aY)
-
-## 🌟 Introduction
-
-The **LLM Game Master Agent** is a sophisticated AI system designed as a Game Master (GM) for solo medieval fantasy role-playing sessions. This cutting-edge application showcases the power of LangGraph React architecture combined with Model Context Protocol (MCP) technology, creating an immersive and highly adaptive gaming experience unlike anything seen before.
-
-Unlike conventional chatbots, this intelligent agent generates dynamic and personalized narratives where YOU become the protagonist in epic fantasy stories. The application leverages state-of-the-art language models to deliver a gaming experience comparable to traditional sessions with a human Game Master, but with the added benefits of AI-powered adaptability and endless creative possibilities.
-
-## 🧠 LangGraph React: The System Core
-
-The LLM Game Master Agent utilizes [LangGraph](https://github.com/langchain-ai/langgraph) as the central component of its architecture, implementing the React pattern for complex task management.
-
-The implementation uses LangGraph's `create_react_agent` function to create a reactive agent that can maintain conversation state, reason over multiple steps, and make informed decisions based on the complete tools execution trace.
-
-![React Agent Diagram](https://huggingface.co/spaces/Agents-MCP-Hackathon/LLM-GameMaster-Agent/resolve/main/media/reactAgent.png)
-
-## 🔌 MCP Client: Integration with External Tools
-
-The system implements a [Model Context Protocol (MCP)](https://github.com/microsoft/model-context-protocol) client that connects to an external MCP server. This client-server architecture allows the agent to access specialized gaming tools without implementing them directly in the codebase.
-
-The implementation uses MCP-specific adapters for LangChain that facilitate communication between the agent and the tools server.
-
-This architecture separates the agent logic from the tool implementation, making the system more modular and easier to maintain. The agent can invoke tools as needed through the MCP connection, while focusing on its core narrative generation and decision-making capabilities.
-
-## 🤖 Language Model Orchestration
-
-The system uses [LangChain](https://www.langchain.com/) to orchestrate language models, offering compatibility with:
-
-- **Anthropic Claude**: Claude 3 models via API
-- **Ollama**: Local deployment of models for self-hosted scenarios
-
-This flexibility allows selecting the most suitable model based on performance requirements and availability.
-
-## 🖥️ Gradio User Interface
-
-The application features a complete web interface built with Gradio, offering two main views:
-
-1. **Complete View with History**: Shows the conversation along with detailed execution tracking including tools and agent messages.
-2. **Original API**: A simpler interface for API access.
-
-The interface includes features for:
-
-- Tracking multiple sessions via tab IDs
-- Detailed visualization of tool calls and their results
-- Session management controls
-- API key configuration
-
-The execution history provides complete transparency into the agent's decision-making process, showing each step of the interaction between the user, agent, and tools.
-
-
-## 🔗 Links & Resources
-
-- [LangGraph Documentation](https://github.com/langchain-ai/langgraph)
-- [Model Context Protocol (MCP)](https://github.com/microsoft/model-context-protocol)
-- [LangChain](https://www.langchain.com/)
-- [Gradio](https://www.gradio.app/)
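The deleted README describes the agent wiring in prose: LangGraph's `create_react_agent` driven by tools fetched from an external MCP server through LangChain's MCP adapters. A minimal sketch of that pattern, assuming the `langgraph`, `langchain-anthropic`, and `langchain-mcp-adapters` packages; the server URL, transport, and model id below are placeholders, not values from this repository:

```python
import asyncio

from langchain_anthropic import ChatAnthropic
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent


async def main() -> None:
    # Connect to an external MCP server and load its tool definitions
    # (exact client API varies between adapter versions).
    client = MultiServerMCPClient(
        {
            "gamemaster": {
                "url": "https://example.com/mcp/sse",  # placeholder server URL
                "transport": "sse",
            }
        }
    )
    tools = await client.get_tools()

    # ReAct-style agent: the model reasons over the conversation state and
    # decides when to call the MCP tools exposed above.
    model = ChatAnthropic(model="claude-3-5-sonnet-latest")  # placeholder model id
    agent = create_react_agent(model, tools)

    result = await agent.ainvoke(
        {"messages": [("user", "Start a new adventure in a ruined keep.")]}
    )
    print(result["messages"][-1].content)


if __name__ == "__main__":
    asyncio.run(main())
```

Keeping the tools behind MCP, as the README notes, lets the tool implementations change on the server without touching the agent code.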