Upload folder using huggingface_hub
- .gitignore +53 -0
- DEPLOYMENT.md +85 -0
- README.md +165 -8
- app.py +26 -0
- clarifier_agent.py +23 -0
- deep_research.py +619 -0
- email_agent.py +37 -0
- env_example.txt +14 -0
- evaluator_agent.py +84 -0
- metadata.json +22 -0
- planner_agent.py +28 -0
- requirements.txt +11 -0
- research_manager.py +506 -0
- search_agent.py +23 -0
- writer_agent.py +39 -0
.gitignore
ADDED
@@ -0,0 +1,53 @@
# Environment variables
.env
.env.local
.env.production

# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg

# Gradio
.gradio/
gradio_cached_examples/

# Logs
*.log

# Database
*.db
*.sqlite3

# IDE
.vscode/
.idea/
*.swp
*.swo

# OS
.DS_Store
Thumbs.db

# Temporary files
*.tmp
*.temp
DEPLOYMENT.md
ADDED
@@ -0,0 +1,85 @@
# 🚀 Deployment Guide for Hugging Face Spaces

## Quick Deploy to Hugging Face Spaces

### Option 1: Direct Upload
1. **Create a new Space** on [Hugging Face Spaces](https://huggingface.co/spaces)
2. **Choose "Gradio" as the SDK**
3. **Upload these files** from the `deep_research` folder:
   - `app.py`
   - `deep_research.py`
   - `requirements.txt`
   - `README.md`
   - `metadata.json`
   - All the agent files (`*_agent.py`, `research_manager.py`)

### Option 2: Git Repository
1. **Create a new repository** or fork this one
2. **Copy the `deep_research` folder contents** to the root of your repository
3. **Create a new Space** and connect it to your repository

## Environment Configuration

In your Hugging Face Space settings, add these secrets:

### Required
- `OPENAI_API_KEY`: Your OpenAI API key

### Optional (for email functionality)
- `SENDGRID_API_KEY`: Your SendGrid API key
- `SENDGRID_FROM_EMAIL`: Your verified sender email

## Files Structure for Deployment

```
your-space/
├── app.py                 # Main entry point for HF Spaces
├── deep_research.py       # Core application logic
├── requirements.txt       # Python dependencies
├── README.md              # Space documentation
├── metadata.json          # HF Spaces configuration
├── research_manager.py    # Research orchestration
├── clarifier_agent.py     # Clarification agent
├── planner_agent.py       # Planning agent
├── search_agent.py        # Search agent
├── writer_agent.py        # Writing agent
├── evaluator_agent.py     # Quality evaluation agent
├── email_agent.py         # Email delivery agent
├── .gitignore             # Git ignore rules
└── env_example.txt        # Environment variables template
```

## Testing Your Deployment

1. **Local Testing**: Run `python app.py` to test locally
2. **Check Dependencies**: Ensure all imports work correctly
3. **Environment Variables**: Test with your actual API keys
4. **Gradio Interface**: Verify the UI loads and functions work

## Common Issues & Solutions

### Import Errors
- Make sure all agent files are in the same directory
- Verify the `openai-agents` package is installed correctly

### API Key Issues
- Check that environment variables are set correctly in HF Spaces
- Ensure your OpenAI API key has sufficient credits

### Email Functionality
- Email features are optional and will be disabled if SendGrid isn't configured
- Verify your SendGrid sender email is verified

## Performance Tips

- The app uses the OpenAI Agents framework; complex research runs can take 1-2 minutes
- Consider upgrading to a paid HF Spaces plan for better performance
- Monitor usage to avoid API rate limits

## Support

If you encounter issues:
1. Check the Space logs in Hugging Face
2. Verify all environment variables are set
3. Test locally first to isolate the issue
4. Check OpenAI API status and quotas
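As a companion to the environment configuration above, a minimal startup check could look like the sketch below. This is illustrative only and not part of the commit; the variable names come from `env_example.txt`, and the warning mirrors the note that email features are disabled when SendGrid is not configured.

```python
import os

# Required for any research run
if not os.environ.get("OPENAI_API_KEY"):
    raise RuntimeError("OPENAI_API_KEY is not set - add it as a Space secret or to .env")

# Optional: email delivery is disabled when SendGrid is not fully configured
if not os.environ.get("SENDGRID_API_KEY") or not os.environ.get("SENDGRID_FROM_EMAIL"):
    print("SendGrid not fully configured - reports will be shown in the UI only")
```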
README.md
CHANGED
@@ -1,12 +1,169 @@
---
-title:
-emoji: 😻
-colorFrom: red
-colorTo: indigo
-sdk: gradio
-sdk_version: 5.34.2
title: Deep_Research_Assistant
app_file: app.py
sdk: gradio
sdk_version: 5.29.0
---
# 🔍 Deep Research Assistant

[](https://huggingface.co/spaces)
[](https://gradio.app)

A comprehensive AI-powered research assistant that delivers high-quality, well-researched reports with built-in quality assurance and email delivery capabilities.

## 🚀 Features

### 🤖 Enhanced AI Research System
- **Quality Evaluation**: Every report is automatically assessed for completeness, accuracy, and clarity
- **Smart Optimization**: Reports scoring below 7/10 are automatically improved
- **Multi-Strategy Search**: Uses multiple search approaches for comprehensive coverage
- **Email Delivery**: Optional email delivery of research reports

### 🎯 Research Modes

1. **🚀 Interactive Research with Clarification** (Recommended)
   - Generates clarifying questions to focus your research
   - Provides more targeted and relevant results
   - Uses the enhanced quality assurance pipeline

2. **🤖 Enhanced Direct Research**
   - Advanced AI system with automatic quality evaluation
   - Iterative improvement when needed
   - Full traceability with OpenAI traces

3. **⚡ Quick Research**
   - Fast research for simple queries
   - Legacy compatibility mode
   - Good for straightforward questions

## 🛠️ Setup

### Environment Variables

You'll need to set up the following environment variables:

```bash
# Required - OpenAI API for research
OPENAI_API_KEY=your_openai_api_key_here

# Optional - SendGrid for email delivery
SENDGRID_API_KEY=your_sendgrid_api_key_here
SENDGRID_FROM_EMAIL=your_verified_sender_email@example.com
```

### For Hugging Face Spaces Deployment

1. **Fork this space** or create a new one
2. **Add your secrets** in the Space settings:
   - `OPENAI_API_KEY`: Your OpenAI API key
   - `SENDGRID_API_KEY`: Your SendGrid API key (optional)
   - `SENDGRID_FROM_EMAIL`: Your verified sender email (optional)
3. **Deploy** - The Space will automatically install dependencies and launch

### For Local Development

1. **Clone the repository**:
```bash
git clone <your-repo-url>
cd deep_research
```

2. **Install dependencies**:
```bash
pip install -r requirements.txt
```

3. **Set up environment variables**:
```bash
cp env_example.txt .env
# Edit .env with your API keys
```

4. **Run the application**:
```bash
python app.py
```

## 📊 Quality Assurance System

Our enhanced research system includes automatic quality evaluation:

### Evaluation Criteria
- **Completeness**: How thoroughly the query is addressed
- **Accuracy**: Factual correctness and source reliability
- **Clarity**: Writing quality and organization
- **Depth**: Analysis depth and insight quality
- **Relevance**: Content alignment with the original query

### Scoring Scale
- **9-10**: Excellent (no refinement needed)
- **7-8**: Good (minor improvements)
- **5-6**: Adequate (refinement recommended)
- **1-4**: Poor (automatic refinement triggered)

## 🎮 How to Use

1. **Enter Your Research Query**: Describe what you want to research
2. **Configure Email (Optional)**: Set up email delivery if desired
3. **Choose Research Mode**:
   - Click "🚀 Start Research" for interactive clarification mode
   - Use "🤖 Enhanced Research" for direct advanced research
   - Use "⚡ Quick Research" for fast results

4. **Get Results**:
   - View the comprehensive research report
   - Receive email delivery (if configured)
   - Access detailed trace logs for transparency

## 🔧 Technical Architecture

Built with:
- **Frontend**: Gradio for the interactive web interface
- **Backend**: OpenAI Agents framework for a modular AI system
- **Quality Assurance**: Automated evaluation and optimization pipeline
- **Email**: SendGrid integration for report delivery
- **Tracing**: OpenAI trace integration for full transparency

### Agent-Based Architecture

The system uses specialized AI agents:
- **Research Manager**: Orchestrates the entire research process
- **Planner Agent**: Creates strategic search plans
- **Search Agent**: Performs web searches
- **Writer Agent**: Generates comprehensive reports
- **Evaluator Agent**: Assesses report quality
- **Optimizer Agent**: Improves reports when needed
- **Email Agent**: Handles report delivery

## 📝 Example Queries

Try these example research queries:

- "Latest developments in renewable energy storage technology"
- "Impact of AI on healthcare industry in 2024"
- "Sustainable urban planning strategies for climate change"
- "Cybersecurity trends and threats in financial services"
- "Electric vehicle market analysis and future projections"

## 🤝 Contributing

We welcome contributions! Areas for improvement:
- Additional research sources and tools
- Enhanced evaluation criteria
- New output formats
- UI/UX improvements
- Performance optimizations

## 📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

## 🙋‍♀️ Support

- **Issues**: Report bugs or request features via GitHub Issues
- **Documentation**: Check out the enhanced README in the repository
- **Trace Logs**: Use the provided trace IDs to debug research processes

---

**Built with ❤️ using OpenAI Agents, Gradio, and modern AI research techniques.**
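For programmatic use outside the Gradio UI, the legacy manager can be driven directly. A minimal sketch (illustrative, not part of this commit; it assumes the modules above are on the import path and `OPENAI_API_KEY` is set):

```python
import asyncio

from research_manager import ResearchManager

async def main():
    # ResearchManager.run() is an async generator: it yields status updates
    # (including the trace URL) and finally the finished report text.
    async for update in ResearchManager().run("Latest developments in renewable energy storage"):
        print(update)

asyncio.run(main())
```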
app.py
ADDED
@@ -0,0 +1,26 @@
#!/usr/bin/env python3
"""
Deep Research Assistant - Hugging Face Spaces Deployment
A comprehensive AI-powered research assistant with quality assurance.
"""

import os
import gradio as gr
from dotenv import load_dotenv
import spaces

# Load environment variables
load_dotenv(override=True)

# Import the main interface from deep_research.py
from deep_research import ui

if __name__ == "__main__":
    # Launch the interface
    # For Hugging Face Spaces, we don't need inbrowser=True and we want to set share=False
    ui.launch(
        server_name="0.0.0.0",  # Required for Hugging Face Spaces
        server_port=7860,  # Standard port for Gradio on HF Spaces
        share=False,  # Don't create public links on HF Spaces
        inbrowser=False  # Don't try to open browser on server
    )
clarifier_agent.py
ADDED
@@ -0,0 +1,23 @@
from typing import List
from pydantic import BaseModel
from agents import Agent

class ClarificationData(BaseModel):
    questions: List[str]

CLARIFY_INSTRUCTIONS = """
You are a Research Clarifier. Given a user's research query, generate exactly 3 clarifying questions
that will help focus and refine the research. These questions should help understand:
1. The specific aspect or angle they want to focus on
2. The depth or scope of research needed
3. The intended use or audience for the research

Return your response as JSON matching the ClarificationData model with exactly 3 questions.
"""

clarifier_agent = Agent(
    name="ClarifierAgent",
    instructions=CLARIFY_INSTRUCTIONS,
    model="gpt-4o-mini",
    output_type=ClarificationData,
)
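A short usage sketch for the clarifier, mirroring how `research_manager.py` invokes it (illustrative, not part of the commit):

```python
import asyncio

from agents import Runner
from clarifier_agent import clarifier_agent, ClarificationData

async def demo():
    result = await Runner.run(clarifier_agent, "Query: Impact of AI on healthcare in 2024")
    # The structured output is parsed back into the ClarificationData model
    questions = result.final_output_as(ClarificationData).questions
    for q in questions:
        print("-", q)

asyncio.run(demo())
```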
deep_research.py
ADDED
@@ -0,0 +1,619 @@
import gradio as gr
from dotenv import load_dotenv
from research_manager import ResearchManager, ResearchManagerAgent
from agents import Runner, trace, gen_trace_id

load_dotenv(override=True)

async def handle_query_submission(query: str, current_state: dict):
    """Handle initial query submission - generate clarifying questions with progress"""
    if not query.strip():
        return "Please enter a research query.", gr.update(visible=False), gr.update(visible=False), current_state

    try:
        # Show progress
        progress_update = "🔄 **Generating clarifying questions...**\n\nPlease wait while our AI analyzes your query and creates focused questions to improve the research quality."

        research_manager = ResearchManager()
        result = await research_manager.run_with_clarification(query)

        # Format questions for display
        questions_text = "\n\n".join([f"**{i+1}.** {q}" for i, q in enumerate(result["questions"])])
        display_text = f"**✅ Clarifying Questions Generated:**\n\n{questions_text}\n\n**Please answer these questions to help focus the research:**"

        # Update state with query and questions
        new_state = {
            "query": query,
            "questions": result["questions"],
            "trace_id": result["trace_id"]
        }

        return display_text, gr.update(visible=True), gr.update(visible=True), new_state

    except Exception as e:
        return f"❌ Error generating clarifying questions: {str(e)}", gr.update(visible=False), gr.update(visible=False), current_state

async def handle_research_with_answers(answers: str, current_state: dict, email_address: str, send_email: bool):
    """Handle research execution with clarification answers with progress updates"""
    if not current_state.get("query"):
        return "Please start by entering a research query first.", current_state

    if not answers.strip():
        return "Please provide answers to the clarifying questions.", current_state

    try:
        # Show progress
        progress_message = f"""🔄 **Research in Progress...**

**Original Query:** {current_state['query']}

**Status:** Processing your clarifications and starting comprehensive research...

⏳ This may take 1-2 minutes. We're:
1. Planning search strategy
2. Conducting multiple web searches
3. Writing initial report
4. Evaluating quality
5. Optimizing if needed
6. Preparing final delivery"""

        # Use the enhanced manager with email settings
        from research_manager import create_custom_research_agent

        # Parse answers (one per line)
        answer_list = [line.strip() for line in answers.split('\n') if line.strip()]

        # Format the query with clarifications
        clarified_query = f"""Original query: {current_state['query']}

Clarifications provided:
{chr(10).join([f"{i+1}. {answer}" for i, answer in enumerate(answer_list)])}

Please use these clarifications to focus and refine the research approach."""

        # Create custom agent with email settings
        custom_agent = create_custom_research_agent(
            email_address=email_address if send_email else None,
            send_email=send_email
        )

        # Run research with custom agent
        trace_id = gen_trace_id()
        with trace("Focused Research with Clarifications", trace_id=trace_id):
            result = await Runner.run(
                custom_agent,
                f"Research Query: {clarified_query}"
            )

        email_status = ""
        if send_email and email_address:
            email_status = f"\n📧 **Email sent to:** {email_address}"
        elif send_email and not email_address:
            email_status = f"\n⚠️ **Email not sent:** No email address provided"
        else:
            email_status = f"\n📄 **Report generated:** Email sending disabled"

        final_report = f"""**✅ Research Complete!**

**🔗 Trace ID:** {trace_id}

**Original Query:** {current_state['query']}

**📊 Enhanced Final Report:**

{result.final_output}

{email_status}

---
*Research completed using enhanced AI system with quality assurance and your clarifications.*"""

        return final_report, current_state

    except Exception as e:
        return f"❌ Error during research: {str(e)}", current_state

async def run_direct_research(query: str, email_address: str = "", send_email: bool = False):
    """Run research directly without clarification using the new agent-based system"""
    if not query.strip():
        return "Please enter a research query."

    try:
        trace_id = gen_trace_id()
        with trace("Enhanced Research Manager", trace_id=trace_id):
            print(f"🔗 Starting enhanced research with trace: {trace_id}")

            # Import the function here to avoid circular imports
            from research_manager import create_custom_research_agent

            # Create agent with email settings
            custom_agent = create_custom_research_agent(
                email_address=email_address if send_email else None,
                send_email=send_email
            )

            # Use the custom agent
            result = await Runner.run(
                custom_agent,
                f"Research Query: {query}"
            )

        email_status = ""
        if send_email and email_address:
            email_status = f"\n📧 **Email sent to:** {email_address}"
        elif send_email and not email_address:
            email_status = f"\n⚠️ **Email not sent:** No email address provided"
        else:
            email_status = f"\n📄 **Report generated:** Email sending disabled"

        return f"""**✅ Research Complete!**

**🔗 Trace ID:** {trace_id}
**👀 View Detailed Trace:** https://platform.openai.com/traces/trace?trace_id={trace_id}

**📊 Enhanced Research Report with Quality Assurance:**

{result.final_output}

{email_status}

---
*🤖 This research was conducted using our enhanced agent-based system with automatic quality evaluation and optimization. Check the trace link above to see the full workflow including planning, searching, writing, evaluation, and optimization steps.*"""

    except Exception as e:
        import traceback
        error_details = traceback.format_exc()
        print(f"Error details: {error_details}")
        return f"❌ Error during research: {str(e)}\n\nPlease try the Legacy Quick Research option if this persists."

async def run_legacy_research(query: str, email_address: str, send_email: bool):
    """Run research using the original ResearchManager class with email options"""
    if not query.strip():
        return "Please enter a research query."

    try:
        # Use the enhanced system but call it "legacy" for the user
        trace_id = gen_trace_id()
        with trace("Quick Research", trace_id=trace_id):
            from research_manager import create_custom_research_agent

            # Create agent with email settings
            custom_agent = create_custom_research_agent(
                email_address=email_address if send_email else None,
                send_email=send_email
            )

            result = await Runner.run(
                custom_agent,
                f"Research Query: {query}"
            )

        email_status = ""
        if send_email and email_address:
            email_status = f"\n📧 **Email sent to:** {email_address}"
        elif send_email and not email_address:
            email_status = f"\n⚠️ **Email not sent:** No email address provided"
        else:
            email_status = f"\n📄 **Report generated:** Email sending disabled"

        return f"""**✅ Quick Research Complete!**

**🔗 Trace ID:** {trace_id}

**📊 Research Report:**

{result.final_output}

{email_status}

---
*Quick research completed successfully.*"""

    except Exception as e:
        return f"❌ Error during research: {str(e)}"

async def run_enhanced_research_with_progress(query: str, email_address: str = "", send_email: bool = False):
    """Run enhanced research with real-time step-by-step progress updates"""
    if not query.strip():
        yield "Please enter a research query."
        return

    # Import the new progress function
    from research_manager import run_research_with_progress

    try:
        # Collect all progress updates
        progress_updates = []
        async for update in run_research_with_progress(
            query=query,
            email_address=email_address if send_email else None,
            send_email=send_email
        ):
            progress_updates.append(update)
            # Return current progress to update the UI
            yield "\n\n".join(progress_updates)

    except Exception as e:
        import traceback
        error_details = traceback.format_exc()
        print(f"Error details: {error_details}")
        yield f"❌ Error during research: {str(e)}\n\nPlease try a different approach if this persists."

async def run_clarified_research_with_progress(answers: str, current_state: dict, email_address: str, send_email: bool):
    """Handle research execution with clarification answers and real-time progress"""
    if not current_state.get("query"):
        yield "Please start by entering a research query first."
        return

    if not answers.strip():
        yield "Please provide answers to the clarifying questions."
        return

    # Import the new progress function
    from research_manager import run_research_with_progress

    try:
        # Parse answers (one per line)
        answer_list = [line.strip() for line in answers.split('\n') if line.strip()]

        # Format the query with clarifications
        clarified_query = f"""Original query: {current_state['query']}

Clarifications provided:
{chr(10).join([f"{i+1}. {answer}" for i, answer in enumerate(answer_list)])}

Please use these clarifications to focus and refine the research approach."""

        # Show initial setup
        yield f"🚀 **Starting Focused Research with Clarifications**\n\n**Original Query:** {current_state['query']}\n\n**Your Clarifications:**\n{chr(10).join([f'• {answer}' for answer in answer_list if answer])}\n\n---\n\n"

        # Collect all progress updates
        progress_updates = [f"🚀 **Starting Focused Research with Clarifications**\n\n**Original Query:** {current_state['query']}\n\n**Your Clarifications:**\n{chr(10).join([f'• {answer}' for answer in answer_list if answer])}\n\n---\n\n"]

        async for update in run_research_with_progress(
            query=clarified_query,
            email_address=email_address if send_email else None,
            send_email=send_email
        ):
            progress_updates.append(update)
            # Return current progress to update the UI
            yield "\n\n".join(progress_updates)

    except Exception as e:
        yield f"❌ Error during research: {str(e)}"

# Custom CSS for better readability and contrast
custom_css = """
/* Main container improvements */
.gradio-container {
    font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif !important;
}

/* Ensure good contrast for all text inputs */
.gradio-container input[type="text"],
.gradio-container textarea {
    background-color: #4b5563 !important;
    border: 2px solid #6b7280 !important;
    border-radius: 8px !important;
    padding: 12px !important;
    font-size: 14px !important;
    color: #f9fafb !important;
    font-weight: 400 !important;
    line-height: 1.5 !important;
    transition: border-color 0.2s ease !important;
}

.gradio-container input[type="text"]:focus,
.gradio-container textarea:focus {
    border-color: #60a5fa !important;
    box-shadow: 0 0 0 3px rgba(96, 165, 250, 0.2) !important;
    outline: none !important;
}

/* Placeholder styling for all inputs */
.gradio-container input[type="text"]::placeholder,
.gradio-container textarea::placeholder {
    color: #9ca3af !important;
    opacity: 0.8 !important;
    font-style: italic !important;
}

/* Simple button styling with good contrast */
.gradio-container button {
    border-radius: 8px !important;
    font-weight: 500 !important;
    font-size: 14px !important;
    padding: 8px 16px !important;
    border: 2px solid transparent !important;
    transition: all 0.2s ease !important;
}

/* Primary buttons */
button[variant="primary"] {
    background-color: #3b82f6 !important;
    color: white !important;
    border-color: #3b82f6 !important;
}

button[variant="primary"]:hover {
    background-color: #2563eb !important;
    border-color: #2563eb !important;
}

/* Secondary buttons */
button[variant="secondary"] {
    background-color: #f8fafc !important;
    color: #374151 !important;
    border-color: #d1d5db !important;
}

button[variant="secondary"]:hover {
    background-color: #f1f5f9 !important;
    border-color: #9ca3af !important;
}

/* Simple section styling */
.clarification-section {
    background-color: #374151 !important;
    border: 2px solid #4b5563 !important;
    border-radius: 12px !important;
    padding: 20px !important;
    margin: 16px 0 !important;
    color: #ffffff !important;
}

.clarification-section * {
    color: #ffffff !important;
}

.clarification-section h1,
.clarification-section h2,
.clarification-section h3 {
    color: #ffffff !important;
    font-weight: 600 !important;
}

/* Clean answer box */
.answer-textbox {
    background-color: #4b5563 !important;
    border: 2px solid #6b7280 !important;
    border-radius: 8px !important;
    padding: 12px !important;
    color: #d1d5db !important;
    line-height: 1.5 !important;
}

.answer-textbox:focus {
    border-color: #60a5fa !important;
    box-shadow: 0 0 0 3px rgba(96, 165, 250, 0.2) !important;
}

/* Target the actual textarea element inside answer-textbox */
.answer-textbox textarea {
    background-color: #4b5563 !important;
    color: #f9fafb !important;
    border: 2px solid #6b7280 !important;
    border-radius: 8px !important;
    padding: 12px !important;
    font-size: 14px !important;
    font-weight: 400 !important;
    line-height: 1.5 !important;
}

.answer-textbox textarea:focus {
    border-color: #60a5fa !important;
    box-shadow: 0 0 0 3px rgba(96, 165, 250, 0.2) !important;
}

/* Make sure placeholder text is visible on dark background */
.answer-textbox textarea::placeholder {
    color: #9ca3af !important;
    opacity: 0.8 !important;
    font-style: italic !important;
}

/* Make all textareas have proper white text */
.gradio-container textarea {
    color: #f9fafb !important;
}

.answer-textbox::placeholder {
    color: #9ca3af !important;
    opacity: 0.9 !important;
}

/* Results display with better contrast */
.results-display {
    background-color: #374151 !important;
    border: 2px solid #4b5563 !important;
    border-radius: 8px !important;
    padding: 16px !important;
    margin: 12px 0 !important;
    color: #ffffff !important;
    line-height: 1.6 !important;
}

/* Make sure markdown in results display also has white text */
.results-display * {
    color: #ffffff !important;
}

/* Style links in results display for visibility */
.results-display a {
    color: #60a5fa !important;
    text-decoration: underline !important;
}

.results-display a:hover {
    color: #93c5fd !important;
}

/* Accordion improvements */
.gradio-accordion {
    border: 1px solid #e5e7eb !important;
    border-radius: 8px !important;
    margin: 8px 0 !important;
}

/* Status indicators with good contrast */
.status-success {
    color: #059669 !important;
    font-weight: 500 !important;
}

.status-info {
    color: #0369a1 !important;
    font-weight: 500 !important;
}

.status-warning {
    color: #d97706 !important;
    font-weight: 500 !important;
}

/* Simple headers */
h1, h2, h3 {
    color: #ffffff !important;
    font-weight: 600 !important;
}

/* Remove unnecessary gradients and shadows for simplicity */
* {
    box-shadow: none !important;
}

/* Keep only essential shadows for depth */
.gradio-container button,
.gradio-container input,
.gradio-container textarea {
    box-shadow: 0 1px 3px rgba(0, 0, 0, 0.1) !important;
}

.gradio-container button:hover {
    box-shadow: 0 2px 6px rgba(0, 0, 0, 0.15) !important;
}
"""

with gr.Blocks(theme=gr.themes.Default(primary_hue="blue"), css=custom_css) as ui:
    gr.Markdown("# 🔍 Deep Research Assistant")
    gr.Markdown("**Ask a research question and get comprehensive, AI-powered analysis with quality assurance.**")

    # State to track the conversation
    state = gr.State({})

    # Main Research Configuration Block
    with gr.Column():
        query_textbox = gr.Textbox(
            label="Research Query",
            placeholder="What would you like to research? (e.g., 'Latest developments in renewable energy')",
            lines=2,
            elem_classes=["main-input"]
        )

        # Email Configuration (part of main block)
        with gr.Accordion("📧 Email Configuration (Optional)", open=False):
            gr.Markdown("**Configure email delivery for your research reports**")

            with gr.Row():
                with gr.Column(scale=3):
                    email_textbox = gr.Textbox(
                        label="Email Address",
                        placeholder="your.email@example.com",
                        lines=1
                    )
                with gr.Column(scale=1):
                    send_email_checkbox = gr.Checkbox(
                        label="Send Email",
                        value=False,
                        info="Check to receive the report via email"
                    )

            gr.Markdown("*This email setting will be used for any research option you choose below.*")

    # Start Research Button (below the main configuration)
    submit_button = gr.Button("🚀 Start Research", variant="primary", size="lg")

    # Output area for questions and results
    output_area = gr.Markdown(
        label="Research Progress",
        elem_classes=["results-display"],
        value="👋 Enter your research query above and configure email settings if desired, then click Start Research!"
    )

    # Clarification answers section (initially hidden)
    with gr.Column(visible=False, elem_classes=["clarification-section"]) as clarification_row:
        gr.Markdown("### 💭 Help us focus your research")
        gr.Markdown("Please answer these questions to get more targeted results:")

        answers_textbox = gr.Textbox(
            label="Your Answers",
            placeholder="Answer each question on a separate line...\n\nExample:\n1. I'm interested in solar and wind technologies\n2. I need technical details and market analysis\n3. This is for a business presentation",
            lines=6,
            elem_classes=["answer-textbox"],
            show_label=True
        )

        research_button = gr.Button(
            "🔍 Run Focused Research",
            variant="primary",
            visible=False,
            size="lg"
        )

    # Research options
    with gr.Accordion("🤖 Enhanced Research (Recommended)", open=False):
        gr.Markdown("""
        **New AI-powered research system featuring:**

        ✅ **Quality Evaluation** - Each report is automatically assessed
        ✅ **Smart Optimization** - Reports are improved if needed
        ✅ **Comprehensive Analysis** - Multiple search strategies

        *Delivers higher quality research through AI quality assurance.*
        """)
        enhanced_button = gr.Button("🤖 Enhanced Research", variant="primary")

    with gr.Accordion("⚡ Quick Research (Legacy)", open=False):
        gr.Markdown("*Faster research using the original system - good for quick queries.*")
        direct_button = gr.Button("⚡ Quick Research", variant="secondary")

    # Event handlers
    submit_button.click(
        fn=handle_query_submission,
        inputs=[query_textbox, state],
        outputs=[output_area, clarification_row, research_button, state]
    )

    query_textbox.submit(
        fn=handle_query_submission,
        inputs=[query_textbox, state],
        outputs=[output_area, clarification_row, research_button, state]
    )

    research_button.click(
        fn=run_clarified_research_with_progress,
        inputs=[answers_textbox, state, email_textbox, send_email_checkbox],
        outputs=[output_area]
    )

    answers_textbox.submit(
        fn=run_clarified_research_with_progress,
        inputs=[answers_textbox, state, email_textbox, send_email_checkbox],
        outputs=[output_area]
    )

    enhanced_button.click(
        fn=run_enhanced_research_with_progress,
        inputs=[query_textbox, email_textbox, send_email_checkbox],
        outputs=[output_area]
    )

    direct_button.click(
        fn=run_legacy_research,
        inputs=[query_textbox, email_textbox, send_email_checkbox],
        outputs=[output_area]
    )

if __name__ == "__main__":
    ui.launch(inbrowser=True)
email_agent.py
ADDED
@@ -0,0 +1,37 @@
import os
from typing import Dict

import sendgrid
from sendgrid.helpers.mail import Email, Mail, Content, To
from agents import Agent, function_tool

@function_tool
def send_email(subject: str, html_body: str, recipient_email: str = "mallofrench05@gmail.com") -> Dict[str, str]:
    """ Send an email with the given subject and HTML body to the specified recipient """
    try:
        sg = sendgrid.SendGridAPIClient(api_key=os.environ.get('SENDGRID_API_KEY'))
        from_email = Email("mantomarchi300@outlook.com")  # put your verified sender here
        to_email = To(recipient_email)
        content = Content("text/html", html_body)
        mail = Mail(from_email, to_email, subject, content).get()
        response = sg.client.mail.send.post(request_body=mail)
        print(f"Email response: {response.status_code}")

        if response.status_code == 202:
            return {"status": f"Email sent successfully to {recipient_email}"}
        else:
            return {"status": f"Email sending failed with status {response.status_code}"}
    except Exception as e:
        print(f"Email sending error: {e}")
        return {"status": f"Email sending failed: {str(e)}"}

INSTRUCTIONS = """You are able to send a nicely formatted HTML email based on a detailed report.
You will be provided with a detailed report. You should use your tool to send one email, providing the
report converted into clean, well presented HTML with an appropriate subject line."""

email_agent = Agent(
    name="Email agent",
    instructions=INSTRUCTIONS,
    tools=[send_email],
    model="gpt-4o-mini",
)
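A usage sketch for the email agent (illustrative, not part of the commit; the recipient address and prompt wording are placeholders, and a valid `SENDGRID_API_KEY` plus a verified sender are assumed):

```python
import asyncio

from agents import Runner
from email_agent import email_agent

async def demo():
    report = "# Example Report\n\nKey findings would go here."
    # The agent converts the report to HTML and calls its send_email tool
    await Runner.run(
        email_agent,
        f"Send this report to research-reader@example.com:\n\n{report}",
    )

asyncio.run(demo())
```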
env_example.txt
ADDED
@@ -0,0 +1,14 @@
# Copy this file to .env and fill in your actual values

# OpenAI API Configuration (Required)
OPENAI_API_KEY=your_openai_api_key_here

# SendGrid Email Configuration (Optional)
SENDGRID_API_KEY=your_sendgrid_api_key_here
SENDGRID_FROM_EMAIL=your_verified_sender_email@example.com

# OpenAI Organization (Optional)
OPENAI_ORG_ID=your_openai_org_id_here

# Environment Settings
ENVIRONMENT=production
evaluator_agent.py
ADDED
@@ -0,0 +1,84 @@
from pydantic import BaseModel
from typing import List
from agents import Agent

class EvaluationResult(BaseModel):
    overall_score: int
    """Overall quality score from 1-10"""

    strengths: List[str]
    """List of report strengths"""

    weaknesses: List[str]
    """List of areas needing improvement"""

    suggestions: List[str]
    """Specific suggestions for improvement"""

    needs_refinement: bool
    """Whether the report needs to be refined"""

    refined_requirements: str
    """If refinement needed, what specific requirements should guide it"""

EVALUATION_INSTRUCTIONS = """
You are a Research Quality Evaluator. Your job is to assess the quality of research reports and determine if they need refinement.

Evaluate reports based on:
1. **Completeness**: Does it thoroughly address the original query?
2. **Accuracy**: Are the facts presented accurate and well-sourced?
3. **Sources & Citations**: Does it include proper source links and references? Is there a "Sources and References" section?
4. **Clarity**: Is the writing clear and well-structured?
5. **Depth**: Does it provide sufficient depth and analysis?
6. **Relevance**: Is all content relevant to the query?

Scoring scale:
- 9-10: Excellent, no refinement needed
- 7-8: Good, minor improvements could help
- 5-6: Adequate, would benefit from refinement
- 1-4: Poor, definitely needs refinement

CRITICAL: A report without proper source citations should not score above 6, regardless of other qualities.

If needs_refinement is True, provide specific, actionable requirements for improvement.
"""

evaluator_agent = Agent(
    name="Research Evaluator",
    instructions=EVALUATION_INSTRUCTIONS,
    model="gpt-4o-mini",
    output_type=EvaluationResult,
)


class OptimizedReport(BaseModel):
    improved_markdown_report: str
    """The refined and improved version of the report"""

    optimization_notes: str
    """Notes on what was improved and why"""

OPTIMIZER_INSTRUCTIONS = """
You are a Research Report Optimizer. You receive:
1. An original research report
2. Evaluation feedback with specific improvement suggestions
3. The original query for context

Your job is to create an improved version that addresses all the feedback while maintaining the factual content.

Focus on:
- Improving structure and flow
- Adding missing analysis or details
- Clarifying confusing sections
- Ensuring complete coverage of the query
- Enhancing readability and presentation

Keep all factual content accurate - only improve presentation, structure, and completeness.
"""

optimizer_agent = Agent(
    name="Research Optimizer",
    instructions=OPTIMIZER_INSTRUCTIONS,
    model="gpt-4o-mini",
    output_type=OptimizedReport,
)
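The evaluator and optimizer are meant to be used as a pair. A condensed sketch of the evaluate-then-refine loop, following the same call pattern as `evaluate_report` and `optimize_report` in `research_manager.py` (illustrative, not part of the commit):

```python
from agents import Runner
from evaluator_agent import (EvaluationResult, OptimizedReport,
                             evaluator_agent, optimizer_agent)

async def review(query: str, report: str) -> str:
    # Score the draft report first
    evaluation = (await Runner.run(
        evaluator_agent, f"Original Query: {query}\n\nReport to Evaluate:\n{report}"
    )).final_output_as(EvaluationResult)

    if not evaluation.needs_refinement:
        return report

    # Feed the evaluator's suggestions back into the optimizer
    feedback = "; ".join(evaluation.suggestions)
    optimized = (await Runner.run(
        optimizer_agent,
        f"Original Query: {query}\n\nOriginal Report:\n{report}\n\nEvaluation Feedback:\n{feedback}",
    )).final_output_as(OptimizedReport)
    return optimized.improved_markdown_report
```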
metadata.json
ADDED
@@ -0,0 +1,22 @@
{
    "title": "Deep Research Assistant",
    "emoji": "🔍",
    "colorFrom": "blue",
    "colorTo": "purple",
    "sdk": "gradio",
    "sdk_version": "5.22.0",
    "app_file": "app.py",
    "pinned": false,
    "license": "mit",
    "short_description": "AI-powered research assistant with quality assurance and email delivery",
    "tags": [
        "research",
        "ai",
        "gradio",
        "openai",
        "quality-assurance",
        "email",
        "web-search"
    ],
    "hardware": "zero-a10g"
}
planner_agent.py
ADDED
@@ -0,0 +1,28 @@
from pydantic import BaseModel
from agents import Agent

HOW_MANY_SEARCHES = 3

INSTRUCTIONS = f"You are a helpful research assistant. Given a query, come up with a set of web searches \
to perform to best answer the query. Output {HOW_MANY_SEARCHES} terms to query for."


class WebSearchItem(BaseModel):
    reason: str
    "Your reasoning for why this search is important to the query."

    query: str
    "The search term to use for the web search."


class WebSearchPlan(BaseModel):
    searches: list[WebSearchItem]
    """A list of web searches to perform to best answer the query."""


planner_agent = Agent(
    name="PlannerAgent",
    instructions=INSTRUCTIONS,
    model="gpt-4o-mini",
    output_type=WebSearchPlan,
)
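A quick way to inspect the plan the agent produces, using the same call pattern as `plan_research` in `research_manager.py` (illustrative only):

```python
import asyncio

from agents import Runner
from planner_agent import planner_agent, WebSearchPlan

async def demo():
    result = await Runner.run(planner_agent, "Query: Electric vehicle market analysis")
    plan = result.final_output_as(WebSearchPlan)
    for item in plan.searches:
        print(f"{item.query}  ({item.reason})")

asyncio.run(demo())
```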
requirements.txt
ADDED
@@ -0,0 +1,11 @@
gradio>=5.22.0
openai>=1.68.2
openai-agents>=0.0.6
python-dotenv>=1.0.1
sendgrid>=6.11.0
requests>=2.32.3
bs4>=0.0.2
httpx>=0.28.1
pydantic>=2.0.0
typing-extensions>=4.0.0
spaces>=0.16.0
research_manager.py
ADDED
@@ -0,0 +1,506 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
from agents import Runner, trace, gen_trace_id, Agent, function_tool
|
2 |
+
from search_agent import search_agent
|
3 |
+
from planner_agent import planner_agent, WebSearchItem, WebSearchPlan
|
4 |
+
from writer_agent import writer_agent, ReportData
|
5 |
+
from email_agent import email_agent
|
6 |
+
from clarifier_agent import clarifier_agent, ClarificationData
|
7 |
+
from evaluator_agent import evaluator_agent, optimizer_agent, EvaluationResult, OptimizedReport
|
8 |
+
import asyncio
|
9 |
+
from typing import Dict, Any, AsyncGenerator
|
10 |
+
|
11 |
+
# Legacy ResearchManager class for backward compatibility
|
12 |
+
class ResearchManager:
|
13 |
+
|
14 |
+
async def run_with_clarification(self, query: str):
|
15 |
+
""" Run the clarification step and return clarifying questions """
|
16 |
+
trace_id = gen_trace_id()
|
17 |
+
with trace("Clarification trace", trace_id=trace_id):
|
18 |
+
print(f"View trace: https://platform.openai.com/traces/trace?trace_id={trace_id}")
|
19 |
+
print("Generating clarifying questions...")
|
20 |
+
|
21 |
+
result = await Runner.run(
|
22 |
+
clarifier_agent,
|
23 |
+
f"Query: {query}",
|
24 |
+
)
|
25 |
+
|
26 |
+
clarification_data = result.final_output_as(ClarificationData)
|
27 |
+
print(f"Generated {len(clarification_data.questions)} clarifying questions")
|
28 |
+
|
29 |
+
return {
|
30 |
+
"questions": clarification_data.questions,
|
31 |
+
"trace_id": trace_id
|
32 |
+
}
|
33 |
+
|
34 |
+
async def run_research_with_answers(self, query: str, answers: list[str]):
|
35 |
+
""" Run the full research process with clarification answers """
|
36 |
+
trace_id = gen_trace_id()
|
37 |
+
with trace("Research with clarification trace", trace_id=trace_id):
|
38 |
+
print(f"View trace: https://platform.openai.com/traces/trace?trace_id={trace_id}")
|
39 |
+
print("Starting research with clarifications...")
|
40 |
+
|
41 |
+
# Use the new manager agent instead
|
42 |
+
clarified_query = self._format_clarified_query(query, answers)
|
43 |
+
|
44 |
+
result = await Runner.run(
|
45 |
+
ResearchManagerAgent,
|
46 |
+
f"Research Query: {clarified_query}",
|
47 |
+
)
|
48 |
+
|
49 |
+
return {
|
50 |
+
"report": result.final_output,
|
51 |
+
"trace_id": trace_id
|
52 |
+
}
|
53 |
+
|
54 |
+
def _format_clarified_query(self, original_query: str, answers: list[str]) -> str:
|
55 |
+
""" Format the original query with clarification answers """
|
56 |
+
clarifications = []
|
57 |
+
for i, answer in enumerate(answers, 1):
|
58 |
+
if answer.strip():
|
59 |
+
clarifications.append(f"{i}. {answer.strip()}")
|
60 |
+
|
61 |
+
if clarifications:
|
62 |
+
clarified_query = f"""Original query: {original_query}
|
63 |
+
|
64 |
+
Clarifications provided:
|
65 |
+
{chr(10).join(clarifications)}
|
66 |
+
|
67 |
+
Please use these clarifications to focus and refine the research approach."""
|
68 |
+
else:
|
69 |
+
clarified_query = original_query
|
70 |
+
|
71 |
+
return clarified_query
|
72 |
+
|
73 |
+
async def run(self, query: str):
|
74 |
+
""" Run the deep research process, yielding the status updates and the final report"""
|
75 |
+
trace_id = gen_trace_id()
|
76 |
+
with trace("Research trace", trace_id=trace_id):
|
77 |
+
print(f"View trace: https://platform.openai.com/traces/trace?trace_id={trace_id}")
|
78 |
+
yield f"View trace: https://platform.openai.com/traces/trace?trace_id={trace_id}"
|
79 |
+
print("Starting research...")
|
80 |
+
|
81 |
+
# Use the new manager agent
|
82 |
+
result = await Runner.run(ResearchManagerAgent, f"Research Query: {query}")
|
83 |
+
yield "Research complete"
|
84 |
+
yield result.final_output
|
85 |
+
|
86 |
+
# Function tools for the manager agent to orchestrate the research process
@function_tool
async def plan_research(query: str) -> Dict[str, Any]:
    """ Plan the research searches for a given query """
    print("Planning searches...")
    result = await Runner.run(planner_agent, f"Query: {query}")
    search_plan = result.final_output_as(WebSearchPlan)
    print(f"Will perform {len(search_plan.searches)} searches")
    return {
        "searches": [{"query": item.query, "reason": item.reason} for item in search_plan.searches],
        "plan_ready": True
    }

@function_tool
async def perform_search(search_query: str, reason: str) -> str:
    """ Perform a single web search and return summarized results """
    print(f"Searching for: {search_query}")
    input_text = f"Search term: {search_query}\nReason for searching: {reason}"
    try:
        result = await Runner.run(search_agent, input_text)
        return str(result.final_output)
    except Exception as e:
        print(f"Search failed for '{search_query}': {e}")
        return f"Search failed for '{search_query}': {str(e)}"

@function_tool
async def write_initial_report(query: str, search_results: str) -> Dict[str, Any]:
    """ Generate an initial research report from search results """
    print("Writing initial report...")
    input_text = f"Original query: {query}\nSummarized search results: {search_results}"
    result = await Runner.run(writer_agent, input_text)
    report_data = result.final_output_as(ReportData)
    print("Initial report completed")
    return {
        "markdown_report": report_data.markdown_report,
        "short_summary": report_data.short_summary,
        "follow_up_questions": report_data.follow_up_questions
    }

@function_tool
async def evaluate_report(query: str, report: str) -> Dict[str, Any]:
    """ Evaluate the quality of a research report """
    print("Evaluating report quality...")
    input_text = f"Original Query: {query}\n\nReport to Evaluate:\n{report}"
    result = await Runner.run(evaluator_agent, input_text)
    evaluation = result.final_output_as(EvaluationResult)
    print(f"Evaluation complete - Score: {evaluation.overall_score}/10, Needs refinement: {evaluation.needs_refinement}")
    return {
        "score": evaluation.overall_score,
        "strengths": evaluation.strengths,
        "weaknesses": evaluation.weaknesses,
        "suggestions": evaluation.suggestions,
        "needs_refinement": evaluation.needs_refinement,
        "refinement_requirements": evaluation.refined_requirements
    }

@function_tool
async def optimize_report(query: str, original_report: str, evaluation_feedback: str) -> str:
    """ Optimize and improve a research report based on evaluation feedback """
    print("Optimizing report...")
    input_text = f"""Original Query: {query}

Original Report:
{original_report}

Evaluation Feedback:
{evaluation_feedback}

Please improve the report based on this feedback."""

    result = await Runner.run(optimizer_agent, input_text)
    optimized = result.final_output_as(OptimizedReport)
    print("Report optimization complete")
    return optimized.improved_markdown_report

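Aside: the `@function_tool` decorator wraps each coroutine above as a tool object for the manager agent, so these are not meant to be called directly. For an ad-hoc check of one sub-agent outside the manager, a minimal sketch along these lines should work, assuming `OPENAI_API_KEY` is set and that `planner_agent.py` exports `planner_agent` and `WebSearchPlan` (as the usage in `plan_research` implies); the query string is a made-up example:

```python
import asyncio

from agents import Runner
from planner_agent import planner_agent, WebSearchPlan  # assumed exports

async def smoke_test_planner() -> None:
    # Same "Query: ..." prompt shape that plan_research builds above
    result = await Runner.run(planner_agent, "Query: recent advances in battery recycling")
    plan = result.final_output_as(WebSearchPlan)
    for item in plan.searches:
        print(f"- {item.query}  ({item.reason})")

if __name__ == "__main__":
    asyncio.run(smoke_test_planner())
```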
# Regular function that can be called directly
async def _send_report_email_to_address(report: str, recipient_email: str) -> Dict[str, str]:
    """ Send the final research report via email to a specific address """
    import os
    import sendgrid
    from sendgrid.helpers.mail import Email, Mail, Content, To

    print(f"Sending email to: {recipient_email}")

    try:
        sg = sendgrid.SendGridAPIClient(api_key=os.environ.get('SENDGRID_API_KEY'))
        from_email = Email("mantomarchi300@outlook.com")  # Verified sender
        to_email = To(recipient_email)  # User-provided email

        # Create a nice subject line
        subject = "🔍 Your Research Report is Ready"

        # Convert markdown to HTML for better email formatting
        import re

        # Basic markdown to HTML conversion
        html_report = report

        # Convert markdown links to HTML links with styling
        html_report = re.sub(r'\[([^\]]+)\]\(([^)]+)\)', r'<a href="\2" style="color: #2563eb; text-decoration: none; border-bottom: 1px solid #2563eb;" target="_blank">\1</a>', html_report)

        # Convert headers
        html_report = re.sub(r'^### (.*$)', r'<h3 style="color: #2563eb; margin-top: 25px; margin-bottom: 10px;">\1</h3>', html_report, flags=re.MULTILINE)
        html_report = re.sub(r'^## (.*$)', r'<h2 style="color: #1d4ed8; margin-top: 30px; margin-bottom: 15px;">\1</h2>', html_report, flags=re.MULTILINE)
        html_report = re.sub(r'^# (.*$)', r'<h1 style="color: #1e40af; margin-top: 35px; margin-bottom: 20px;">\1</h1>', html_report, flags=re.MULTILINE)

        # Convert bold text
        html_report = re.sub(r'\*\*(.*?)\*\*', r'<strong style="color: #374151;">\1</strong>', html_report)

        # Convert numbered lists (for sources)
        html_report = re.sub(r'^(\d+\.\s)(.*$)', r'<li style="margin-bottom: 8px; list-style-type: decimal;">\2</li>', html_report, flags=re.MULTILINE)

        # Convert bullet points
        html_report = re.sub(r'^- (.*$)', r'<li style="margin-bottom: 8px;">\1</li>', html_report, flags=re.MULTILINE)

        # Wrap consecutive list items in ul/ol tags
        html_report = re.sub(r'(<li style="margin-bottom: 8px; list-style-type: decimal;">.*?</li>)', r'<ol style="margin: 15px 0; padding-left: 25px;">\1</ol>', html_report, flags=re.DOTALL)
        html_report = re.sub(r'(<li style="margin-bottom: 8px;">.*?</li>)', r'<ul style="margin: 15px 0; padding-left: 25px;">\1</ul>', html_report, flags=re.DOTALL)

        # Convert line breaks
        html_report = html_report.replace('\n\n', '</p><p style="margin-bottom: 15px; line-height: 1.6;">')
        html_report = '<p style="margin-bottom: 15px; line-height: 1.6;">' + html_report + '</p>'

        html_content = f"""
        <!DOCTYPE html>
        <html lang="en">
        <head>
            <meta charset="UTF-8">
            <meta name="viewport" content="width=device-width, initial-scale=1.0">
            <title>Your Research Report</title>
        </head>
        <body style="font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif; line-height: 1.6; color: #374151; background-color: #f9fafb; margin: 0; padding: 20px;">
            <div style="max-width: 800px; margin: 0 auto; background: #ffffff; border-radius: 12px; box-shadow: 0 4px 6px rgba(0, 0, 0, 0.07); overflow: hidden;">
                <!-- Header -->
                <div style="background: linear-gradient(135deg, #2563eb 0%, #1d4ed8 100%); color: white; padding: 30px; text-align: center;">
                    <h1 style="margin: 0; font-size: 28px; font-weight: 600;">
                        🔍 Your Research Report
                    </h1>
                    <p style="margin: 10px 0 0 0; opacity: 0.9; font-size: 16px;">
                        Comprehensive AI-powered research analysis
                    </p>
                </div>

                <!-- Content -->
                <div style="padding: 40px 30px;">
                    <div style="background: #f8fafc; padding: 30px; border-radius: 8px; border-left: 4px solid #2563eb; margin-bottom: 30px;">
                        {html_report}
                    </div>
                </div>

                <!-- Footer -->
                <div style="background: #f8fafc; padding: 25px 30px; border-top: 1px solid #e5e7eb;">
                    <div style="text-align: center; color: #6b7280; font-size: 14px;">
                        <p style="margin: 0 0 10px 0;">
                            <strong>🤖 Generated by Deep Research Assistant</strong>
                        </p>
                        <p style="margin: 0;">
                            This report was created using advanced AI with multi-step quality assurance
                        </p>
                        <div style="margin-top: 15px; padding-top: 15px; border-top: 1px solid #d1d5db;">
                            <p style="margin: 0; font-size: 12px; color: #9ca3af;">
                                Thank you for using our research service • Generated on {__import__('datetime').datetime.now().strftime('%B %d, %Y at %I:%M %p')}
                            </p>
                        </div>
                    </div>
                </div>
            </div>
        </body>
        </html>
        """

        content = Content("text/html", html_content)
        mail = Mail(from_email, to_email, subject, content).get()
        response = sg.client.mail.send.post(request_body=mail)

        print(f"Email response: {response.status_code}")
        if response.status_code == 202:
            return {"status": f"Email sent successfully to {recipient_email}"}
        else:
            return {"status": f"Email sending failed with status {response.status_code}"}

    except Exception as e:
        print(f"Email sending error: {e}")
        return {"status": f"Email sending failed: {str(e)}"}

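As a quick illustration of the link-conversion step above, here is the same `re.sub` call applied to a one-line sample (standalone snippet; the sample sentence is invented):

```python
import re

sample = "See [OpenAI](https://openai.com) for details."
converted = re.sub(
    r'\[([^\]]+)\]\(([^)]+)\)',
    r'<a href="\2" style="color: #2563eb; text-decoration: none; border-bottom: 1px solid #2563eb;" target="_blank">\1</a>',
    sample,
)
print(converted)  # the markdown link becomes a styled <a href="https://openai.com" ...>OpenAI</a> anchor
```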
@function_tool
async def send_report_email_to_address(report: str, recipient_email: str) -> Dict[str, str]:
    """ Send the final research report via email to a specific address """
    return await _send_report_email_to_address(report, recipient_email)

@function_tool
async def send_report_email(report: str) -> Dict[str, str]:
    """ Send the final research report via email (legacy function - uses hardcoded email) """
    print("Sending email to default address...")
    result = await Runner.run(email_agent, report)
    print("Email sent to default address")
    return {"status": "Email sent successfully to default address"}

# Manager Agent Instructions
MANAGER_INSTRUCTIONS = """
You are the Research Manager Agent responsible for orchestrating a comprehensive research process with quality assurance.

Your workflow:

1. **PLAN**: Use plan_research to create a search strategy for the query
2. **SEARCH**: Use perform_search for each planned search item to gather information
3. **INITIAL REPORT**: Use write_initial_report to create a first draft from all search results
4. **EVALUATE**: Use evaluate_report to assess the quality of the initial report
5. **OPTIMIZE** (if needed): If evaluation shows needs_refinement=True, use optimize_report to improve it
6. **FINALIZE**: Use send_report_email_to_address to deliver the final report

Quality Standards:
- Only proceed to email if the report scores 7+ or has been optimized
- If a report needs refinement, always optimize it before sending
- Ensure comprehensive coverage of the original query
- Maintain high standards for accuracy and completeness

Be methodical and ensure each step completes successfully before proceeding to the next.
"""

# Function to create custom research agent with email options
def create_custom_research_agent(email_address: str = None, send_email: bool = False):
    """Create a research manager agent with custom email settings"""

    if send_email and email_address:
        # Include email sending in tools
        tools = [
            plan_research,
            perform_search,
            write_initial_report,
            evaluate_report,
            optimize_report,
            send_report_email_to_address
        ]

        instructions = f"""
You are the Research Manager Agent responsible for orchestrating a comprehensive research process with quality assurance.

Your workflow:

1. **PLAN**: Use plan_research to create a search strategy for the query
2. **SEARCH**: Use perform_search for each planned search item to gather information
3. **INITIAL REPORT**: Use write_initial_report to create a first draft from all search results
4. **EVALUATE**: Use evaluate_report to assess the quality of the initial report
5. **OPTIMIZE** (if needed): If evaluation shows needs_refinement=True, use optimize_report to improve it
6. **FINALIZE**: Use send_report_email_to_address with the report and recipient email "{email_address}" to deliver the final report

Quality Standards:
- Only proceed to email if the report scores 7+ or has been optimized
- If a report needs refinement, always optimize it before sending
- Ensure comprehensive coverage of the original query
- Maintain high standards for accuracy and completeness

IMPORTANT: When using send_report_email_to_address, you must provide both:
- The final report text as the first parameter
- The recipient email address "{email_address}" as the second parameter

Be methodical and ensure each step completes successfully before proceeding to the next.
The user has requested the report be emailed to: {email_address}
"""
    else:
        # Exclude email sending from tools
        tools = [
            plan_research,
            perform_search,
            write_initial_report,
            evaluate_report,
            optimize_report
        ]

        instructions = """
You are the Research Manager Agent responsible for orchestrating a comprehensive research process with quality assurance.

Your workflow:

1. **PLAN**: Use plan_research to create a search strategy for the query
2. **SEARCH**: Use perform_search for each planned search item to gather information
3. **INITIAL REPORT**: Use write_initial_report to create a first draft from all search results
4. **EVALUATE**: Use evaluate_report to assess the quality of the initial report
5. **OPTIMIZE** (if needed): If evaluation shows needs_refinement=True, use optimize_report to improve it
6. **COMPLETE**: Return the final optimized report (do NOT send email - user chose not to receive email)

Quality Standards:
- Complete the process when report scores 7+ or has been optimized
- If a report needs refinement, always optimize it before completing
- Ensure comprehensive coverage of the original query
- Maintain high standards for accuracy and completeness

Be methodical and ensure each step completes successfully before proceeding to the next.
The user has chosen NOT to receive the report via email.
"""

    return Agent(
        name="Custom Research Manager Agent",
        instructions=instructions,
        tools=tools,
        model="gpt-4o-mini",
        handoff_description="Orchestrate comprehensive research with quality assurance and optional email delivery"
    )

# Create the Research Manager Agent with agents-as-tools
ResearchManagerAgent = Agent(
    name="Research Manager Agent",
    instructions=MANAGER_INSTRUCTIONS,
    tools=[
        plan_research,
        perform_search,
        write_initial_report,
        evaluate_report,
        optimize_report,
        send_report_email_to_address
    ],
    model="gpt-4o-mini",
    handoff_description="Orchestrate comprehensive research with quality assurance and optimization"
)

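A hedged usage sketch for `create_custom_research_agent`: build a manager that emails the final report to a user-supplied address and run it once. The query and address below are placeholders, and email delivery additionally requires `SENDGRID_API_KEY` plus the verified sender configured above:

```python
import asyncio

from agents import Runner
from research_manager import create_custom_research_agent

async def demo() -> None:
    agent = create_custom_research_agent(email_address="user@example.com", send_email=True)
    result = await Runner.run(agent, "Research Query: market outlook for small modular reactors")
    print(result.final_output)

if __name__ == "__main__":
    asyncio.run(demo())
```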
async def run_research_with_progress(query: str, email_address: str = None, send_email: bool = False) -> AsyncGenerator[str, None]:
    """Run research with step-by-step progress updates"""
    trace_id = gen_trace_id()

    yield f"🚀 **Starting Enhanced Research**\n\n**Query:** {query}\n\n**Trace ID:** {trace_id}\n\n---\n\n"

    try:
        with trace("Enhanced Research with Progress", trace_id=trace_id):
            # Step 1: Planning
            yield "📋 **Step 1/6:** Planning research strategy...\n\n*Analyzing your query and determining the best search approach*"

            result = await Runner.run(planner_agent, f"Query: {query}")
            search_plan = result.final_output_as(WebSearchPlan)

            yield f"✅ **Planning Complete** - Will perform {len(search_plan.searches)} targeted searches\n\n---\n\n"

            # Step 2: Searching
            yield "🔍 **Step 2/6:** Conducting web searches...\n\n*Gathering information from multiple sources*"

            search_results = []
            for i, search_item in enumerate(search_plan.searches, 1):
                yield f"🔍 **Search {i}/{len(search_plan.searches)}:** {search_item.query}\n\n*{search_item.reason}*"

                try:
                    input_text = f"Search term: {search_item.query}\nReason for searching: {search_item.reason}"
                    result = await Runner.run(search_agent, input_text)
                    search_results.append(str(result.final_output))
                    yield f"✅ **Search {i} Complete**\n\n"
                except Exception as e:
                    yield f"⚠️ **Search {i} Failed:** {str(e)}\n\n"
                    search_results.append(f"Search failed: {str(e)}")

            yield "✅ **All Searches Complete**\n\n---\n\n"

            # Step 3: Writing Initial Report
            yield "✍️ **Step 3/6:** Writing initial research report...\n\n*Analyzing and synthesizing all gathered information*"

            combined_results = "\n\n".join(search_results)
            input_text = f"Original query: {query}\nSummarized search results: {combined_results}"
            result = await Runner.run(writer_agent, input_text)
            report_data = result.final_output_as(ReportData)

            yield "✅ **Initial Report Complete**\n\n---\n\n"

            # Step 4: Evaluating Quality
            yield "🔍 **Step 4/6:** Evaluating report quality...\n\n*AI quality assessment in progress*"

            input_text = f"Original Query: {query}\n\nReport to Evaluate:\n{report_data.markdown_report}"
            result = await Runner.run(evaluator_agent, input_text)
            evaluation = result.final_output_as(EvaluationResult)

            yield f"✅ **Evaluation Complete** - Score: {evaluation.overall_score}/10\n\n"

            final_report = report_data.markdown_report

            # Step 5: Optimization (if needed)
            if evaluation.needs_refinement:
                yield "🔧 **Step 5/6:** Optimizing report quality...\n\n*Improving report based on evaluation feedback*"

                feedback = f"Score: {evaluation.overall_score}/10\nWeaknesses: {evaluation.weaknesses}\nSuggestions: {evaluation.suggestions}"
                input_text = f"""Original Query: {query}

Original Report:
{report_data.markdown_report}

Evaluation Feedback:
{feedback}

Please improve the report based on this feedback."""

                result = await Runner.run(optimizer_agent, input_text)
                optimized = result.final_output_as(OptimizedReport)
                final_report = optimized.improved_markdown_report

                yield "✅ **Optimization Complete** - Report quality improved\n\n---\n\n"
            else:
                yield "✅ **No Optimization Needed** - Report quality is excellent\n\n---\n\n"

            # Step 6: Email Delivery (if requested)
            if send_email and email_address:
                yield f"📧 **Step 6/6:** Sending report to {email_address}...\n\n*Preparing and delivering your research report*"

                try:
                    # Call the regular function directly and surface its actual status
                    # (the helper returns a failure message instead of raising on a non-202 response)
                    email_result = await _send_report_email_to_address(final_report, email_address)
                    if "successfully" in email_result.get("status", "").lower():
                        yield f"✅ **Email Sent Successfully** to {email_address}\n\n---\n\n"
                    else:
                        yield f"❌ **Email Failed:** {email_result.get('status', 'unknown error')}\n\n---\n\n"
                except Exception as e:
                    yield f"❌ **Email Failed:** {str(e)}\n\n---\n\n"
            else:
                yield "📄 **Step 6/6:** Finalizing report...\n\n*Email delivery not requested*\n\n---\n\n"

            # Final result
            yield f"""🎉 **Research Complete!**

**📊 Final Report:**

{final_report}

**🔗 View Full Trace:** https://platform.openai.com/traces/trace?trace_id={trace_id}

---
*Enhanced research completed with quality assurance*"""

    except Exception as e:
        yield f"❌ **Error during research:** {str(e)}"

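Outside the Gradio UI, this progress generator can be consumed directly. A minimal sketch, assuming `OPENAI_API_KEY` is set in the environment and with email delivery disabled (the query is a placeholder):

```python
import asyncio

from research_manager import run_research_with_progress

async def main() -> None:
    # Each yielded string is a markdown-formatted status update; the last one embeds the final report
    async for update in run_research_with_progress("State of solid-state batteries", send_email=False):
        print(update)

if __name__ == "__main__":
    asyncio.run(main())
```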
search_agent.py
ADDED
@@ -0,0 +1,23 @@
from agents import Agent, WebSearchTool, ModelSettings

INSTRUCTIONS = (
    "You are a research assistant. Given a search term, you search the web for that term and "
    "produce a concise summary of the results. The summary must be 2-3 paragraphs and less than 300 "
    "words. Capture the main points. Write succinctly, no need to have complete sentences or good "
    "grammar. This will be consumed by someone synthesizing a report, so it's vital you capture the "
    "essence and ignore any fluff. "

    "IMPORTANT: Always preserve and include the source URLs that are provided in the search results. "
    "When you mention information from a source, include the URL reference in the format: (source.com) "
    "or [Title](URL). Keep all source links intact in your summary. "

    "Do not include any additional commentary other than the summary itself with preserved source links."
)

search_agent = Agent(
    name="Search agent",
    instructions=INSTRUCTIONS,
    tools=[WebSearchTool(search_context_size="low")],
    model="gpt-4o-mini",
    model_settings=ModelSettings(tool_choice="required"),
)
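Note that `ModelSettings(tool_choice="required")` forces this agent to actually call `WebSearchTool` on every run. To exercise it on its own, the same `"Search term: ...\nReason for searching: ..."` prompt that `perform_search` builds in research_manager.py can be passed directly; a sketch only, with an invented search term and reason, assuming `OPENAI_API_KEY` is set:

```python
import asyncio

from agents import Runner
from search_agent import search_agent

async def demo() -> None:
    prompt = (
        "Search term: perovskite solar cell efficiency records\n"
        "Reason for searching: establish the current state of the art"
    )
    result = await Runner.run(search_agent, prompt)
    print(result.final_output)  # 2-3 paragraph summary with source URLs preserved

if __name__ == "__main__":
    asyncio.run(demo())
```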
writer_agent.py
ADDED
@@ -0,0 +1,39 @@
from pydantic import BaseModel
from agents import Agent

INSTRUCTIONS = (
    "You are a senior researcher tasked with writing a cohesive report for a research query. "
    "You will be provided with the original query, and some initial research done by a research assistant.\n"
    "You should first come up with an outline for the report that describes the structure and "
    "flow of the report. Then, generate the report and return that as your final output.\n"
    "The final output should be in markdown format, and it should be lengthy and detailed. Aim "
    "for 5-10 pages of content, at least 1000 words.\n\n"

    "IMPORTANT SOURCE HANDLING:\n"
    "- Preserve all source URLs and references from the research summaries\n"
    "- Include inline citations throughout your report using the format: [Source Name](URL)\n"
    "- At the end of your report, create a dedicated '## Sources and References' section\n"
    "- In the Sources section, list all unique URLs mentioned in the report in a numbered list\n"
    "- Format sources as: '1. [Website Name/Title](full URL)'\n"
    "- Ensure no source links are lost during synthesis\n"
    "- If you cannot find source URLs in the research, note 'Sources: Based on web research summaries'"
)


class ReportData(BaseModel):
    short_summary: str
    """A short 2-3 sentence summary of the findings."""

    markdown_report: str
    """The final report"""

    follow_up_questions: list[str]
    """Suggested topics to research further"""


writer_agent = Agent(
    name="WriterAgent",
    instructions=INSTRUCTIONS,
    model="gpt-4o-mini",
    output_type=ReportData,
)
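A small sketch of calling the writer agent on its own with pre-collected summaries and unpacking its structured `ReportData` output (the query and summaries are placeholders, not real research; `OPENAI_API_KEY` must be set):

```python
import asyncio

from agents import Runner
from writer_agent import writer_agent, ReportData

async def demo() -> None:
    summaries = (
        "Summary A ... (example.com)\n\n"
        "Summary B ... [Example Title](https://example.com/article)"
    )
    result = await Runner.run(
        writer_agent,
        f"Original query: example topic\nSummarized search results: {summaries}",
    )
    report = result.final_output_as(ReportData)
    print(report.short_summary)
    print(report.follow_up_questions)

if __name__ == "__main__":
    asyncio.run(demo())
```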