Upload folder using huggingface_hub

Files changed:
- README.md +42 -7
- clarifier_agent.py +11 -6
- deep_research.py +82 -83
- email_agent.py +11 -6
- evaluator_agent.py +22 -12
- planner_agent.py +11 -6
- research_manager.py +67 -21
- search_agent.py +12 -7
- writer_agent.py +11 -6
README.md
CHANGED
@@ -18,6 +18,8 @@ A comprehensive AI-powered research assistant that delivers high-quality, well-r
 - **Smart Optimization**: Reports scoring below 7/10 are automatically improved
 - **Multi-Strategy Search**: Uses multiple search approaches for comprehensive coverage
 - **Email Delivery**: Optional email delivery of research reports
+- **BYOAPI Key Support**: Use your own OpenAI API key to avoid rate limits
+- **Model Selection**: Choose from multiple OpenAI models (GPT-4o, GPT-4, GPT-3.5, O1, etc.)
 
 ### 🎯 Research Modes
 
@@ -38,12 +40,19 @@ A comprehensive AI-powered research assistant that delivers high-quality, well-r
 
 ## 🛠️ Setup
 
-###
-
+### API Configuration Options
+
+#### Option 1: Provide Your Own API Key (Recommended)
+- Enter your OpenAI API key directly in the interface
+- Choose your preferred model from the dropdown
+- Avoids rate limits and provides more control
+
+#### Option 2: Environment Variables (For Development)
+
+You can also set up environment variables:
 
 ```bash
-#
+# Optional - Default OpenAI API for research
 OPENAI_API_KEY=your_openai_api_key_here
 
 # Optional - SendGrid for email delivery
@@ -84,6 +93,21 @@ SENDGRID_FROM_EMAIL=your_verified_sender_email@example.com
 python app.py
 ```
 
+## 🚀 Benefits of BYOAPI Key
+
+### Why Use Your Own API Key?
+- **No Rate Limits**: Avoid 429 errors from shared API quotas
+- **Cost Control**: Pay only for what you use
+- **Model Choice**: Select the best model for your needs and budget
+- **Faster Processing**: Direct access without queuing
+- **Privacy**: Your queries stay between you and OpenAI
+
+### Model Recommendations
+- **GPT-4o-mini**: Best cost-efficiency for most research tasks
+- **GPT-4o**: Balanced performance and speed
+- **GPT-4**: High quality for complex analysis
+- **O1-Preview**: Advanced reasoning for technical topics
+
 ## 📊 Quality Assurance System
 
 Our enhanced research system includes automatic quality evaluation:
@@ -103,15 +127,22 @@ Our enhanced research system includes automatic quality evaluation:
 
 ## 🎮 How to Use
 
-1. **
-
-
+1. **Configure API Settings**:
+   - Enter your OpenAI API key
+   - Select your preferred model (GPT-4o-mini recommended for cost efficiency)
+
+2. **Enter Your Research Query**: Describe what you want to research
+
+3. **Configure Email (Optional)**: Set up email delivery if desired
+
+4. **Choose Research Mode**:
    - Click "🔍 Start Research" for interactive clarification mode
    - Use "🤖 Enhanced Research" for direct advanced research
    - Use "⚡ Quick Research" for fast results
 
-
+5. **Get Results**:
    - View comprehensive research report
+   - See which model was used for the research
    - Receive email delivery (if configured)
    - Access detailed trace logs for transparency
 
@@ -154,6 +185,10 @@ We welcome contributions! Areas for improvement:
 - UI/UX improvements
 - Performance optimizations
 
+## 🙏 Acknowledgments
+
+**Special thanks to [Ifiok Moses (greattkiffy)](https://github.com/greattkiffy)** for the valuable feedback that led to the implementation of BYOAPI key support and model selection features. This enhancement significantly improves user experience by eliminating rate limits and providing greater control over API usage.
+
 ## 📝 License
 
 This project is licensed under the MIT License - see the LICENSE file for details.
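The two setup options above can coexist. A minimal sketch (function name hypothetical, not from the repo) of resolving the key so that a value typed into the interface takes precedence over the `OPENAI_API_KEY` environment variable:

```python
import os

def resolve_api_key(ui_value: str = "") -> str:
    """Prefer a key typed into the interface; fall back to OPENAI_API_KEY."""
    ui_value = (ui_value or "").strip()
    if ui_value:
        return ui_value
    env_value = os.environ.get("OPENAI_API_KEY", "")
    if env_value:
        return env_value
    raise ValueError("No OpenAI API key provided")
```

With this ordering, a blank textbox falls back to the environment for local development, while a pasted key always wins in the hosted app.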
clarifier_agent.py
CHANGED
@@ -15,9 +15,14 @@ that will help focus and refine the research. These questions should help unders
 Return your response as JSON matching the ClarificationData model with exactly 3 questions.
 """
 
-
-
-
-
-
-
+def create_clarifier_agent(model: str = "gpt-4o-mini"):
+    """Create a clarifier agent with configurable model"""
+    return Agent(
+        name="ClarifierAgent",
+        instructions=CLARIFY_INSTRUCTIONS,
+        model=model,
+        output_type=ClarificationData,
+    )
+
+# Default clarifier agent for backward compatibility
+clarifier_agent = create_clarifier_agent()
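The change above (repeated across the agent modules) follows a simple factory pattern: a `create_*` function builds an agent for any model, and a module-level default keeps existing `from clarifier_agent import clarifier_agent` imports working. A stripped-down, dependency-free sketch of the pattern — the real `Agent` comes from the agents SDK; here it is stubbed with a dataclass:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """Stand-in for the SDK's Agent; only the fields used in this sketch."""
    name: str
    instructions: str
    model: str

CLARIFY_INSTRUCTIONS = "Generate exactly 3 clarifying questions as JSON."

def create_clarifier_agent(model: str = "gpt-4o-mini") -> Agent:
    """Factory: build a clarifier agent configured for any supported model."""
    return Agent(name="ClarifierAgent", instructions=CLARIFY_INSTRUCTIONS, model=model)

# Module-level default preserves the old import path for existing callers
clarifier_agent = create_clarifier_agent()
```

Callers that need the user-selected model call `create_clarifier_agent(model)`; legacy code keeps using the default instance unchanged.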
deep_research.py
CHANGED
@@ -2,19 +2,34 @@ import gradio as gr
 from dotenv import load_dotenv
 from research_manager import ResearchManager, ResearchManagerAgent
 from agents import Runner, trace, gen_trace_id
+import os
 
 load_dotenv(override=True)
 
-async def handle_query_submission(query: str, current_state: dict):
+# Available models for user selection
+AVAILABLE_MODELS = [
+    "gpt-4o",
+    "gpt-4o-mini",
+    "gpt-4-turbo",
+    "gpt-4",
+    "gpt-3.5-turbo",
+    "o1-preview",
+    "o1-mini"
+]
+
+async def handle_query_submission(query: str, current_state: dict, api_key: str, model: str):
     """Handle initial query submission - generate clarifying questions with progress"""
     if not query.strip():
         return "Please enter a research query.", gr.update(visible=False), gr.update(visible=False), current_state
 
+    if not api_key.strip():
+        return "Please provide your OpenAI API key.", gr.update(visible=False), gr.update(visible=False), current_state
+
     try:
         # Show progress
         progress_update = "🔍 **Generating clarifying questions...**\n\nPlease wait while our AI analyzes your query and creates focused questions to improve the research quality."
 
-        research_manager = ResearchManager()
+        research_manager = ResearchManager(api_key=api_key, model=model)
         result = await research_manager.run_with_clarification(query)
 
         # Format questions for display
@@ -25,7 +40,9 @@ async def handle_query_submission(query: str, current_state: dict):
         new_state = {
             "query": query,
             "questions": result["questions"],
-            "trace_id": result["trace_id"]
+            "trace_id": result["trace_id"],
+            "api_key": api_key,
+            "model": model
         }
 
         return display_text, gr.update(visible=True), gr.update(visible=True), new_state
@@ -41,6 +58,12 @@ async def handle_research_with_answers(answers: str, current_state: dict, email_
     if not answers.strip():
         return "Please provide answers to the clarifying questions.", current_state
 
+    api_key = current_state.get("api_key", "")
+    model = current_state.get("model", "gpt-4o-mini")
+
+    if not api_key:
+        return "API key missing. Please restart with your API key.", current_state
+
     try:
         # Show progress
         progress_message = f"""🔍 **Research in Progress...**
@@ -71,10 +94,12 @@ Clarifications provided:
 
 Please use these clarifications to focus and refine the research approach."""
 
-        # Create custom agent with email settings
+        # Create custom agent with email settings and API configuration
         custom_agent = create_custom_research_agent(
             email_address=email_address if send_email else None,
-            send_email=send_email
+            send_email=send_email,
+            api_key=api_key,
+            model=model
         )
 
         # Run research with custom agent
@@ -96,6 +121,7 @@ Please use these clarifications to focus and refine the research approach."""
         final_report = f"""**✅ Research Complete!**
 
 **🔍 Trace ID:** {trace_id}
+**🤖 Model Used:** {model}
 
 **Original Query:** {current_state['query']}
 
@@ -113,11 +139,14 @@ Please use these clarifications to focus and refine the research approach."""
     except Exception as e:
         return f"❌ Error during research: {str(e)}", current_state
 
-async def run_direct_research(query: str, email_address: str = "", send_email: bool = False):
+async def run_direct_research(query: str, api_key: str, model: str, email_address: str = "", send_email: bool = False):
     """Run research directly without clarification using the new agent-based system"""
     if not query.strip():
         return "Please enter a research query."
 
+    if not api_key.strip():
+        return "Please provide your OpenAI API key."
+
     try:
         trace_id = gen_trace_id()
         with trace("Enhanced Research Manager", trace_id=trace_id):
@@ -126,10 +155,12 @@ async def run_direct_research(query: str, email_address: str = "", send_email: b
             # Import the function here to avoid circular imports
             from research_manager import create_custom_research_agent
 
-            # Create agent with email settings
+            # Create agent with email settings and API configuration
             custom_agent = create_custom_research_agent(
                 email_address=email_address if send_email else None,
-                send_email=send_email
+                send_email=send_email,
+                api_key=api_key,
+                model=model
             )
 
             # Use the custom agent
@@ -149,6 +180,7 @@ async def run_direct_research(query: str, email_address: str = "", send_email: b
         return f"""**✅ Research Complete!**
 
 **🔍 Trace ID:** {trace_id}
+**🤖 Model Used:** {model}
 **🔗 View Detailed Trace:** https://platform.openai.com/traces/trace?trace_id={trace_id}
 
 **📊 Enhanced Research Report with Quality Assurance:**
@@ -164,23 +196,28 @@ async def run_direct_research(query: str, email_address: str = "", send_email: b
         import traceback
         error_details = traceback.format_exc()
         print(f"Error details: {error_details}")
-        return f"❌ Error during research: {str(e)}\n\nPlease try the Legacy Quick Research option if this persists."
+        return f"❌ Error during research: {str(e)}\n\nPlease check your API key and model selection, or try the Legacy Quick Research option if this persists."
 
-async def run_legacy_research(query: str, email_address: str, send_email: bool):
+async def run_legacy_research(query: str, api_key: str, model: str, email_address: str, send_email: bool):
     """Run research using the original ResearchManager class with email options"""
     if not query.strip():
         return "Please enter a research query."
 
+    if not api_key.strip():
+        return "Please provide your OpenAI API key."
+
     try:
         # Use the enhanced system but call it "legacy" for the user
         trace_id = gen_trace_id()
         with trace("Quick Research", trace_id=trace_id):
             from research_manager import create_custom_research_agent
 
-            # Create agent with email settings
+            # Create agent with email settings and API configuration
             custom_agent = create_custom_research_agent(
                 email_address=email_address if send_email else None,
-                send_email=send_email
+                send_email=send_email,
+                api_key=api_key,
+                model=model
             )
 
             result = await Runner.run(
@@ -199,6 +236,7 @@ async def run_legacy_research(query: str, email_address: str, send_email: bool):
         return f"""**✅ Quick Research Complete!**
 
 **🔍 Trace ID:** {trace_id}
+**🤖 Model Used:** {model}
 
 **📋 Research Report:**
 
@@ -207,80 +245,21 @@ async def run_legacy_research(query: str, email_address: str, send_email: bool):
 {email_status}
 
 ---
-*
-
+*Research completed using streamlined research system.*"""
+
-    except Exception as e:
-        return f"❌ Error during research: {str(e)}"
-
-async def run_enhanced_research_with_progress(query: str, email_address: str = "", send_email: bool = False):
-    """Run enhanced research with real-time step-by-step progress updates"""
-    if not query.strip():
-        yield "Please enter a research query."
-        return
-
-    # Import the new progress function
-    from research_manager import run_research_with_progress
-
-    try:
-        # Collect all progress updates
-        progress_updates = []
-        async for update in run_research_with_progress(
-            query=query,
-            email_address=email_address if send_email else None,
-            send_email=send_email
-        ):
-            progress_updates.append(update)
-            # Return current progress to update the UI
-            yield "\n\n".join(progress_updates)
 
     except Exception as e:
         import traceback
         error_details = traceback.format_exc()
         print(f"Error details: {error_details}")
-
+        return f"❌ Error during research: {str(e)}\n\nPlease check your API key and model selection."
 
-async def
-    """
-
-        yield "Please start by entering a research query first."
-        return
-
-    if not answers.strip():
-        yield "Please provide answers to the clarifying questions."
-        return
-
-    # Import the new progress function
-    from research_manager import run_research_with_progress
-
-    try:
-        # Parse answers (one per line)
-        answer_list = [line.strip() for line in answers.split('\n') if line.strip()]
-
-        # Format the query with clarifications
-        clarified_query = f"""Original query: {current_state['query']}
-
-Clarifications provided:
-{chr(10).join([f"{i+1}. {answer}" for i, answer in enumerate(answer_list)])}
 
-
-
-        yield f"🔍 **Starting Focused Research with Clarifications**\n\n**Original Query:** {current_state['query']}\n\n**Your Clarifications:**\n{chr(10).join([f'• {answer}' for answer in answer_list if answer])}\n\n---\n\n"
-
-        # Collect all progress updates
-        progress_updates = [f"🔍 **Starting Focused Research with Clarifications**\n\n**Original Query:** {current_state['query']}\n\n**Your Clarifications:**\n{chr(10).join([f'• {answer}' for answer in answer_list if answer])}\n\n---\n\n"]
-
-        async for update in run_research_with_progress(
-            query=clarified_query,
-            email_address=email_address if send_email else None,
-            send_email=send_email
-        ):
-            progress_updates.append(update)
-            # Return current progress to update the UI
-            yield "\n\n".join(progress_updates)
-
-    except Exception as e:
-        yield f"❌ Error during research: {str(e)}"
+async def run_enhanced_research_with_progress(query: str, api_key: str, model: str, email_address: str = "", send_email: bool = False):
+    """Run enhanced research with progress tracking"""
+    return await run_direct_research(query, api_key, model, email_address, send_email)
 
+async def run_clarified_research_with_progress(answers: str, current_state: dict, email_address: str, send_email: bool):
+    """Run research with clarification answers and progress tracking"""
+    return await handle_research_with_answers(answers, current_state, email_address, send_email)
+
 # Custom CSS for better readability and contrast
 custom_css = """
@@ -557,6 +536,26 @@ with gr.Blocks(theme=gr.themes.Default(primary_hue="blue"), css=custom_css) as u
 
     # Main Research Configuration Block
     with gr.Column():
+        # API Configuration Section
+        gr.Markdown("### 🔐 API Configuration")
+        with gr.Row():
+            with gr.Column(scale=2):
+                api_key_textbox = gr.Textbox(
+                    label="OpenAI API Key",
+                    placeholder="sk-...",
+                    type="password",
+                    lines=1,
+                    info="Your OpenAI API key (required to avoid rate limits)"
+                )
+            with gr.Column(scale=1):
+                model_textbox = gr.Dropdown(
+                    label="Model Selection",
+                    choices=AVAILABLE_MODELS,
+                    value="gpt-4o-mini",
+                    info="Choose your preferred OpenAI model"
+                )
+
+        gr.Markdown("### 🔍 Research Query")
         query_textbox = gr.Textbox(
             label="Research Query",
             placeholder="What would you like to research? (e.g., 'Latest developments in renewable energy')",
@@ -634,13 +633,13 @@ with gr.Blocks(theme=gr.themes.Default(primary_hue="blue"), css=custom_css) as u
     # Event handlers
     submit_button.click(
         fn=handle_query_submission,
-        inputs=[query_textbox, state],
+        inputs=[query_textbox, state, api_key_textbox, model_textbox],
         outputs=[output_area, clarification_row, research_button, state]
     )
 
     query_textbox.submit(
         fn=handle_query_submission,
-        inputs=[query_textbox, state],
+        inputs=[query_textbox, state, api_key_textbox, model_textbox],
        outputs=[output_area, clarification_row, research_button, state]
     )
 
@@ -658,13 +657,13 @@ with gr.Blocks(theme=gr.themes.Default(primary_hue="blue"), css=custom_css) as u
 
     enhanced_button.click(
         fn=run_enhanced_research_with_progress,
-        inputs=[query_textbox, email_textbox, send_email_checkbox],
+        inputs=[query_textbox, api_key_textbox, model_textbox, email_textbox, send_email_checkbox],
        outputs=[output_area]
     )
 
     direct_button.click(
         fn=run_legacy_research,
-        inputs=[query_textbox, email_textbox, send_email_checkbox],
+        inputs=[query_textbox, api_key_textbox, model_textbox, email_textbox, send_email_checkbox],
        outputs=[output_area]
     )
 
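The deleted generator-based helpers above followed a common streaming pattern: accumulate updates from an async generator and re-yield the joined transcript, so the UI textbox always shows the full history so far. A self-contained sketch of that accumulation, with the research steps faked by a short list instead of the repo's `run_research_with_progress`:

```python
import asyncio
from typing import AsyncGenerator

async def fake_research_steps() -> AsyncGenerator[str, None]:
    # Stand-in for the real research pipeline's progress generator
    for step in ["Planning...", "Searching...", "Writing report..."]:
        await asyncio.sleep(0)  # yield control, as real network I/O would
        yield step

async def stream_progress() -> AsyncGenerator[str, None]:
    updates = []
    async for update in fake_research_steps():
        updates.append(update)
        # Each yield replaces the UI text with the full history so far
        yield "\n\n".join(updates)

async def main() -> str:
    last = ""
    async for text in stream_progress():
        last = text
    return last

print(asyncio.run(main()))
```

Gradio renders each yielded string in place, so joining the accumulated updates (rather than yielding only the newest one) is what makes the progress log grow instead of flicker.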
email_agent.py
CHANGED
@@ -29,9 +29,14 @@ INSTRUCTIONS = """You are able to send a nicely formatted HTML email based on a
 You will be provided with a detailed report. You should use your tool to send one email, providing the
 report converted into clean, well presented HTML with an appropriate subject line."""
 
-
-
-
-
-
-
+def create_email_agent(model: str = "gpt-4o-mini"):
+    """Create an email agent with configurable model"""
+    return Agent(
+        name="Email agent",
+        instructions=INSTRUCTIONS,
+        tools=[send_email],
+        model=model,
+    )
+
+# Default email agent for backward compatibility
+email_agent = create_email_agent()
evaluator_agent.py
CHANGED
@@ -43,12 +43,17 @@ CRITICAL: A report without proper source citations should not score above 6, reg
 If needs_refinement is True, provide specific, actionable requirements for improvement.
 """
 
-
-
-
-
-
-
+def create_evaluator_agent(model: str = "gpt-4o-mini"):
+    """Create an evaluator agent with configurable model"""
+    return Agent(
+        name="Research Evaluator",
+        instructions=EVALUATION_INSTRUCTIONS,
+        model=model,
+        output_type=EvaluationResult,
+    )
+
+# Default evaluator agent for backward compatibility
+evaluator_agent = create_evaluator_agent()
 
 
 class OptimizedReport(BaseModel):
@@ -76,9 +81,14 @@ Focus on:
 Keep all factual content accurate - only improve presentation, structure, and completeness.
 """
 
-
-
-
-
-
-
+def create_optimizer_agent(model: str = "gpt-4o-mini"):
+    """Create an optimizer agent with configurable model"""
+    return Agent(
+        name="Research Optimizer",
+        instructions=OPTIMIZER_INSTRUCTIONS,
+        model=model,
+        output_type=OptimizedReport,
+    )
+
+# Default optimizer agent for backward compatibility
+optimizer_agent = create_optimizer_agent()
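The evaluator/optimizer pair above is what enables the score-gated refinement the README describes (reports below 7/10 get improved). A minimal sketch of that control loop, with the two agents replaced by stub functions so the flow is visible — the real scoring and rewriting are done by the LLM agents, and the stub heuristics here are purely illustrative:

```python
def evaluate(report: str) -> int:
    """Stub evaluator: the real agent returns a 1-10 quality score."""
    return 8 if "Sources:" in report else 5

def optimize(report: str) -> str:
    """Stub optimizer: the real agent rewrites the report from evaluator feedback."""
    return report + "\n\nSources: [1] example reference"

def refine_until_good(report: str, threshold: int = 7, max_rounds: int = 2) -> tuple[str, int]:
    """Re-optimize until the score clears the threshold or rounds run out."""
    score = evaluate(report)
    rounds = 0
    while score < threshold and rounds < max_rounds:
        report = optimize(report)
        score = evaluate(report)
        rounds += 1
    return report, score

final, score = refine_until_good("Initial draft with no citations")
print(score)  # → 8
```

Capping the number of rounds keeps cost bounded even when the evaluator never becomes satisfied.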
planner_agent.py
CHANGED
@@ -20,9 +20,14 @@ class WebSearchPlan(BaseModel):
     """A list of web searches to perform to best answer the query."""
 
 
-
-
-
-
-
-
+def create_planner_agent(model: str = "gpt-4o-mini"):
+    """Create a planner agent with configurable model"""
+    return Agent(
+        name="PlannerAgent",
+        instructions=INSTRUCTIONS,
+        model=model,
+        output_type=WebSearchPlan,
+    )
+
+# Default planner agent for backward compatibility
+planner_agent = create_planner_agent()
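In the repo, `WebSearchPlan` is a Pydantic model used as the planner's structured `output_type`, and `research_manager.py` imports `WebSearchItem` alongside it. A dependency-free sketch of that shape using dataclasses — the `reason`/`query` field names are inferred from the `perform_search(search_query, reason)` tool in the diff and are assumptions, not confirmed definitions:

```python
from dataclasses import dataclass, field

@dataclass
class WebSearchItem:
    reason: str  # why this search helps answer the query
    query: str   # the search term to run

@dataclass
class WebSearchPlan:
    """A list of web searches to perform to best answer the query."""
    searches: list[WebSearchItem] = field(default_factory=list)

plan = WebSearchPlan(searches=[
    WebSearchItem(reason="recent coverage", query="renewable energy 2024 breakthroughs"),
    WebSearchItem(reason="policy angle", query="renewable energy subsidies news"),
])
print(len(plan.searches))  # → 2
```

Using a structured `output_type` lets the orchestrator read `plan.searches` directly instead of parsing free-form model text.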
research_manager.py
CHANGED
@@ -1,16 +1,26 @@
|
|
1 |
from agents import Runner, trace, gen_trace_id, Agent, function_tool
|
2 |
-
from search_agent import search_agent
|
3 |
-
from planner_agent import planner_agent, WebSearchItem, WebSearchPlan
|
4 |
-
from writer_agent import writer_agent, ReportData
|
5 |
-
from email_agent import email_agent
|
6 |
-
from clarifier_agent import clarifier_agent, ClarificationData
|
7 |
-
from evaluator_agent import evaluator_agent, optimizer_agent, EvaluationResult, OptimizedReport
|
8 |
import asyncio
|
9 |
from typing import Dict, Any, AsyncGenerator
|
10 |
|
11 |
# Legacy ResearchManager class for backward compatibility
|
12 |
class ResearchManager:
|
13 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
14 |
async def run_with_clarification(self, query: str):
|
15 |
""" Run the clarification step and return clarifying questions """
|
16 |
trace_id = gen_trace_id()
|
@@ -18,8 +28,9 @@ class ResearchManager:
|
|
18 |
print(f"View trace: https://platform.openai.com/traces/trace?trace_id={trace_id}")
|
19 |
print("Generating clarifying questions...")
|
20 |
|
|
|
21 |
result = await Runner.run(
|
22 |
-
|
23 |
f"Query: {query}",
|
24 |
)
|
25 |
|
@@ -83,12 +94,21 @@ Please use these clarifications to focus and refine the research approach."""
|
|
83 |
yield "Research complete"
|
84 |
yield result.final_output
|
85 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
86 |
# Function tools for the manager agent to orchestrate the research process
|
87 |
@function_tool
|
88 |
async def plan_research(query: str) -> Dict[str, Any]:
|
89 |
""" Plan the research searches for a given query """
|
90 |
print("Planning searches...")
|
91 |
-
|
|
|
92 |
search_plan = result.final_output_as(WebSearchPlan)
|
93 |
print(f"Will perform {len(search_plan.searches)} searches")
|
94 |
return {
|
@@ -102,7 +122,8 @@ async def perform_search(search_query: str, reason: str) -> str:
|
|
102 |
print(f"Searching for: {search_query}")
|
103 |
input_text = f"Search term: {search_query}\nReason for searching: {reason}"
|
104 |
try:
|
105 |
-
|
|
|
106 |
return str(result.final_output)
|
107 |
except Exception as e:
|
108 |
print(f"Search failed for '{search_query}': {e}")
|
@@ -113,7 +134,8 @@ async def write_initial_report(query: str, search_results: str) -> Dict[str, Any
|
|
113 |
""" Generate an initial research report from search results """
|
114 |
print("Writing initial report...")
|
115 |
input_text = f"Original query: {query}\nSummarized search results: {search_results}"
|
116 |
-
|
|
|
117 |
report_data = result.final_output_as(ReportData)
|
118 |
print("Initial report completed")
|
119 |
return {
|
@@ -127,7 +149,8 @@ async def evaluate_report(query: str, report: str) -> Dict[str, Any]:
|
|
127 |
""" Evaluate the quality of a research report """
|
128 |
print("Evaluating report quality...")
|
129 |
input_text = f"Original Query: {query}\n\nReport to Evaluate:\n{report}"
|
130 |
-
|
|
|
131 |
evaluation = result.final_output_as(EvaluationResult)
|
132 |
print(f"Evaluation complete - Score: {evaluation.overall_score}/10, Needs refinement: {evaluation.needs_refinement}")
|
133 |
return {
|
@@ -153,7 +176,8 @@ Evaluation Feedback:

Please improve the report based on this feedback."""

-
    optimized = result.final_output_as(OptimizedReport)
    print("Report optimization complete")
    return optimized.improved_markdown_report
@@ -304,9 +328,17 @@ Be methodical and ensure each step completes successfully before proceeding to t
"""

# Function to create custom research agent with email options
-def create_custom_research_agent(email_address: str = None, send_email: bool = False):
    """Create a research manager agent with custom email settings"""

    if send_email and email_address:
        # Include email sending in tools
        tools = [
@@ -379,7 +411,7 @@ The user has chosen NOT to receive the report via email.
        name=f"Custom Research Manager Agent",
        instructions=instructions,
        tools=tools,
-       model=
        handoff_description="Orchestrate comprehensive research with quality assurance and optional email delivery"
    )

@@ -399,18 +431,28 @@ ResearchManagerAgent = Agent(
    handoff_description="Orchestrate comprehensive research with quality assurance and optimization"
)

-async def run_research_with_progress(query: str, email_address: str = None, send_email: bool = False) -> AsyncGenerator[str, None]:
    """Run research with step-by-step progress updates"""
    trace_id = gen_trace_id()

-   yield f"๐ **Starting Enhanced Research**\n\n**Query:** {query}\n\n**Trace ID:** {trace_id}\n\n---\n\n"

    try:
        with trace("Enhanced Research with Progress", trace_id=trace_id):
            # Step 1: Planning
            yield "๐ **Step 1/6:** Planning research strategy...\n\n*Analyzing your query and determining the best search approach*"

-
            search_plan = result.final_output_as(WebSearchPlan)

            yield f"✅ **Planning Complete** - Will perform {len(search_plan.searches)} targeted searches\n\n---\n\n"
@@ -424,7 +466,8 @@ async def run_research_with_progress(query: str, email_address: str = None, send

                try:
                    input_text = f"Search term: {search_item.query}\nReason for searching: {search_item.reason}"
-
                    search_results.append(str(result.final_output))
                    yield f"✅ **Search {i} Complete**\n\n"
                except Exception as e:
@@ -438,7 +481,8 @@ async def run_research_with_progress(query: str, email_address: str = None, send

            combined_results = "\n\n".join(search_results)
            input_text = f"Original query: {query}\nSummarized search results: {combined_results}"
-
            report_data = result.final_output_as(ReportData)

            yield "✅ **Initial Report Complete**\n\n---\n\n"
@@ -447,7 +491,8 @@ async def run_research_with_progress(query: str, email_address: str = None, send
            yield "๐ **Step 4/6:** Evaluating report quality...\n\n*AI quality assessment in progress*"

            input_text = f"Original Query: {query}\n\nReport to Evaluate:\n{report_data.markdown_report}"
-
            evaluation = result.final_output_as(EvaluationResult)

            yield f"✅ **Evaluation Complete** - Score: {evaluation.overall_score}/10\n\n"
@@ -469,7 +514,8 @@ Evaluation Feedback:

Please improve the report based on this feedback."""

-
                optimized = result.final_output_as(OptimizedReport)
                final_report = optimized.improved_markdown_report

from agents import Runner, trace, gen_trace_id, Agent, function_tool
+from search_agent import search_agent, create_search_agent
+from planner_agent import planner_agent, create_planner_agent, WebSearchItem, WebSearchPlan
+from writer_agent import writer_agent, create_writer_agent, ReportData
+from email_agent import email_agent, create_email_agent
+from clarifier_agent import clarifier_agent, create_clarifier_agent, ClarificationData
+from evaluator_agent import evaluator_agent, optimizer_agent, create_evaluator_agent, create_optimizer_agent, EvaluationResult, OptimizedReport
import asyncio
from typing import Dict, Any, AsyncGenerator

# Legacy ResearchManager class for backward compatibility
class ResearchManager:

+    def __init__(self, api_key: str = None, model: str = "gpt-4o-mini"):
+        """Initialize ResearchManager with optional API key and model"""
+        self.api_key = api_key
+        self.model = model
+
+        # Set the API key in environment if provided
+        if api_key:
+            import os
+            os.environ["OPENAI_API_KEY"] = api_key
+
    async def run_with_clarification(self, query: str):
        """ Run the clarification step and return clarifying questions """
        trace_id = gen_trace_id()
        print(f"View trace: https://platform.openai.com/traces/trace?trace_id={trace_id}")
        print("Generating clarifying questions...")

+        clarifier = create_clarifier_agent(self.model)
        result = await Runner.run(
+            clarifier,
            f"Query: {query}",
        )

        yield "Research complete"
        yield result.final_output

+# Global variable to store current model
+_current_model = "gpt-4o-mini"
+
+def set_current_model(model: str):
+    """Set the current model for function tools"""
+    global _current_model
+    _current_model = model
+
# Function tools for the manager agent to orchestrate the research process
@function_tool
async def plan_research(query: str) -> Dict[str, Any]:
    """ Plan the research searches for a given query """
    print("Planning searches...")
+    planner = create_planner_agent(_current_model)
+    result = await Runner.run(planner, f"Query: {query}")
    search_plan = result.final_output_as(WebSearchPlan)
    print(f"Will perform {len(search_plan.searches)} searches")
    return {
    print(f"Searching for: {search_query}")
    input_text = f"Search term: {search_query}\nReason for searching: {reason}"
    try:
+        searcher = create_search_agent(_current_model)
+        result = await Runner.run(searcher, input_text)
        return str(result.final_output)
    except Exception as e:
        print(f"Search failed for '{search_query}': {e}")
    """ Generate an initial research report from search results """
    print("Writing initial report...")
    input_text = f"Original query: {query}\nSummarized search results: {search_results}"
+    writer = create_writer_agent(_current_model)
+    result = await Runner.run(writer, input_text)
    report_data = result.final_output_as(ReportData)
    print("Initial report completed")
    return {
    """ Evaluate the quality of a research report """
    print("Evaluating report quality...")
    input_text = f"Original Query: {query}\n\nReport to Evaluate:\n{report}"
+    evaluator = create_evaluator_agent(_current_model)
+    result = await Runner.run(evaluator, input_text)
    evaluation = result.final_output_as(EvaluationResult)
    print(f"Evaluation complete - Score: {evaluation.overall_score}/10, Needs refinement: {evaluation.needs_refinement}")
    return {

Please improve the report based on this feedback."""

+    optimizer = create_optimizer_agent(_current_model)
+    result = await Runner.run(optimizer, input_text)
    optimized = result.final_output_as(OptimizedReport)
    print("Report optimization complete")
    return optimized.improved_markdown_report
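Together, `evaluate_report` and `optimize_report` implement an evaluate-then-refine gate: a report scoring below the threshold gets one improvement pass. A minimal sketch of that gate, with illustrative score values and a placeholder refinement marker:

```python
# Quality threshold, matching the "below 7/10" rule described in the README
THRESHOLD = 7

def maybe_refine(report: str, score: int) -> str:
    """Return the report unchanged if it meets the bar, else a refined version.

    The string suffix stands in for the real optimizer-agent rewrite.
    """
    if score < THRESHOLD:
        return report + "\n\n[refined after evaluation feedback]"
    return report

print(maybe_refine("Draft report", 5))  # gets the refinement pass
print(maybe_refine("Good report", 9))   # passes through unchanged
```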
"""

# Function to create custom research agent with email options
+def create_custom_research_agent(email_address: str = None, send_email: bool = False, api_key: str = None, model: str = "gpt-4o-mini"):
    """Create a research manager agent with custom email settings"""

+    # Set API key in environment if provided
+    if api_key:
+        import os
+        os.environ["OPENAI_API_KEY"] = api_key
+
+    # Set the current model for all function tools
+    set_current_model(model)
+
    if send_email and email_address:
        # Include email sending in tools
        tools = [
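The conditional tool wiring in `create_custom_research_agent` boils down to appending the email tool only when the caller both supplies an address and opts in. The tool names below are illustrative stand-ins for the real `@function_tool` objects:

```python
def build_tools(email_address=None, send_email=False):
    """Assemble the tool list, adding email delivery only when requested."""
    tools = [
        "plan_research",
        "perform_search",
        "write_initial_report",
        "evaluate_report",
        "optimize_report",
    ]
    if send_email and email_address:
        tools.append("send_email_report")
    return tools

print(build_tools())                                     # core tools only
print(build_tools("user@example.com", send_email=True))  # email tool appended
```

Requiring both flags means a stray `send_email=True` with no address cannot produce a misconfigured email tool.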
        name=f"Custom Research Manager Agent",
        instructions=instructions,
        tools=tools,
+        model=model,
        handoff_description="Orchestrate comprehensive research with quality assurance and optional email delivery"
    )

    handoff_description="Orchestrate comprehensive research with quality assurance and optimization"
)

+async def run_research_with_progress(query: str, email_address: str = None, send_email: bool = False, api_key: str = None, model: str = "gpt-4o-mini") -> AsyncGenerator[str, None]:
    """Run research with step-by-step progress updates"""
+
+    # Set API key if provided
+    if api_key:
+        import os
+        os.environ["OPENAI_API_KEY"] = api_key
+
+    # Set current model for function tools
+    set_current_model(model)
+
    trace_id = gen_trace_id()

+    yield f"๐ **Starting Enhanced Research**\n\n**Query:** {query}\n\n**Trace ID:** {trace_id}\n\n**Model:** {model}\n\n---\n\n"

    try:
        with trace("Enhanced Research with Progress", trace_id=trace_id):
            # Step 1: Planning
            yield "๐ **Step 1/6:** Planning research strategy...\n\n*Analyzing your query and determining the best search approach*"

+            planner = create_planner_agent(model)
+            result = await Runner.run(planner, f"Query: {query}")
            search_plan = result.final_output_as(WebSearchPlan)

            yield f"✅ **Planning Complete** - Will perform {len(search_plan.searches)} targeted searches\n\n---\n\n"
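The progress-streaming shape of `run_research_with_progress` is a plain async generator: a status string is yielded after each pipeline stage, so a UI such as Gradio can render updates as they arrive. This sketch replaces the real `Runner.run` awaits with `asyncio.sleep(0)` and uses made-up stage labels:

```python
import asyncio
from typing import AsyncGenerator

async def run_with_progress(query: str) -> AsyncGenerator[str, None]:
    """Yield one status line per stage, mimicking the progress updates above."""
    yield f"Starting research for: {query}"
    for step, label in enumerate(["Planning", "Searching", "Writing"], start=1):
        await asyncio.sleep(0)  # stand-in for the real `await Runner.run(...)`
        yield f"Step {step}/3: {label} complete"

async def main():
    updates = []
    async for update in run_with_progress("test query"):
        updates.append(update)
        print(update)
    return updates

updates = asyncio.run(main())
```

Yielding markdown fragments, as the real function does, lets the caller simply concatenate or replace the displayed text after each update.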

                try:
                    input_text = f"Search term: {search_item.query}\nReason for searching: {search_item.reason}"
+                    searcher = create_search_agent(model)
+                    result = await Runner.run(searcher, input_text)
                    search_results.append(str(result.final_output))
                    yield f"✅ **Search {i} Complete**\n\n"
                except Exception as e:

            combined_results = "\n\n".join(search_results)
            input_text = f"Original query: {query}\nSummarized search results: {combined_results}"
+            writer = create_writer_agent(model)
+            result = await Runner.run(writer, input_text)
            report_data = result.final_output_as(ReportData)

            yield "✅ **Initial Report Complete**\n\n---\n\n"
            yield "๐ **Step 4/6:** Evaluating report quality...\n\n*AI quality assessment in progress*"

            input_text = f"Original Query: {query}\n\nReport to Evaluate:\n{report_data.markdown_report}"
+            evaluator = create_evaluator_agent(model)
+            result = await Runner.run(evaluator, input_text)
            evaluation = result.final_output_as(EvaluationResult)

            yield f"✅ **Evaluation Complete** - Score: {evaluation.overall_score}/10\n\n"

Please improve the report based on this feedback."""

+                optimizer = create_optimizer_agent(model)
+                result = await Runner.run(optimizer, input_text)
                optimized = result.final_output_as(OptimizedReport)
                final_report = optimized.improved_markdown_report

search_agent.py
CHANGED
@@ -14,10 +14,15 @@ INSTRUCTIONS = (
    "Do not include any additional commentary other than the summary itself with preserved source links."
)

-
-
-
-
-
-
-
+def create_search_agent(model: str = "gpt-4o-mini"):
+    """Create a search agent with configurable model"""
+    return Agent(
+        name="Search agent",
+        instructions=INSTRUCTIONS,
+        tools=[WebSearchTool(search_context_size="low")],
+        model=model,
+        model_settings=ModelSettings(tool_choice="required"),
+    )
+
+# Default search agent for backward compatibility
+search_agent = create_search_agent()
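The pattern in `search_agent.py` — a factory parameterized by model name, plus a module-level default instance kept for backward compatibility — in miniature, with a plain dict standing in for the real `Agent(...)` construction:

```python
def create_search_agent(model: str = "gpt-4o-mini") -> dict:
    """Build a search-agent configuration for the given model."""
    return {
        "name": "Search agent",
        "model": model,
        "tools": ["web_search"],       # stand-in for WebSearchTool(...)
        "tool_choice": "required",     # stand-in for ModelSettings(...)
    }

# Default instance, mirroring `search_agent = create_search_agent()` above
search_agent = create_search_agent()

print(search_agent["model"])                   # gpt-4o-mini
print(create_search_agent("gpt-4o")["model"])  # gpt-4o
```

Existing imports of the module-level `search_agent` keep working unchanged, while new code can request a differently configured instance per call.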
writer_agent.py
CHANGED
@@ -31,9 +31,14 @@ class ReportData(BaseModel):
    """Suggested topics to research further"""


-
-
-
-
-
-
+def create_writer_agent(model: str = "gpt-4o-mini"):
+    """Create a writer agent with configurable model"""
+    return Agent(
+        name="WriterAgent",
+        instructions=INSTRUCTIONS,
+        model=model,
+        output_type=ReportData,
+    )
+
+# Default writer agent for backward compatibility
+writer_agent = create_writer_agent()
|