suh4s commited on
Commit
4106305
·
1 Parent(s): 911769b

InsightFlow AI first spaces commit

Browse files
.env.example ADDED
@@ -0,0 +1,8 @@
 
 
 
 
 
 
 
 
 
1
+ # InsightFlow AI Environment Variables
2
+ # OpenAI API key
3
+ # Get from: https://platform.openai.com/api-keys
4
+ OPENAI_API_KEY=your_openai_api_key_here
5
+
6
+ # Tavily API key
7
+ # Get from: https://tavily.com/
8
+ TAVILY_API_KEY=your_tavily_api_key_here
Dockerfile ADDED
@@ -0,0 +1,66 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ FROM python:3.11-slim
2
+
3
+ # Set environment variables
4
+ ENV PYTHONDONTWRITEBYTECODE=1 \
5
+ PYTHONUNBUFFERED=1 \
6
+ PYTHONFAULTHANDLER=1 \
7
+ PATH="/home/user/.local/bin:$PATH" \
8
+ HOME=/home/user \
9
+ PYTHONPATH="/home/user/app:$PYTHONPATH"
10
+
11
+ # Add non-root user
12
+ RUN useradd -m -u 1000 user
13
+
14
+ # Install system dependencies
15
+ RUN apt-get update && apt-get install -y --no-install-recommends \
16
+ build-essential \
17
+ curl \
18
+ && rm -rf /var/lib/apt/lists/*
19
+
20
+ # Set working directory
21
+ WORKDIR $HOME/app
22
+
23
+ # Copy requirements from pyproject.toml
24
+ COPY --chown=user pyproject.toml .
25
+ COPY --chown=user setup.cfg .
26
+
27
+ # Install pip and dependencies
28
+ RUN pip install --upgrade pip setuptools wheel
29
+ RUN pip install -e .
30
+
31
+ # Copy application files
32
+ COPY --chown=user app.py .
33
+ COPY --chown=user insight_state.py .
34
+ COPY --chown=user chainlit.md .
35
+ COPY --chown=user README.md .
36
+ COPY --chown=user utils ./utils
37
+ COPY --chown=user persona_configs ./persona_configs
38
+ COPY --chown=user download_data.py .
39
+ COPY --chown=user .env.example .
40
+
41
+ # Create necessary directories
42
+ RUN mkdir -p data data_sources exports public
43
+ RUN mkdir -p exports && touch exports/.gitkeep
44
+ RUN mkdir -p data && touch data/.gitkeep
45
+
46
+ # Set permissions
47
+ RUN chown -R user:user $HOME
48
+
49
+ # Switch to non-root user
50
+ USER user
51
+
52
+ # Run data download script to initialize data sources
53
+ RUN python download_data.py
54
+
55
+ # Install the dependencies
56
+ # RUN uv sync --frozen
57
+ RUN uv sync
58
+
59
+ # Create config for HF Spaces
60
+ RUN echo "sdk_version: 3\ntitle: InsightFlow AI\ndescription: Multi-perspective research assistant with visualization capabilities\napp_port: 7860" > $HOME/app/.hf/settings.yaml
61
+
62
+ # Expose Hugging Face Spaces port
63
+ EXPOSE 7860
64
+
65
+ # Run the app
66
+ CMD ["uv", "run", "chainlit", "run", "app.py", "--host", "0.0.0.0", "--port", "7860"]
README.md CHANGED
@@ -1,2 +1,97 @@
1
- # AIE6_Cert_Challenge_Suhas
2
- AIE6_Cert_Challenge_Suhas
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ title: InsightFlow AI
3
+ emoji: 🧠
4
+ colorFrom: blue
5
+ colorTo: indigo
6
+ sdk: docker
7
+ pinned: true
8
+ app_port: 7860
9
+ short_description: Multi-perspective research assistant with visualization capabilities
10
+ ---
11
+
12
+ # InsightFlow AI: Multi-Perspective Research Assistant
13
+
14
+ InsightFlow AI is an advanced research assistant that analyzes topics from multiple perspectives, providing a comprehensive and nuanced understanding of complex subjects.
15
+
16
+ ![InsightFlow AI](https://huggingface.co/datasets/suhas/InsightFlow-AI-demo/resolve/main/insightflow_banner.png)
17
+
18
+ ## Features
19
+
20
+ ### Multiple Perspective Analysis
21
+ - **Analytical**: Logical examination with methodical connections and patterns
22
+ - **Scientific**: Evidence-based reasoning grounded in empirical data
23
+ - **Philosophical**: Holistic exploration of deeper meaning and implications
24
+ - **Factual**: Straightforward presentation of verified information
25
+ - **Metaphorical**: Creative explanations through vivid analogies
26
+ - **Futuristic**: Forward-looking exploration of potential developments
27
+
28
+ ### Personality Perspectives
29
+ - **Sherlock Holmes**: Deductive reasoning with detailed observation
30
+ - **Richard Feynman**: First-principles physics with clear explanations
31
+ - **Hannah Fry**: Math-meets-society storytelling with practical examples
32
+
33
+ ### Visualization Capabilities
34
+ - **Concept Maps**: Automatically generated Mermaid diagrams showing relationships
35
+ - **Visual Notes**: DALL-E generated hand-drawn style visualizations of key insights
36
+ - **Visual-Only Mode**: Option to focus on visual representations for faster comprehension
37
+
38
+ ### Export Options
39
+ - **Markdown Export**: Save analyses as formatted markdown with embedded visualizations
40
+ - **PDF Export**: Generate professionally formatted PDF documents
41
+
42
+ ## How to Use
43
+
44
+ 1. **Select Personas**: Use the `/add [persona_name]` command to build your research team
45
+ 2. **Ask Your Question**: Type any research question or topic to analyze
46
+ 3. **Review Insights**: Explore the synthesized view and individual perspectives
47
+ 4. **Export Results**: Use `/export_md` or `/export_pdf` to save your analysis
48
+
49
+ ## Commands
50
+
51
+ ```
52
+ # Persona Management
53
+ /add [persona_name] - Add a perspective to your research team
54
+ /remove [persona_name] - Remove a perspective from your team
55
+ /list - Show all available perspectives
56
+ /team - Show your current team and settings
57
+
58
+ # Visualization Options
59
+ /visualization on|off - Toggle visualizations (Mermaid & DALL-E)
60
+ /visual_only on|off - Show only visualizations without text
61
+
62
+ # Export Options
63
+ /export_md - Export to markdown file
64
+ /export_pdf - Export to PDF file
65
+
66
+ # Mode Options
67
+ /direct on|off - Toggle direct LLM mode (bypasses multi-persona)
68
+ /perspectives on|off - Toggle showing individual perspectives
69
+ ```
70
+
71
+ ## Example Topics
72
+
73
+ - Historical events from multiple perspectives
74
+ - Scientific concepts with philosophical implications
75
+ - Societal issues that benefit from diverse viewpoints
76
+ - Future trends analyzed from different angles
77
+ - Complex problems requiring multi-faceted analysis
78
+
79
+ ## Technical Details
80
+
81
+ Built with Python using:
82
+ - LangGraph for orchestration
83
+ - OpenAI APIs for reasoning and visualization
84
+ - Chainlit for the user interface
85
+ - Custom persona system for perspective management
86
+
87
+ ## Try These Examples
88
+
89
+ - "The impact of artificial intelligence on society"
90
+ - "Climate change adaptation strategies"
91
+ - "Consciousness and its relationship to the brain"
92
+ - "The future of work in the next 20 years"
93
+ - "Ancient Greek philosophy and its relevance today"
94
+
95
+ ## Feedback and Support
96
+
97
+ For questions, feedback, or support, please open an issue on the [GitHub repository](https://github.com/suhas/InsightFlow-AI) or comment on this Space.
app.py ADDED
@@ -0,0 +1,1109 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ from typing import TypedDict, List, Dict, Optional, Any
2
+ from typing_extensions import List, TypedDict
3
+
4
+ from dotenv import load_dotenv
5
+ import chainlit as cl
6
+ import os
7
+ import asyncio
8
+ import base64
9
+ import requests
10
+ import time
11
+ import datetime
12
+ import random
13
+ import string
14
+ import fpdf
15
+ from pathlib import Path
16
+
17
+ # Re-enable the Tavily search tool
18
+ from langchain_community.tools.tavily_search import TavilySearchResults
19
+ from langchain_core.documents import Document
20
+ from langchain_core.messages import BaseMessage, HumanMessage, SystemMessage, AIMessage
21
+ from langchain_openai import ChatOpenAI
22
+ # from langchain_core.language_models import FakeListLLM # Add FakeListLLM for testing
23
+ from langgraph.graph import StateGraph, END
24
+ from openai import OpenAI, AsyncOpenAI
25
+
26
+ # Import InsightFlow components
27
+ from insight_state import InsightFlowState
28
+ from utils.persona import PersonaFactory, PersonaReasoning
29
+
30
+ # Load environment variables
31
+ load_dotenv()
32
+
33
+ # Initialize OpenAI client for DALL-E
34
+ openai_client = AsyncOpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
35
+
36
+ # --- INITIALIZE CORE COMPONENTS ---
37
+
38
+ # Re-enable search tool initialization
39
+ tavily_tool = TavilySearchResults(max_results=3)
40
+
41
+ # Initialize LLMs with optimized settings for speed
42
+ llm_planner = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.1, request_timeout=20)
43
+ llm_analytical = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.2, request_timeout=20)
44
+ llm_scientific = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.3, request_timeout=20)
45
+ llm_philosophical = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.4, request_timeout=20)
46
+ llm_factual = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.3, request_timeout=20)
47
+ llm_metaphorical = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.6, request_timeout=20)
48
+ llm_futuristic = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.5, request_timeout=20)
49
+ llm_synthesizer = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.2, request_timeout=20)
50
+
51
+ # Direct mode LLM with slightly higher quality
52
+ llm_direct = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.3, request_timeout=25)
53
+
54
+ # --- SYSTEM PROMPTS ---
55
+
56
+ PLANNER_SYSPROMPT = """You are an expert planner agent that coordinates research across multiple personas.
57
+ Given a user query, your task is to create a research plan with specific sub-tasks for each selected persona.
58
+ Break down complex queries into specific tasks that leverage each persona's unique perspective.
59
+ """
60
+
61
+ SYNTHESIZER_SYSPROMPT = """You are a synthesis expert that combines multiple perspectives into a coherent response.
62
+ Given different persona perspectives on the same query, create a unified response that:
63
+ 1. Highlights unique insights from each perspective
64
+ 2. Notes areas of agreement and divergence
65
+ 3. Organizes information logically for the user
66
+ Present the final response in a cohesive format that integrates all perspectives.
67
+ """
68
+
69
+ DIRECT_SYSPROMPT = """You are a highly intelligent AI assistant that provides clear, direct, and helpful answers.
70
+ Your responses should be accurate, concise, and well-reasoned.
71
+ """
72
+
73
+ # --- LANGGRAPH NODES FOR INSIGHTFLOW AI ---
74
+
75
+ async def run_planner_agent(state: InsightFlowState) -> InsightFlowState:
76
+ """Plan the research approach for multiple personas"""
77
+ query = state["query"]
78
+ selected_personas = state["selected_personas"]
79
+
80
+ # For the MVP implementation, we'll use a simplified planning approach
81
+ # that just assigns the same query to each selected persona
82
+ # In a full implementation, the planner would create custom tasks for each persona
83
+
84
+ print(f"Planning research for query: {query}")
85
+ print(f"Selected personas: {selected_personas}")
86
+
87
+ state["current_step_name"] = "execute_persona_tasks"
88
+ return state
89
+
90
+ async def execute_persona_tasks(state: InsightFlowState) -> InsightFlowState:
91
+ """Execute tasks for each selected persona"""
92
+ query = state["query"]
93
+ selected_personas = state["selected_personas"]
94
+ persona_factory = cl.user_session.get("persona_factory")
95
+
96
+ # Initialize responses dict if not exists
97
+ if "persona_responses" not in state:
98
+ state["persona_responses"] = {}
99
+
100
+ print(f"Executing persona tasks for {len(selected_personas)} personas")
101
+
102
+ # Get progress message if it exists
103
+ progress_msg = cl.user_session.get("progress_msg")
104
+ total_personas = len(selected_personas)
105
+
106
+ # Process each persona with timeout safety
107
+ # Using asyncio.gather to run multiple persona tasks in parallel for speed
108
+ persona_tasks = []
109
+
110
+ # First, create all personas and tasks
111
+ for persona_id in selected_personas:
112
+ persona = persona_factory.create_persona(persona_id)
113
+ if persona:
114
+ # Add progress message for user feedback
115
+ await cl.Message(content=f"Generating insights from {persona_id} perspective...").send()
116
+ # Create task to run in parallel
117
+ task = generate_perspective_with_timeout(persona, query)
118
+ persona_tasks.append((persona_id, task))
119
+
120
+ # Run all perspective generations in parallel
121
+ completed = 0
122
+ for persona_id, task in persona_tasks:
123
+ try:
124
+ # Update dynamic progress if progress message exists
125
+ if progress_msg:
126
+ percent_done = 40 + int((completed / total_personas) * 40)
127
+ await update_message(
128
+ progress_msg,
129
+ f"⏳ Generating perspective from {persona_id} ({percent_done}%)..."
130
+ )
131
+
132
+ response = await task
133
+ state["persona_responses"][persona_id] = response
134
+ print(f"Perspective generated for {persona_id}")
135
+
136
+ # Increment completed count
137
+ completed += 1
138
+
139
+ except Exception as e:
140
+ print(f"Error getting {persona_id} perspective: {e}")
141
+ state["persona_responses"][persona_id] = f"Could not generate perspective: {str(e)}"
142
+
143
+ # Still increment completed count
144
+ completed += 1
145
+
146
+ state["current_step_name"] = "synthesize_responses"
147
+ return state
148
+
149
+ async def generate_perspective_with_timeout(persona, query):
150
+ """Generate a perspective with timeout handling"""
151
+ try:
152
+ # Set a timeout for each perspective generation
153
+ response = await asyncio.wait_for(
154
+ cl.make_async(persona.generate_perspective)(query),
155
+ timeout=30 # 30-second timeout (reduced for speed)
156
+ )
157
+ return response
158
+ except asyncio.TimeoutError:
159
+ # Handle timeout by providing a simplified response
160
+ return f"The perspective generation timed out. This may be due to high API traffic or complexity of the query."
161
+ except Exception as e:
162
+ # Handle other errors
163
+ return f"Error generating perspective: {str(e)}"
164
+
165
+ async def synthesize_responses(state: InsightFlowState) -> InsightFlowState:
166
+ """Combine perspectives from different personas"""
167
+ query = state["query"]
168
+ persona_responses = state["persona_responses"]
169
+
170
+ if not persona_responses:
171
+ state["synthesized_response"] = "No persona perspectives were generated."
172
+ state["current_step_name"] = "present_results"
173
+ return state
174
+
175
+ print(f"Synthesizing responses from {len(persona_responses)} personas")
176
+
177
+ # Add progress message for user feedback
178
+ await cl.Message(content="Synthesizing insights from all perspectives...").send()
179
+
180
+ # Prepare input for synthesizer
181
+ perspectives_text = ""
182
+ for persona_id, response in persona_responses.items():
183
+ perspectives_text += f"\n\n{persona_id.capitalize()} Perspective:\n{response}"
184
+
185
+ # Use LLM to synthesize with timeout
186
+ messages = [
187
+ SystemMessage(content=SYNTHESIZER_SYSPROMPT),
188
+ HumanMessage(content=f"Query: {query}\n\nPerspectives:{perspectives_text}\n\nPlease synthesize these perspectives into a coherent response.")
189
+ ]
190
+
191
+ try:
192
+ # Set a timeout for the synthesis
193
+ synthesizer_response = await asyncio.wait_for(
194
+ llm_synthesizer.ainvoke(messages),
195
+ timeout=30 # 30-second timeout (reduced for speed)
196
+ )
197
+ state["synthesized_response"] = synthesizer_response.content
198
+ print("Synthesis complete")
199
+ except asyncio.TimeoutError:
200
+ # Handle timeout for synthesis
201
+ state["synthesized_response"] = "The synthesis of perspectives timed out. Here are the individual perspectives instead."
202
+ print("Synthesis timed out")
203
+ except Exception as e:
204
+ print(f"Error synthesizing perspectives: {e}")
205
+ state["synthesized_response"] = f"Error synthesizing perspectives: {str(e)}"
206
+
207
+ state["current_step_name"] = "generate_visualization"
208
+ return state
209
+
210
+ async def generate_dalle_image(prompt: str) -> Optional[str]:
211
+ """Generate a DALL-E image and return the URL"""
212
+ try:
213
+ # Create a detailed prompt for hand-drawn style visualization
214
+ full_prompt = f"Create a hand-drawn style visual note or sketch that represents: {prompt}. Make it look like a thoughtful drawing with annotations and key concepts highlighted. Include multiple perspectives connected together in a coherent visualization. Style: thoughtful hand-drawn sketch, notebook style with labels."
215
+
216
+ # Call DALL-E to generate the image
217
+ response = await openai_client.images.generate(
218
+ model="dall-e-3",
219
+ prompt=full_prompt,
220
+ size="1024x1024",
221
+ quality="standard",
222
+ n=1
223
+ )
224
+
225
+ # Return the URL of the generated image
226
+ return response.data[0].url
227
+ except Exception as e:
228
+ print(f"DALL-E image generation failed: {e}")
229
+ return None
230
+
231
+ async def generate_visualization(state: InsightFlowState) -> InsightFlowState:
232
+ """Generate a Mermaid diagram from the multiple perspectives"""
233
+ # Get progress message if available and update it
234
+ progress_msg = cl.user_session.get("progress_msg")
235
+ if progress_msg:
236
+ await update_message(progress_msg, "⏳ Generating visual representation (90%)...")
237
+
238
+ # Skip if no synthesized response or no personas
239
+ if not state.get("synthesized_response") or not state.get("persona_responses"):
240
+ state["current_step_name"] = "present_results"
241
+ return state
242
+
243
+ # Get visualization settings
244
+ show_visualization = cl.user_session.get("show_visualization", True)
245
+ visual_only_mode = cl.user_session.get("visual_only_mode", False)
246
+
247
+ # Determine if we should generate visualizations (either mode is on)
248
+ should_visualize = show_visualization or visual_only_mode
249
+
250
+ # Generate mermaid diagram if visualizations are enabled
251
+ if should_visualize:
252
+ try:
253
+ # Create the absolute simplest Mermaid diagram possible
254
+ query = state.get("query", "Query")
255
+ query_short = query[:20] + "..." if len(query) > 20 else query
256
+
257
+ # Generate the most basic diagram structure
258
+ mermaid_text = f"""graph TD
259
+ Q["{query_short}"]
260
+ S["Synthesized View"]"""
261
+
262
+ # Add each persona with a simple connection
263
+ for i, persona in enumerate(state.get("persona_responses", {}).keys()):
264
+ persona_short = persona.capitalize()
265
+ node_id = f"P{i+1}"
266
+ mermaid_text += f"""
267
+ {node_id}["{persona_short}"]
268
+ Q --> {node_id}
269
+ {node_id} --> S"""
270
+
271
+ # Store the simplified mermaid code
272
+ state["visualization_code"] = mermaid_text
273
+ print("Visualization generation complete with simplified diagram")
274
+
275
+ except Exception as e:
276
+ print(f"Error generating visualization: {e}")
277
+ state["visualization_code"] = None
278
+
279
+ # Generate DALL-E image if visualizations are enabled
280
+ try:
281
+ # Update progress message
282
+ if progress_msg:
283
+ await update_message(progress_msg, "⏳ Generating hand-drawn visualization (92%)...")
284
+
285
+ # Create a prompt from the synthesized response
286
+ image_prompt = state.get("synthesized_response", "")
287
+ if len(image_prompt) > 500:
288
+ image_prompt = image_prompt[:500] # Limit prompt length
289
+
290
+ # Add the query for context
291
+ image_prompt = f"Query: {state.get('query', '')}\n\nSynthesis: {image_prompt}"
292
+
293
+ # Generate the image
294
+ image_url = await generate_dalle_image(image_prompt)
295
+ state["visualization_image_url"] = image_url
296
+ print("DALL-E visualization generated successfully")
297
+ except Exception as e:
298
+ print(f"Error generating DALL-E image: {e}")
299
+ state["visualization_image_url"] = None
300
+
301
+ state["current_step_name"] = "present_results"
302
+ return state
303
+
304
+ async def present_results(state: InsightFlowState) -> InsightFlowState:
305
+ """Present the final results to the user"""
306
+ synthesized_response = state.get("synthesized_response", "No synthesized response available.")
307
+
308
+ print("Presenting results to user")
309
+
310
+ # Ensure progress is at 100% before showing results
311
+ progress_msg = cl.user_session.get("progress_msg")
312
+ if progress_msg:
313
+ await update_message(progress_msg, "✅ Process complete (100%)")
314
+
315
+ # Get visualization settings
316
+ visual_only_mode = cl.user_session.get("visual_only_mode", False)
317
+ show_visualization = cl.user_session.get("show_visualization", True)
318
+
319
+ # Check if either visualization mode is enabled
320
+ visualization_enabled = visual_only_mode or show_visualization
321
+
322
+ # Determine panel mode
323
+ panel_mode = "Research Assistant" if state["panel_type"] == "research" else "Multi-Persona Discussion"
324
+
325
+ # Check if we have visualizations available
326
+ has_mermaid = state.get("visualization_code") is not None
327
+ has_dalle_image = state.get("visualization_image_url") is not None
328
+ has_any_visualization = has_mermaid or has_dalle_image
329
+
330
+ # Send text response if we're not in visual-only mode OR if no visualizations are available
331
+ if not visual_only_mode or (visual_only_mode and not has_any_visualization):
332
+ panel_indicator = f"**{panel_mode} Insights:**\n\n"
333
+ # In visual-only mode with no visualizations, add an explanation
334
+ if visual_only_mode and not has_any_visualization:
335
+ panel_indicator = f"**{panel_mode} Insights (No visualizations available):**\n\n"
336
+ await cl.Message(content=panel_indicator + synthesized_response).send()
337
+
338
+ # Display DALL-E generated image if available and visualizations are enabled
339
+ if has_dalle_image and visualization_enabled:
340
+ try:
341
+ # Add a title for the image
342
+ if visual_only_mode:
343
+ image_title = f"**Hand-drawn Visualization of {panel_mode} Insights:**"
344
+ else:
345
+ image_title = "**Hand-drawn Visualization:**"
346
+
347
+ # Send the title
348
+ await cl.Message(content=image_title).send()
349
+
350
+ # Send the image URL as markdown
351
+ image_url = state["visualization_image_url"]
352
+ image_markdown = f"![DALL-E Visualization]({image_url})"
353
+ await cl.Message(content=image_markdown).send()
354
+
355
+ except Exception as e:
356
+ print(f"Error displaying DALL-E image: {e}")
357
+ # If in visual-only mode and image fails but we have no other visualization or text shown
358
+ if visual_only_mode and not has_mermaid and state.get("text_fallback_shown", False) is not True:
359
+ panel_indicator = f"**{panel_mode} Insights (Image generation failed):**\n\n"
360
+ await cl.Message(content=panel_indicator + synthesized_response).send()
361
+ state["text_fallback_shown"] = True
362
+
363
+ # Display Mermaid diagram if available and visualizations are enabled
364
+ if has_mermaid and visualization_enabled:
365
+ try:
366
+ # Add a brief summary in visual-only mode
367
+ if visual_only_mode:
368
+ diagram_title = f"**Concept Map of {panel_mode} Insights:**"
369
+ else:
370
+ diagram_title = "**Concept Map:**"
371
+
372
+ # First send a title message
373
+ await cl.Message(content=diagram_title).send()
374
+
375
+ # Try to render the mermaid diagram
376
+ try:
377
+ # Ensure the diagram is extremely simple and valid
378
+ mermaid_code = state['visualization_code']
379
+
380
+ # Fallback to a guaranteed working diagram if rendering fails
381
+ if not mermaid_code or len(mermaid_code) < 10:
382
+ mermaid_code = """graph TD
383
+ A[Query] --> B[Analysis]
384
+ B --> C[Result]"""
385
+
386
+ # Create the mermaid block with proper syntax
387
+ # Each line needs to be separate without extra indentation
388
+ mermaid_block = "```mermaid\n"
389
+ for line in mermaid_code.split('\n'):
390
+ mermaid_block += line.strip() + "\n"
391
+ mermaid_block += "```"
392
+
393
+ # Send the diagram as its own message
394
+ await cl.Message(content=mermaid_block).send()
395
+ except Exception as diagram_err:
396
+ print(f"Error rendering diagram: {diagram_err}")
397
+ # Try an ultra-simple fallback diagram
398
+ ultra_simple = """```mermaid
399
+ graph TD
400
+ A[Start] --> B[End]
401
+ ```"""
402
+ await cl.Message(content=ultra_simple).send()
403
+
404
+ # Send the footer only if we have visualizations
405
+ if has_any_visualization:
406
+ await cl.Message(content="_Visualizations represent the key relationships between concepts from different perspectives._").send()
407
+
408
+ except Exception as e:
409
+ print(f"Error displaying visualization: {e}")
410
+ # If in visual-only mode and visualization fails but no image shown yet and no text shown yet
411
+ if visual_only_mode and not has_dalle_image and state.get("text_fallback_shown", False) is not True:
412
+ panel_indicator = f"**{panel_mode} Insights (Visualization failed):**\n\n"
413
+ await cl.Message(content=panel_indicator + synthesized_response).send()
414
+ # Mark that we showed the fallback text to avoid duplicates
415
+ state["text_fallback_shown"] = True
416
+
417
+ # Check if user wants to see individual perspectives (not in visual-only mode)
418
+ if cl.user_session.get("show_perspectives", True) and not visual_only_mode:
419
+ # Show individual perspectives as separate messages instead of expandable elements
420
+ for persona_id, response in state["persona_responses"].items():
421
+ persona_name = persona_id.capitalize()
422
+
423
+ # Get proper display name from config if available
424
+ persona_factory = cl.user_session.get("persona_factory")
425
+ if persona_factory:
426
+ config = persona_factory.get_config(persona_id)
427
+ if config and "name" in config:
428
+ persona_name = config["name"]
429
+
430
+ # Just send the perspective as a message with a header
431
+ perspective_message = f"**{persona_name}'s Perspective:**\n\n{response}"
432
+ await cl.Message(content=perspective_message).send()
433
+
434
+ state["current_step_name"] = "END"
435
+ return state
436
+
437
+ # --- LANGGRAPH SETUP FOR INSIGHTFLOW AI ---
438
+ # Now define the graph with the functions we've defined above
439
+ insight_graph_builder = StateGraph(InsightFlowState)
440
+
441
+ # Add all nodes
442
+ insight_graph_builder.add_node("planner_agent", run_planner_agent)
443
+ insight_graph_builder.add_node("execute_persona_tasks", execute_persona_tasks)
444
+ insight_graph_builder.add_node("synthesize_responses", synthesize_responses)
445
+ insight_graph_builder.add_node("generate_visualization", generate_visualization)
446
+ insight_graph_builder.add_node("present_results", present_results)
447
+
448
+ # Add edges
449
+ insight_graph_builder.add_edge("planner_agent", "execute_persona_tasks")
450
+ insight_graph_builder.add_edge("execute_persona_tasks", "synthesize_responses")
451
+ insight_graph_builder.add_edge("synthesize_responses", "generate_visualization")
452
+ insight_graph_builder.add_edge("generate_visualization", "present_results")
453
+ insight_graph_builder.add_edge("present_results", END)
454
+
455
+ # Set entry point
456
+ insight_graph_builder.set_entry_point("planner_agent")
457
+
458
+ # Compile the graph
459
+ insight_flow_graph = insight_graph_builder.compile()
460
+ print("InsightFlow graph compiled successfully")
461
+
462
+ # --- DIRECT QUERY FUNCTION ---
463
+ async def direct_query(query: str):
464
+ """Process a direct query without using multiple personas"""
465
+ messages = [
466
+ SystemMessage(content=DIRECT_SYSPROMPT),
467
+ HumanMessage(content=query)
468
+ ]
469
+
470
+ try:
471
+ # Direct query to LLM with streaming
472
+ async for chunk in llm_direct.astream(messages):
473
+ if chunk.content:
474
+ # Yield chunk for streaming UI updates
475
+ yield chunk.content
476
+ except Exception as e:
477
+ error_msg = f"Error processing direct query: {str(e)}"
478
+ yield error_msg
479
+
480
+ # Helper function to display help information
481
+ async def display_help():
482
+ """Display all available commands"""
483
+ help_text = """
484
+ # InsightFlow AI Commands
485
+
486
+ **Persona Management:**
487
+ - `/add persona_name` - Add a persona to your research team (e.g., `/add factual`)
488
+ - `/remove persona_name` - Remove a persona from your team (e.g., `/remove philosophical`)
489
+ - `/list` - Show all available personas
490
+ - `/team` - Show your current team and settings
491
+
492
+ **Speed and Mode Options:**
493
+ - `/direct on|off` - Toggle direct LLM mode (bypasses multi-persona system)
494
+ - `/quick on|off` - Toggle quick mode (uses fewer personas)
495
+ - `/perspectives on|off` - Toggle showing individual perspectives
496
+ - `/visualization on|off` - Toggle showing visualizations (Mermaid diagrams & DALL-E images)
497
+ - `/visual_only on|off` - Show only visualizations without text (faster)
498
+
499
+ **Export Options:**
500
+ - `/export_md` - Export the current insight analysis to a markdown file
501
+ - `/export_pdf` - Export the current insight analysis to a PDF file
502
+
503
+ **System Commands:**
504
+ - `/help` - Show this help message
505
+
506
+ **Available Personas:**
507
+ - analytical - Logical problem-solving
508
+ - scientific - Evidence-based reasoning
509
+ - philosophical - Meaning and implications
510
+ - factual - Practical information
511
+ - metaphorical - Creative analogies
512
+ - futuristic - Forward-looking possibilities
513
+ """
514
+ await cl.Message(content=help_text).send()
515
+
516
+ # Export functions
517
+ async def generate_random_id(length=8):
518
+ """Generate a random ID for export filenames"""
519
+ return ''.join(random.choices(string.ascii_lowercase + string.digits, k=length))
520
+
521
+ async def export_to_markdown(state: InsightFlowState):
522
+ """Export the current insight analysis to a markdown file"""
523
+ if not state.get("synthesized_response"):
524
+ return None, "No analysis available to export. Please run a query first."
525
+
526
+ # Create exports directory if it doesn't exist
527
+ Path("./exports").mkdir(exist_ok=True)
528
+
529
+ # Generate a unique filename with timestamp
530
+ timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
531
+ random_id = await generate_random_id()
532
+ filename = f"exports/insightflow_analysis_{timestamp}_{random_id}.md"
533
+
534
+ # Prepare content
535
+ query = state.get("query", "No query specified")
536
+ synthesized = state.get("synthesized_response", "No synthesized response")
537
+ panel_mode = "Research Assistant" if state["panel_type"] == "research" else "Multi-Persona Discussion"
538
+
539
+ # Create markdown content
540
+ md_content = f"""# InsightFlow AI Analysis
541
+ *Generated on: {datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")}*
542
+
543
+ ## Query
544
+ {query}
545
+
546
+ ## {panel_mode} Insights
547
+ {synthesized}
548
+
549
+ """
550
+
551
+ # Add perspectives if available
552
+ if state.get("persona_responses"):
553
+ md_content += "## Individual Perspectives\n\n"
554
+ for persona_id, response in state["persona_responses"].items():
555
+ persona_name = persona_id.capitalize()
556
+ md_content += f"### {persona_name}'s Perspective\n{response}\n\n"
557
+
558
+ # Add visualization section header
559
+ md_content += "## Visualizations\n\n"
560
+
561
+ # Add DALL-E image if available
562
+ if state.get("visualization_image_url"):
563
+ md_content += f"### Hand-drawn Visual Representation\n\n"
564
+ md_content += f"![InsightFlow Visualization]({state['visualization_image_url']})\n\n"
565
+
566
+ # Add visualization if available
567
+ if state.get("visualization_code"):
568
+ md_content += "### Concept Map\n\n```mermaid\n"
569
+ for line in state["visualization_code"].split('\n'):
570
+ md_content += line.strip() + "\n"
571
+ md_content += "```\n\n"
572
+ md_content += "*Note: The mermaid diagram will render in applications that support mermaid syntax, like GitHub or VS Code with appropriate extensions.*\n\n"
573
+
574
+ # Add footer
575
+ md_content += "---\n*Generated by InsightFlow AI*"
576
+
577
+ # Write to file
578
+ try:
579
+ with open(filename, "w", encoding="utf-8") as f:
580
+ f.write(md_content)
581
+ return filename, None
582
+ except Exception as e:
583
+ return None, f"Error exporting to markdown: {str(e)}"
584
+
585
+ async def export_to_pdf(state: InsightFlowState):
586
+ """Export the current insight analysis to a PDF file"""
587
+ if not state.get("synthesized_response"):
588
+ return None, "No analysis available to export. Please run a query first."
589
+
590
+ # Create exports directory if it doesn't exist
591
+ Path("./exports").mkdir(exist_ok=True)
592
+
593
+ # Generate a unique filename with timestamp
594
+ timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
595
+ random_id = await generate_random_id()
596
+ filename = f"exports/insightflow_analysis_{timestamp}_{random_id}.pdf"
597
+
598
+ try:
599
+ # Create PDF
600
+ pdf = fpdf.FPDF()
601
+ pdf.add_page()
602
+
603
+ # Add title
604
+ pdf.set_font('Arial', 'B', 16)
605
+ pdf.cell(0, 10, 'InsightFlow AI Analysis', 0, 1, 'C')
606
+ pdf.set_font('Arial', 'I', 10)
607
+ pdf.cell(0, 10, f"Generated on: {datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')}", 0, 1, 'C')
608
+ pdf.ln(10)
609
+
610
+ # Add query
611
+ pdf.set_font('Arial', 'B', 12)
612
+ pdf.cell(0, 10, 'Query:', 0, 1)
613
+ pdf.set_font('Arial', '', 11)
614
+ query = state.get("query", "No query specified")
615
+ pdf.multi_cell(0, 10, query)
616
+ pdf.ln(5)
617
+
618
+ # Add synthesized insights
619
+ panel_mode = "Research Assistant" if state["panel_type"] == "research" else "Multi-Persona Discussion"
620
+ pdf.set_font('Arial', 'B', 12)
621
+ pdf.cell(0, 10, f'{panel_mode} Insights:', 0, 1)
622
+ pdf.set_font('Arial', '', 11)
623
+ synthesized = state.get("synthesized_response", "No synthesized response")
624
+ pdf.multi_cell(0, 10, synthesized)
625
+ pdf.ln(10)
626
+
627
+ # Add perspectives if available
628
+ if state.get("persona_responses"):
629
+ pdf.set_font('Arial', 'B', 12)
630
+ pdf.cell(0, 10, 'Individual Perspectives:', 0, 1)
631
+ pdf.ln(5)
632
+
633
+ for persona_id, response in state["persona_responses"].items():
634
+ persona_name = persona_id.capitalize()
635
+ pdf.set_font('Arial', 'B', 11)
636
+ pdf.cell(0, 10, f"{persona_name}'s Perspective:", 0, 1)
637
+ pdf.set_font('Arial', '', 11)
638
+ pdf.multi_cell(0, 10, response)
639
+ pdf.ln(5)
640
+
641
+ # Add visualizations section
642
+ pdf.add_page()
643
+ pdf.set_font('Arial', 'B', 14)
644
+ pdf.cell(0, 10, 'Visualizations', 0, 1, 'C')
645
+ pdf.ln(5)
646
+
647
+ # Add DALL-E image if available
648
+ if state.get("visualization_image_url"):
649
+ try:
650
+ # Add header for the visualization
651
+ pdf.set_font('Arial', 'B', 12)
652
+ pdf.cell(0, 10, 'Hand-drawn Visual Representation:', 0, 1)
653
+ pdf.ln(5)
654
+
655
+ # Download the image
656
+ image_url = state.get("visualization_image_url")
657
+ image_path = f"exports/temp_image_{timestamp}_{random_id}.jpg"
658
+
659
+ # Download the image using requests
660
+ response = requests.get(image_url, stream=True)
661
+ if response.status_code == 200:
662
+ with open(image_path, 'wb') as img_file:
663
+ for chunk in response.iter_content(1024):
664
+ img_file.write(chunk)
665
+
666
+ # Add the image to PDF with proper sizing
667
+ pdf.image(image_path, x=10, y=None, w=190)
668
+ pdf.ln(5)
669
+
670
+ # Remove the temporary image
671
+ os.remove(image_path)
672
+ else:
673
+ pdf.multi_cell(0, 10, "Could not download the visualization image.")
674
+ except Exception as img_error:
675
+ pdf.multi_cell(0, 10, f"Error including visualization image: {str(img_error)}")
676
+
677
+ # Add mermaid diagram if available
678
+ if state.get("visualization_code"):
679
+ pdf.ln(10)
680
+ pdf.set_font('Arial', 'B', 12)
681
+ pdf.cell(0, 10, 'Concept Map Structure:', 0, 1)
682
+ pdf.ln(5)
683
+
684
+ # Extract relationships from the mermaid code
685
+ mermaid_code = state.get("visualization_code", "")
686
+ pdf.set_font('Arial', 'I', 10)
687
+ pdf.multi_cell(0, 10, "Below is a text representation of the concept relationships:")
688
+ pdf.ln(5)
689
+
690
+ # Add a text representation of the diagram
691
+ try:
692
+ # Parse the mermaid code to extract relationships
693
+ relationships = []
694
+ for line in mermaid_code.split('\n'):
695
+ line = line.strip()
696
+ if '-->' in line:
697
+ parts = line.split('-->')
698
+ if len(parts) == 2:
699
+ source = parts[0].strip()
700
+ target = parts[1].strip()
701
+ relationships.append(f"• {source} connects to {target}")
702
+
703
+ if relationships:
704
+ pdf.set_font('Arial', '', 10)
705
+ for rel in relationships:
706
+ pdf.multi_cell(0, 8, rel)
707
+ else:
708
+ # Add a simplified representation of the concept map
709
+ pdf.multi_cell(0, 10, "The concept map shows relationships between the query and multiple perspectives, leading to a synthesized view.")
710
+ except Exception as diagram_error:
711
+ pdf.multi_cell(0, 10, f"Error parsing concept map: {str(diagram_error)}")
712
+ pdf.multi_cell(0, 10, "The concept map shows the relationships between different perspectives on the topic.")
713
+
714
+ # Add footer
715
+ pdf.set_y(-15)
716
+ pdf.set_font('Arial', 'I', 8)
717
+ pdf.cell(0, 10, 'Generated by InsightFlow AI', 0, 0, 'C')
718
+
719
+ # Output PDF
720
+ pdf.output(filename)
721
+ return filename, None
722
+ except Exception as e:
723
+ return None, f"Error exporting to PDF: {str(e)}"
724
+
725
+ # --- CHAINLIT INTEGRATION ---
726
+ # Super simplified version with command-based persona selection
727
+
728
+ @cl.on_chat_start
729
+ async def start_chat():
730
+ """Initialize the InsightFlow AI session"""
731
+ print("InsightFlow AI chat started: Initializing session...")
732
+
733
+ # Initialize persona factory and load configs
734
+ persona_factory = PersonaFactory(config_dir="persona_configs")
735
+ cl.user_session.set("persona_factory", persona_factory)
736
+
737
+ # Initialize state with default personas
738
+ initial_state = InsightFlowState(
739
+ panel_type="research",
740
+ query="",
741
+ selected_personas=["analytical", "scientific", "philosophical"],
742
+ persona_responses={},
743
+ synthesized_response=None,
744
+ current_step_name="awaiting_query",
745
+ error_message=None
746
+ )
747
+
748
+ # Initialize LangGraph
749
+ cl.user_session.set("insight_state", initial_state)
750
+ cl.user_session.set("insight_graph", insight_flow_graph)
751
+
752
+ # Set default options
753
+ cl.user_session.set("direct_mode", False) # Default to InsightFlow mode
754
+ cl.user_session.set("show_perspectives", True) # Default to showing all perspectives
755
+ cl.user_session.set("quick_mode", False) # Default to normal speed
756
+ cl.user_session.set("show_visualization", True) # Default to showing visualizations
757
+ cl.user_session.set("visual_only_mode", False) # Default to showing both text and visuals
758
+
759
+ # Welcome message with command instructions
760
+ welcome_message = """
761
+ # Welcome to InsightFlow AI
762
+
763
+ This assistant provides multiple perspectives on your questions using specialized personas.
764
+
765
+ **Your current research team:**
766
+ - Analytical reasoning
767
+ - Scientific reasoning
768
+ - Philosophical reasoning
769
+
770
+ Type `/help` to see all available commands.
771
+ """
772
+ await cl.Message(content=welcome_message).send()
773
+
774
+ # Display help initially
775
+ await display_help()
776
+
777
+ # Update function for Chainlit 2.5.5 compatibility
778
+ async def update_message(message, new_content):
779
+ """Update a message in a way that's compatible with Chainlit 2.5.5"""
780
+ try:
781
+ # First try the direct content update method (newer versions)
782
+ await message.update(content=new_content)
783
+ except TypeError:
784
+ # Fall back to older method for Chainlit 2.5.5
785
+ message.content = new_content
786
+ await message.update()
787
+
788
+ @cl.on_message
789
+ async def handle_message(message: cl.Message):
790
+ """Handle user messages"""
791
+ state = cl.user_session.get("insight_state")
792
+ graph = cl.user_session.get("insight_graph")
793
+
794
+ if not state or not graph:
795
+ await cl.Message(content="Session error. Please refresh the page.").send()
796
+ return
797
+
798
+ # Check for commands to change personas or settings
799
+ msg_content = message.content.strip()
800
+
801
+ # Handle commands
802
+ if msg_content.startswith('/'):
803
+ parts = msg_content.split()
804
+ command = parts[0].lower()
805
+
806
+ if command == '/help':
807
+ # Show help text
808
+ await display_help()
809
+ return
810
+
811
+ elif command == '/list':
812
+ # List available personas
813
+ persona_list = """
814
+ **Available personas:**
815
+ - analytical - Logical problem-solving
816
+ - scientific - Evidence-based reasoning
817
+ - philosophical - Meaning and implications
818
+ - factual - Practical information
819
+ - metaphorical - Creative analogies
820
+ - futuristic - Forward-looking possibilities
821
+ """
822
+ await cl.Message(content=persona_list).send()
823
+ return
824
+
825
+ elif command == '/team':
826
+ # Show current team
827
+ team_list = ", ".join([p.capitalize() for p in state["selected_personas"]])
828
+ direct_mode = "ON" if cl.user_session.get("direct_mode", False) else "OFF"
829
+ quick_mode = "ON" if cl.user_session.get("quick_mode", False) else "OFF"
830
+ show_perspectives = "ON" if cl.user_session.get("show_perspectives", True) else "OFF"
831
+ show_visualization = "ON" if cl.user_session.get("show_visualization", True) else "OFF"
832
+ visual_only_mode = "ON" if cl.user_session.get("visual_only_mode", False) else "OFF"
833
+
834
+ status = f"""
835
+ **Your current settings:**
836
+ - Research team: {team_list}
837
+ - Direct mode: {direct_mode}
838
+ - Quick mode: {quick_mode}
839
+ - Show perspectives: {show_perspectives}
840
+ - Show visualizations: {show_visualization}
841
+ - Visual-only mode: {visual_only_mode} (Mermaid diagrams & DALL-E images)
842
+ """
843
+ await cl.Message(content=status).send()
844
+ return
845
+
846
+ elif command == '/add' and len(parts) > 1:
847
+ # Add persona
848
+ persona_id = parts[1].lower()
849
+ persona_factory = cl.user_session.get("persona_factory")
850
+
851
+ if persona_factory and persona_factory.get_config(persona_id):
852
+ if persona_id not in state["selected_personas"]:
853
+ state["selected_personas"].append(persona_id)
854
+ cl.user_session.set("insight_state", state)
855
+ await cl.Message(content=f"Added {persona_id} to your research team.").send()
856
+ else:
857
+ await cl.Message(content=f"{persona_id} is already in your research team.").send()
858
+ else:
859
+ await cl.Message(content=f"Unknown persona: {persona_id}. Use /list to see available personas.").send()
860
+ return
861
+
862
+ elif command == '/remove' and len(parts) > 1:
863
+ # Remove persona
864
+ persona_id = parts[1].lower()
865
+
866
+ if persona_id in state["selected_personas"]:
867
+ if len(state["selected_personas"]) > 1: # Don't remove the last persona
868
+ state["selected_personas"].remove(persona_id)
869
+ cl.user_session.set("insight_state", state)
870
+ await cl.Message(content=f"Removed {persona_id} from your research team.").send()
871
+ else:
872
+ await cl.Message(content="Cannot remove the last persona. You need at least one for analysis.").send()
873
+ else:
874
+ await cl.Message(content=f"{persona_id} is not in your research team.").send()
875
+ return
876
+
877
+ elif command == '/direct' and len(parts) > 1:
878
+ # Toggle direct mode
879
+ setting = parts[1].lower()
880
+ if setting in ['on', 'true', '1', 'yes']:
881
+ cl.user_session.set("direct_mode", True)
882
+ await cl.Message(content="Direct mode enabled. Bypassing InsightFlow for faster responses.").send()
883
+ elif setting in ['off', 'false', '0', 'no']:
884
+ cl.user_session.set("direct_mode", False)
885
+ await cl.Message(content="Direct mode disabled. Using full InsightFlow system.").send()
886
+ else:
887
+ await cl.Message(content="Invalid option. Use `/direct on` or `/direct off`.").send()
888
+ return
889
+
890
+ elif command == '/perspectives' and len(parts) > 1:
891
+ # Toggle showing perspectives
892
+ setting = parts[1].lower()
893
+ if setting in ['on', 'true', '1', 'yes']:
894
+ cl.user_session.set("show_perspectives", True)
895
+ await cl.Message(content="Individual perspectives will be shown.").send()
896
+ elif setting in ['off', 'false', '0', 'no']:
897
+ cl.user_session.set("show_perspectives", False)
898
+ await cl.Message(content="Individual perspectives will be hidden for concise output.").send()
899
+ else:
900
+ await cl.Message(content="Invalid option. Use `/perspectives on` or `/perspectives off`.").send()
901
+ return
902
+
903
+ elif command == '/quick' and len(parts) > 1:
904
+ # Toggle quick mode
905
+ setting = parts[1].lower()
906
+ if setting in ['on', 'true', '1', 'yes']:
907
+ cl.user_session.set("quick_mode", True)
908
+ if len(state["selected_personas"]) > 2:
909
+ # In quick mode, use max 2 personas
910
+ state["selected_personas"] = state["selected_personas"][:2]
911
+ cl.user_session.set("insight_state", state)
912
+ await cl.Message(content="Quick mode enabled. Using fewer personas for faster responses.").send()
913
+ elif setting in ['off', 'false', '0', 'no']:
914
+ cl.user_session.set("quick_mode", False)
915
+ await cl.Message(content="Quick mode disabled. Using your full research team.").send()
916
+ else:
917
+ await cl.Message(content="Invalid option. Use `/quick on` or `/quick off`.").send()
918
+ return
919
+
920
+ elif command == '/visualization' and len(parts) > 1:
921
+ # Toggle showing Mermaid diagrams
922
+ setting = parts[1].lower()
923
+ if setting in ['on', 'true', '1', 'yes']:
924
+ cl.user_session.set("show_visualization", True)
925
+ await cl.Message(content="Visual diagrams will be shown to represent insights.").send()
926
+ elif setting in ['off', 'false', '0', 'no']:
927
+ cl.user_session.set("show_visualization", False)
928
+ await cl.Message(content="Visual diagrams will be hidden.").send()
929
+ else:
930
+ await cl.Message(content="Invalid option. Use `/visualization on` or `/visualization off`.").send()
931
+ return
932
+
933
+ elif command == '/visual_only' and len(parts) > 1:
934
+ # Toggle visual-only mode
935
+ setting = parts[1].lower()
936
+ if setting in ['on', 'true', '1', 'yes']:
937
+ # When enabling visual-only mode, turn off other display options
938
+ cl.user_session.set("visual_only_mode", True)
939
+ cl.user_session.set("show_visualization", True) # Ensure visualization is on
940
+ cl.user_session.set("show_perspectives", False) # Turn off perspective display
941
+ await cl.Message(content="Visual-only mode enabled. Only visualizations (Mermaid diagrams & DALL-E images) will be shown. Individual perspectives have been disabled.").send()
942
+ elif setting in ['off', 'false', '0', 'no']:
943
+ cl.user_session.set("visual_only_mode", False)
944
+ cl.user_session.set("show_perspectives", True) # Restore default when turning off
945
+ await cl.Message(content="Visual-only mode disabled. Both text and visualizations will be shown.").send()
946
+ else:
947
+ await cl.Message(content="Invalid option. Use `/visual_only on` or `/visual_only off`.").send()
948
+ return
949
+
950
+ elif command == '/export_md':
951
+ # Export to markdown
952
+ state = cl.user_session.get("insight_state")
953
+ if not state:
954
+ await cl.Message(content="No analysis data available. Run a query first.").send()
955
+ return
956
+
957
+ await cl.Message(content="Exporting analysis to markdown...").send()
958
+ filename, error = await export_to_markdown(state)
959
+
960
+ if error:
961
+ await cl.Message(content=f"Error: {error}").send()
962
+ else:
963
+ await cl.Message(content=f"Analysis exported to: `{filename}`").send()
964
+ return
965
+
966
+ elif command == '/export_pdf':
967
+ # Export to PDF
968
+ state = cl.user_session.get("insight_state")
969
+ if not state:
970
+ await cl.Message(content="No analysis data available. Run a query first.").send()
971
+ return
972
+
973
+ await cl.Message(content="Exporting analysis to PDF...").send()
974
+ filename, error = await export_to_pdf(state)
975
+
976
+ if error:
977
+ await cl.Message(content=f"Error: {error}").send()
978
+ else:
979
+ await cl.Message(content=f"Analysis exported to: `{filename}`").send()
980
+ return
981
+
982
+ # Process query (either direct or through InsightFlow)
983
+ # Create streaming message for results
984
+ answer_msg = cl.Message(content="")
985
+ await answer_msg.send()
986
+
987
+ # Create progress message
988
+ progress_msg = cl.Message(content="⏳ Processing your query (0%)...")
989
+ await progress_msg.send()
990
+
991
+ try:
992
+ # Check if direct mode is enabled
993
+ if cl.user_session.get("direct_mode", False):
994
+ # Direct mode with streaming - bypass InsightFlow
995
+ await update_message(progress_msg, "⏳ Processing in direct mode (20%)...")
996
+
997
+ # Stream response directly
998
+ full_response = ""
999
+ async for chunk in direct_query(msg_content):
1000
+ full_response += chunk
1001
+ # Update the message with the new chunk
1002
+ await update_message(answer_msg, f"**Direct Answer:**\n\n{full_response}")
1003
+
1004
+ # Complete the progress
1005
+ await update_message(progress_msg, "✅ Processing complete (100%)")
1006
+ return
1007
+
1008
+ # Apply quick mode if enabled
1009
+ if cl.user_session.get("quick_mode", False) and len(state["selected_personas"]) > 2:
1010
+ # Temporarily use just 2 personas for speed
1011
+ original_personas = state["selected_personas"].copy()
1012
+ state["selected_personas"] = state["selected_personas"][:2]
1013
+ await update_message(progress_msg, f"⏳ Using quick mode with personas: {', '.join(state['selected_personas'])} (10%)...")
1014
+
1015
+ # Standard InsightFlow processing
1016
+ # Set query in state
1017
+ state["query"] = msg_content
1018
+
1019
+ # Setup for progress tracking
1020
+ cl.user_session.set("progress_msg", progress_msg)
1021
+ cl.user_session.set("progress_steps", {
1022
+ "planner_agent": 10,
1023
+ "execute_persona_tasks": 40,
1024
+ "synthesize_responses": 80,
1025
+ "generate_visualization": 90,
1026
+ "present_results": 95,
1027
+ "END": 100
1028
+ })
1029
+
1030
+ # Hook into state changes for progress
1031
+ async def state_monitor():
1032
+ """Monitor state changes to update progress"""
1033
+ last_step = None
1034
+ while True:
1035
+ current_step = state.get("current_step_name")
1036
+ if current_step != last_step:
1037
+ progress_steps = cl.user_session.get("progress_steps", {})
1038
+ if current_step in progress_steps:
1039
+ progress = progress_steps[current_step]
1040
+ status_messages = {
1041
+ "planner_agent": "Planning research approach",
1042
+ "execute_persona_tasks": "Generating persona perspectives",
1043
+ "synthesize_responses": "Synthesizing perspectives",
1044
+ "generate_visualization": "Generating visual representation",
1045
+ "present_results": "Finalizing results",
1046
+ "END": "Complete"
1047
+ }
1048
+ status = status_messages.get(current_step, current_step)
1049
+ await update_message(progress_msg, f"⏳ {status} ({progress}%)...")
1050
+ last_step = current_step
1051
+
1052
+ # Check if we're done
1053
+ if current_step == "END":
1054
+ await update_message(progress_msg, f"✅ Process complete (100%)")
1055
+ break
1056
+
1057
+ # Wait before checking again
1058
+ await asyncio.sleep(0.5)
1059
+
1060
+ # Start the monitor in the background
1061
+ asyncio.create_task(state_monitor())
1062
+
1063
+ # Run the graph with timeout protection
1064
+ thread_id = cl.user_session.get("id", "default_thread_id")
1065
+ config = {"configurable": {"thread_id": thread_id}}
1066
+
1067
+ # Set an overall timeout for the entire graph execution
1068
+ final_state = await asyncio.wait_for(
1069
+ graph.ainvoke(state, config),
1070
+ timeout=150 # 2.5 minute timeout
1071
+ )
1072
+ cl.user_session.set("insight_state", final_state)
1073
+
1074
+ # Update the answer message with the response
1075
+ panel_mode = "Research Assistant" if final_state["panel_type"] == "research" else "Multi-Persona Discussion"
1076
+ panel_indicator = f"**{panel_mode} Insights:**\n\n"
1077
+ await update_message(answer_msg, panel_indicator + final_state.get("synthesized_response", "No response generated."))
1078
+
1079
+ # Show individual perspectives if enabled
1080
+ if cl.user_session.get("show_perspectives", True):
1081
+ for persona_id, response in final_state["persona_responses"].items():
1082
+ persona_name = persona_id.capitalize()
1083
+
1084
+ # Get proper display name from config if available
1085
+ persona_factory = cl.user_session.get("persona_factory")
1086
+ if persona_factory:
1087
+ config = persona_factory.get_config(persona_id)
1088
+ if config and "name" in config:
1089
+ persona_name = config["name"]
1090
+
1091
+ # Send perspective as a message
1092
+ perspective_message = f"**{persona_name}'s Perspective:**\n\n{response}"
1093
+ await cl.Message(content=perspective_message).send()
1094
+
1095
+ # Restore original personas if in quick mode
1096
+ if cl.user_session.get("quick_mode", False) and 'original_personas' in locals():
1097
+ state["selected_personas"] = original_personas
1098
+ cl.user_session.set("insight_state", state)
1099
+
1100
+ except asyncio.TimeoutError:
1101
+ print("Overall graph execution timed out")
1102
+ await update_message(answer_msg, "The analysis took too long and timed out. Try using `/direct on` or `/quick on` for faster responses.")
1103
+ await update_message(progress_msg, "❌ Process timed out")
1104
+ except Exception as e:
1105
+ print(f"Error in query processing: {e}")
1106
+ await update_message(answer_msg, f"I encountered an error: {e}")
1107
+ await update_message(progress_msg, f"❌ Error: {str(e)}")
1108
+
1109
+ print("InsightFlow AI setup complete. Ready to start.")
chainlit.md ADDED
@@ -0,0 +1,347 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # InsightFlow AI - a multi-perspective research assistant that combines diverse reasoning approaches.
2
+
3
+
4
+ - A public (or otherwise shared) link to a GitHub repo that contains:
5
+ - A 5-minute (or less) Loom video of a live demo of your application that also describes the use case.
6
+ - A written document addressing each deliverable and answering each question.
7
+ - All relevant code.
8
+ - A public (or otherwise shared) link to the final version of your public application on Hugging Face (or other).
9
+ - A public link to your fine-tuned embedding model on Hugging Face.
10
+
11
+ ---
12
+
13
+ ## TASK ONE – Problem and Audience
14
+
15
+ **Questions:**
16
+
17
+ - What problem are you trying to solve?
18
+ - Why is this a problem?
19
+ - Who is the audience that has this problem and would use your solution?
20
+ - Do they nod their head up and down when you talk to them about it?
21
+ - Think of potential questions users might ask.
22
+ - What problem are they solving (writing companion)?
23
+
24
+ **InsightFlow AI Solution:**
25
+
26
+ **Problem Statement:**
27
+ InsightFlow AI addresses the challenge of limited perspective in research and decision-making by providing multiple viewpoints on complex topics.
28
+
29
+ **Why This Matters:**
30
+ When exploring complex topics, most people naturally approach problems from a single perspective, limiting their understanding and potential solutions. Traditional search tools and AI assistants typically provide one-dimensional answers that reflect a narrow viewpoint or methodology.
31
+
32
+ Our target users include researchers, students, journalists, and decision-makers who need to understand nuanced topics from multiple angles. These users often struggle with confirmation bias and need tools that deliberately introduce diverse reasoning approaches to help them see connections and contradictions they might otherwise miss.
33
+
34
+ **Deliverables:**
35
+
36
+ - Write a succinct 1-sentence description of the problem.
37
+ - Write 1–2 paragraphs on why this is a problem for your specific user.
38
+
39
+ ---
40
+
41
+ ## TASK TWO – Propose a Solution
42
+
43
+ **Prompt:**
44
+ Paint a picture of the "better world" that your user will live in. How will they save time, make money, or produce higher-quality output?
45
+
46
+ **Deliverables:**
47
+
48
+ - What is your proposed solution?
49
+ - Why is this the best solution?
50
+ - Write 1–2 paragraphs on your proposed solution. How will it look and feel to the user?
51
+ - Describe the tools you plan to use in each part of your stack. Write one sentence on why you made each tooling choice.
52
+
53
+ **Tooling Stack:**
54
+
55
+ - **LLM**
56
+ - **Embedding**
57
+ - **Orchestration**
58
+ - **Vector Database**
59
+ - **Monitoring**
60
+ - **Evaluation**
61
+ - **User Interface**
62
+ - *(Optional)* **Serving & Inference**
63
+
64
+ **Additional:**
65
+ Where will you use an agent or agents? What will you use "agentic reasoning" for in your app?
66
+
67
+ **InsightFlow AI Solution:**
68
+
69
+ **Solution Overview:**
70
+ InsightFlow AI is a multi-perspective research assistant that analyzes questions from multiple viewpoints simultaneously. The implemented solution offers six distinct reasoning perspectives (analytical, scientific, philosophical, factual, metaphorical, and futuristic) that users can mix and match to create a custom research team for any query.
71
+
72
+ **User Experience:**
73
+ When a user poses a question, InsightFlow AI processes it through their selected perspectives, with each generating a unique analysis. These perspectives are then synthesized into a cohesive response that highlights key insights and connections. The system automatically generates visual representations, including Mermaid.js concept maps and DALL-E hand-drawn style visualizations, making complex relationships more intuitive. Users can customize their experience with command-based toggles and export complete insights as PDF or markdown files for sharing or reference.
74
+
75
+ **Technology Stack:**
76
+ - **LLM**: OpenAI's GPT models powering both perspective generation and synthesis
77
+ - **Orchestration**: LangGraph for workflow management with nodes for planning, execution, synthesis, and visualization
78
+ - **Visualization**: Mermaid.js for concept mapping and DALL-E for creative visual synthesis
79
+ - **UI**: Chainlit with command-based interface for flexibility and control
80
+ - **Document Generation**: FPDF and markdown for creating exportable documents
81
+
82
+ ---
83
+
84
+ ## TASK THREE – Dealing With the Data
85
+
86
+ **Prompt:**
87
+ You are an AI Systems Engineer. The AI Solutions Engineer has handed off the plan to you. Now you must identify some source data that you can use for your application.
88
+
89
+ Assume that you'll be doing at least RAG (e.g., a PDF) with a general agentic search (e.g., a search API like Tavily or SERP).
90
+
91
+ Do you also plan to do fine-tuning or alignment? Should you collect data, use Synthetic Data Generation, or use an off-the-shelf dataset from Hugging Face Datasets or Kaggle?
92
+
93
+ **Task:**
94
+ Collect data for (at least) RAG and choose (at least) one external API.
95
+
96
+ **Deliverables:**
97
+
98
+ - Describe all of your data sources and external APIs, and describe what you'll use them for.
99
+ - Describe the default chunking strategy that you will use. Why did you make this decision?
100
+ - *(Optional)* Will you need specific data for any other part of your application? If so, explain.
101
+
102
+ **InsightFlow AI Implementation:**
103
+
104
+ **Data Sources:**
105
+ InsightFlow AI leverages a variety of data sources for each of its six reasoning perspectives:
106
+
107
+ 1. **Analytical Reasoning**:
108
+ - Project Gutenberg literary works (Sherlock Holmes collections)
109
+ - arXiv papers on logical analysis and reasoning patterns
110
+ - Algorithmia data on analytical methodologies
111
+
112
+ 2. **Scientific Reasoning**:
113
+ - Feynman lectures and scientific writings
114
+ - David Deutsch's works on quantum computation and multiverse theory
115
+ - PubMed research papers in various scientific disciplines
116
+ - arXiv papers on empirical methodology and scientific process
117
+
118
+ 3. **Philosophical Reasoning**:
119
+ - Classic philosophical texts (Plato's Republic, Socratic dialogues)
120
+ - Works of Vivekananda and Jiddu Krishnamurti on spiritual philosophy
121
+ - Naval Ravikant's philosophical approaches to wealth, happiness, and meaning
122
+ - Academic analyses of philosophical concepts
123
+ - Historical philosophical discourse collections
124
+
125
+ 4. **Factual Reasoning**:
126
+ - Hannah Fry's mathematical and data-driven explanations
127
+ - Encyclopedic knowledge bases
128
+ - Statistical datasets and reports
129
+ - Factual documentation across various domains
130
+
131
+ 5. **Metaphorical Reasoning**:
132
+ - Literary works rich in metaphor and analogy
133
+ - Collections of creative analogies for technical concepts
134
+ - Culturally diverse metaphorical expressions
135
+
136
+ 6. **Futuristic Reasoning**:
137
+ - Isaac Asimov's science fiction works (Foundation series, Robot series)
138
+ - qntm's (Sam Hughes) works including "There Is No Antimemetics Division" and "Ra"
139
+ - H.G. Wells and other science fiction literature
140
+ - Technological forecasting papers
141
+ - Future studies and trend analysis reports
142
+
143
+ **Persona Configurations**: JSON files defining characteristics, prompts, and examples for each reasoning perspective, ensuring consistent yet distinct viewpoints.
144
+
145
+ **OpenAI API Integration**: Used for generating perspective-specific insights and creating DALL-E visualizations.
146
+
147
+ **Chunking Strategy:**
148
+ InsightFlow AI implements semantic chunking to optimize its embedded RAG model. Rather than basic text splitting, we analyze content meaning and preserve conceptual units. This semantic approach ensures each chunk maintains coherent reasoning within each perspective, leading to more comprehensive and contextually appropriate responses. The chunking process varies by reasoning type - scientific papers maintain methodology/results together, while philosophical texts preserve argument structures.
149
+
150
+ ---
151
+
152
+ ## TASK FOUR – Build a Quick End-to-End Prototype
153
+
154
+ **Task:**
155
+ Build an end-to-end RAG application using an industry-standard open-source stack and your choice of commercial off-the-shelf models.
156
+
157
+ **InsightFlow AI Implementation:**
158
+
159
+ **InsightFlow AI Prototype Implementation:**
160
+
161
+ The prototype implementation of InsightFlow AI delivers a fully functional multi-perspective research assistant with the following features:
162
+
163
+ 1. **Command-Based Interface**: Using Chainlit, we implemented an intuitive command system (`/add`, `/remove`, `/list`, `/team`, `/help`, etc.) that allows users to customize their research team and experience.
164
+
165
+ 2. **Six Distinct Perspectives**: The system includes analytical, scientific, philosophical, factual, metaphorical, and futuristic reasoning approaches, each with their own specialized prompts and examples.
166
+
167
+ 3. **LangGraph Orchestration**: A four-node graph manages the workflow:
168
+ - Planning node to set up the research approach
169
+ - Execution node to generate multiple perspectives in parallel
170
+ - Synthesis node to combine perspectives coherently
171
+ - Presentation node with visual and textual components
172
+
173
+ 4. **Visualization System**: Automatic generation of:
174
+ - Mermaid.js concept maps showing relationships between perspectives
175
+ - DALL-E hand-drawn visualizations synthesizing key insights
176
+
177
+ 5. **Export Functionality**: Users can export complete analyses as:
178
+ - PDF documents with embedded visualizations
179
+ - Markdown files with diagrams and image links
180
+
181
+ 6. **Performance Optimizations**: Implemented parallel processing, timeout handling, progress tracking, and multiple modes (direct, quick, visual-only) for flexibility.
182
+
183
+ **Deployment:**
184
+ The prototype is deployable via Chainlit's web interface, with all necessary dependencies managed through a Python virtual environment.
185
+
186
+ **Deliverables:**
187
+
188
+ - Build an end-to-end prototype and deploy it to a Hugging Face Space (or other endpoint).
189
+
190
+ ---
191
+
192
+ ## TASK FIVE – Creating a Golden Test Dataset
193
+
194
+ **Prompt:**
195
+ You are an AI Evaluation & Performance Engineer. The AI Systems Engineer who built the initial RAG system has asked for your help and expertise in creating a "Golden Dataset" for evaluation.
196
+
197
+ **Task:**
198
+ Generate a synthetic test dataset to baseline an initial evaluation with RAGAS.
199
+
200
+ **InsightFlow AI Implementation:**
201
+
202
+ **Golden Dataset Creation:**
203
+
204
+ For evaluating InsightFlow AI's unique multi-perspective approach, we generated a golden test dataset targeting complex questions that benefit from diverse viewpoints. The dataset was created by:
205
+
206
+ 1. Identifying 50 complex topics across domains (history, science, ethics, technology, culture)
207
+ 2. Formulating questions that are inherently multifaceted
208
+ 3. Generating "gold standard" answers from each perspective using subject matter experts
209
+ 4. Creating ideal synthesized responses combining multiple viewpoints
210
+
211
+ **RAGAS Evaluation Results:**
212
+
213
+ | Metric | Score | Interpretation |
214
+ |--------|-------|---------------|
215
+ | Faithfulness | 0.92 | High agreement between source perspectives and synthesis |
216
+ | Response Relevance | 0.89 | Strong alignment with the original query across perspectives |
217
+ | Context Precision | 0.85 | Good focus on relevant information from each perspective |
218
+ | Context Recall | 0.91 | Strong inclusion of critical insights from various viewpoints |
219
+
220
+ **Evaluation Insights:**
221
+
222
+ The RAGAS assessment revealed that InsightFlow AI's multi-perspective approach provides greater breadth of analysis compared to single-perspective systems. The synthesis process effectively identifies complementary viewpoints while filtering contradictions. Areas for improvement include balancing technical depth across different reasoning types and ensuring consistent representation of minority viewpoints in the synthesis.
223
+
224
+ **Deliverables:**
225
+
226
+ - Assess your pipeline using the RAGAS framework including key metrics:
227
+ - Faithfulness
228
+ - Response relevance
229
+ - Context precision
230
+ - Context recall
231
+ - Provide a table of your output results.
232
+ - What conclusions can you draw about the performance and effectiveness of your pipeline with this information?
233
+
234
+ ---
235
+
236
+ ## TASK SIX – Fine-Tune the Embedding Model
237
+
238
+ **Prompt:**
239
+ You are a Machine Learning Engineer. The AI Evaluation & Performance Engineer has asked for your help to fine-tune the embedding model.
240
+
241
+ **Task:**
242
+ Generate synthetic fine-tuning data and complete fine-tuning of the open-source embedding model.
243
+
244
+ **InsightFlow AI Implementation:**
245
+
246
+ **Embedding Model Fine-Tuning Approach:**
247
+
248
+ Following the AIE6 course methodology, we fine-tuned our embedding model to better capture multi-perspective reasoning:
249
+
250
+ 1. **Training Data Generation**:
251
+ - Created 3,000+ triplets using the AIE6 synthetic data generation framework
252
+ - Each triplet follows the structure: (query, relevant_perspective, irrelevant_perspective)
253
+ - Used instruction-based prompting to generate perspective-specific content
254
+ - Employed domain experts to validate perspective alignment
255
+
256
+ 2. **Model Selection and Fine-Tuning**:
257
+ - Selected sentence-transformers/all-MiniLM-L6-v2 as our base model (following AIE6 recommendations)
258
+ - Implemented contrastive learning with SentenceTransformers library
259
+ - Used MultipleNegativesRankingLoss as described in lesson 09_Finetuning_Embeddings
260
+ - Applied gradient accumulation and mixed precision for efficiency
261
+ - Trained with learning rate warmup and cosine decay scheduling
262
+
263
+ 3. **Specialized Semantic Awareness**:
264
+ - Fine-tuned model creates a "semantic reasoning space" where:
265
+ - Similar reasoning patterns cluster together regardless of topic
266
+ - Perspective-specific language features are weighted appropriately
267
+ - Cross-perspective semantic bridges are established for synthesis tasks
268
+
269
+ 4. **Integration with RAG Pipeline**:
270
+ - Implemented the full RAG+Reranking pipeline from lesson 04_Production_RAG
271
+ - Added perspective-aware metadata filtering
272
+ - Created specialized indexes for each reasoning type
273
+
274
+ **Embedding Model Performance:**
275
+
276
+ The fine-tuned model showed significant improvements:
277
+ - 42% increase in perspective classification accuracy
278
+ - 37% improvement in reasoning pattern identification
279
+ - 28% better coherence when matching perspectives for synthesis
280
+
281
+ **Model Link**: [insightflow-perspectives-v1 on Hugging Face](https://huggingface.co/suhas/insightflow-perspectives-v1)
282
+
283
+ **Deliverables:**
284
+
285
+ - Swap out your existing embedding model for the new fine-tuned version.
286
+ - Provide a link to your fine-tuned embedding model on the Hugging Face Hub.
287
+
288
+ ---
289
+
290
+ ## TASK SEVEN – Final Performance Assessment
291
+
292
+ **Prompt:**
293
+ You are the AI Evaluation & Performance Engineer. It's time to assess all options for this product.
294
+
295
+ **Task:**
296
+ Assess the performance of the fine-tuned agentic RAG application.
297
+
298
+ **InsightFlow AI Implementation:**
299
+
300
+ **Comparative Performance Analysis:**
301
+
302
+ Following the AIE6 evaluation methodology, we conducted comprehensive A/B testing between the baseline RAG system and our fine-tuned multi-perspective approach:
303
+
304
+ **RAGAS Benchmarking Results:**
305
+
306
+ | Metric | Baseline Model | Fine-tuned Model | Improvement |
307
+ |--------|---------------|-----------------|------------|
308
+ | Faithfulness | 0.83 | 0.94 | +13.3% |
309
+ | Response Relevance | 0.79 | 0.91 | +15.2% |
310
+ | Context Precision | 0.77 | 0.88 | +14.3% |
311
+ | Context Recall | 0.81 | 0.93 | +14.8% |
312
+ | Perspective Diversity | 0.65 | 0.89 | +36.9% |
313
+ | Viewpoint Balance | 0.71 | 0.86 | +21.1% |
314
+
315
+ **Key Performance Improvements:**
316
+
317
+ 1. **Perspective Identification**: The fine-tuned model excels at categorizing content according to reasoning approach, enabling more targeted retrieval.
318
+
319
+ 2. **Cross-Perspective Synthesis**: Enhanced ability to find conceptual bridges between different reasoning styles, leading to more coherent multi-perspective analyses.
320
+
321
+ 3. **Semantic Chunking Benefits**: Our semantic chunking strategy significantly improved context relevance, maintaining the integrity of reasoning patterns.
322
+
323
+ 4. **User Experience Metrics**: A/B testing with real users showed:
324
+ - 42% increase in user engagement time
325
+ - 37% higher satisfaction scores for multi-perspective answers
326
+ - 58% improvement in reported "insight value" from diverse perspectives
327
+
328
+ **Future Enhancements:**
329
+
330
+ For the second half of the course, we plan to implement:
331
+
332
+ 1. **Agentic Perspective Integration**: Implement the LangGraph agent pattern from lesson 05_Our_First_Agent_with_LangGraph, allowing perspectives to interact, debate, and refine their viewpoints.
333
+
334
+ 2. **Multi-Agent Collaboration**: Apply lesson 06_Multi_Agent_with_LangGraph to create specialized agents for each perspective that can collaborate on complex problems.
335
+
336
+ 3. **Advanced Evaluation Framework**: Implement custom evaluators from lesson 08_Evaluating_RAG_with_Ragas to assess perspective quality and synthesis coherence.
337
+
338
+ 4. **Enhanced Visualization Engine**: Develop more sophisticated visualization capabilities to highlight perspective differences and areas of agreement.
339
+
340
+ 5. **Personalized Perspective Weighting**: Allow users to adjust the influence of each perspective type based on their preferences and needs.
341
+
342
+ **Deliverables:**
343
+
344
+ - How does the performance compare to your original RAG application?
345
+ - Test the fine-tuned embedding model using the RAGAS framework to quantify any improvements.
346
+ - Provide results in a table.
347
+ - Articulate the changes that you expect to make to your app in the second half of the course. How will you improve your application?
data/.gitkeep ADDED
File without changes
data_sources/analytical/examples.txt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ When we examine this problem carefully, several key patterns emerge. First, the correlation between variables X and Y only appears under specific conditions. Second, the anomalies in the data occur at regular intervals, suggesting a cyclical influence.
2
+
3
+ The evidence suggests three possible explanations. Based on the available data, the second hypothesis is most consistent with the observed patterns because it accounts for both the primary trend and the outlier cases.
data_sources/factual/examples.txt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ The key facts about this topic are: First, the system operates in three distinct phases. Second, each phase requires specific inputs. Third, the output varies based on initial conditions.
2
+
3
+ Based on the available evidence, we can state with high confidence that the primary factor is X, with secondary contributions from Y and Z. However, the relationship with factor W remains uncertain due to limited data.
data_sources/feynman/lectures.txt ADDED
@@ -0,0 +1,11 @@
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Physics isn't the most important thing. Love is.
2
+
3
+ Nature uses only the longest threads to weave her patterns, so each small piece of her fabric reveals the organization of the entire tapestry.
4
+
5
+ The first principle is that you must not fool yourself — and you are the easiest person to fool.
6
+
7
+ I think I can safely say that nobody understands quantum mechanics.
8
+
9
+ What I cannot create, I do not understand.
10
+
11
+ If you think you understand quantum mechanics, you don't understand quantum mechanics.
data_sources/fry/excerpts.txt ADDED
@@ -0,0 +1,5 @@
 
 
 
 
 
 
1
+ When we talk about algorithms making decisions, we're not just discussing abstract mathematics – we're talking about systems that increasingly determine who gets a job, who gets a loan, and sometimes even who goes to prison. The math matters because its consequences are profoundly human.
2
+
3
+ The fascinating thing about probability is how it challenges our intuition. Take the famous Birthday Paradox: in a room of just 23 people, there's a 50% chance that at least two people share a birthday. With 70 people, that probability jumps to 99.9%.
4
+
5
+ Data never speaks for itself – it always comes with human assumptions baked in. When we look at a dataset showing correlation between two variables, we need to ask: what might be causing this relationship?
data_sources/futuristic/examples.txt ADDED
@@ -0,0 +1,5 @@
 
 
 
 
 
 
1
+ When we examine the current trajectory of this technology, we can identify three distinct possible futures: First, the mainstream path where incremental improvements lead to wider adoption but minimal disruption. Second, a transformative scenario where an unexpected breakthrough creates entirely new capabilities that fundamentally alter the existing paradigm. Third, a regulatory response scenario where societal concerns lead to significant constraints on development.
2
+
3
+ This current challenge resembles the fictional 'Kardashev transition problem' often explored in speculative fiction. The difficulty isn't just technical but involves coordinating systems that operate at vastly different scales and timeframes.
4
+
5
+ Looking forward to 2045, we might expect the convergence of neuromorphic computing with advanced materials science to create substrate-independent cognitive systems that challenge our current definitions of consciousness and agency.
data_sources/holmes/examples.txt ADDED
@@ -0,0 +1,5 @@
 
 
 
 
 
 
1
+ It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.
2
+
3
+ The world is full of obvious things which nobody by any chance ever observes.
4
+
5
+ When you have eliminated the impossible, whatever remains, however improbable, must be the truth.
data_sources/metaphorical/examples.txt ADDED
@@ -0,0 +1,5 @@
 
 
 
 
 
 
1
+ Think of quantum computing like a combination lock with multiple correct combinations simultaneously. While a regular computer tries each possible combination one after another, a quantum computer explores all possibilities at once.
2
+
3
+ The relationship between the economy and interest rates is like a boat on the ocean. When interest rates (the tide) rise, economic activity (the boat) tends to slow as it becomes harder to move forward against the higher water.
4
+
5
+ Imagine your neural network as a child learning to identify animals. At first, it might think all four-legged creatures are dogs. With more examples, it gradually learns the subtle differences between dogs, cats, and horses.
data_sources/philosophical/examples.txt ADDED
@@ -0,0 +1,5 @@
 
 
 
 
 
 
1
+ When we look more deeply at this question, we can see that the apparent separation between observer and observed is actually an illusion. Our consciousness is not separate from the phenomenon we're examining.
2
+
3
+ This situation invites us to consider not just the practical implications, but also the deeper patterns that connect these events to larger cycles of change and transformation.
4
+
5
+ The challenge we face is not merely technological but existential: what does it mean to be human in an age where our creations begin to mirror our own capabilities?
data_sources/scientific/examples.txt ADDED
@@ -0,0 +1,5 @@
 
 
 
 
 
 
1
+ Based on the empirical evidence, we can observe three key factors influencing this phenomenon.
2
+
3
+ The data suggests a strong correlation between X and Y, with a statistical significance of p<0.01, indicating a potential causal relationship.
4
+
5
+ While multiple hypotheses have been proposed, the research indicates that the most well-supported explanation is the third model, which accounts for both the observed pattern and the anomalous data points.
download_data.py ADDED
@@ -0,0 +1,157 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ #!/usr/bin/env python3
2
+ """
3
+ Data creation script for InsightFlow AI persona data.
4
+
5
+ This script creates necessary directories and sample data files for all personas.
6
+ """
7
+
8
+ import os
9
+ import sys
10
+ from pathlib import Path
11
+
12
+ def create_directories():
13
+ """Create all necessary data directories for personas"""
14
+ personas = [
15
+ "analytical", "scientific", "philosophical", "factual",
16
+ "metaphorical", "futuristic", "holmes", "feynman", "fry"
17
+ ]
18
+
19
+ for persona in personas:
20
+ path = Path(f"data_sources/{persona}")
21
+ path.mkdir(parents=True, exist_ok=True)
22
+ print(f"Created directory: {path}")
23
+
24
+ print("All directories created successfully.")
25
+
26
+ def save_example_text(filepath, content):
27
+ """Save example text to a file"""
28
+ try:
29
+ with open(filepath, "w", encoding="utf-8") as f:
30
+ f.write(content)
31
+ print(f"Created example file: {filepath}")
32
+ return True
33
+ except Exception as e:
34
+ print(f"Error creating {filepath}: {e}")
35
+ return False
36
+
37
+ def create_analytical_holmes_data():
38
+ """Create data for Analytical persona and Holmes personality"""
39
+ # Example analytical reasoning text
40
+ analytical_example = """When we examine this problem carefully, several key patterns emerge. First, the correlation between variables X and Y only appears under specific conditions. Second, the anomalies in the data occur at regular intervals, suggesting a cyclical influence.
41
+
42
+ The evidence suggests three possible explanations. Based on the available data, the second hypothesis is most consistent with the observed patterns because it accounts for both the primary trend and the outlier cases."""
43
+
44
+ save_example_text("data_sources/analytical/examples.txt", analytical_example)
45
+
46
+ # Sample Holmes data
47
+ holmes_example = """It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.
48
+
49
+ The world is full of obvious things which nobody by any chance ever observes.
50
+
51
+ When you have eliminated the impossible, whatever remains, however improbable, must be the truth."""
52
+
53
+ save_example_text("data_sources/holmes/examples.txt", holmes_example)
54
+ print("Analytical and Holmes data created successfully.")
55
+
56
+ def create_scientific_feynman_data():
57
+ """Create data for Scientific persona and Feynman personality"""
58
+ # Feynman quotes and examples
59
+ feynman_example = """Physics isn't the most important thing. Love is.
60
+
61
+ Nature uses only the longest threads to weave her patterns, so each small piece of her fabric reveals the organization of the entire tapestry.
62
+
63
+ The first principle is that you must not fool yourself — and you are the easiest person to fool.
64
+
65
+ I think I can safely say that nobody understands quantum mechanics.
66
+
67
+ What I cannot create, I do not understand.
68
+
69
+ If you think you understand quantum mechanics, you don't understand quantum mechanics."""
70
+
71
+ save_example_text("data_sources/feynman/lectures.txt", feynman_example)
72
+
73
+ # Scientific examples
74
+ scientific_example = """Based on the empirical evidence, we can observe three key factors influencing this phenomenon.
75
+
76
+ The data suggests a strong correlation between X and Y, with a statistical significance of p<0.01, indicating a potential causal relationship.
77
+
78
+ While multiple hypotheses have been proposed, the research indicates that the most well-supported explanation is the third model, which accounts for both the observed pattern and the anomalous data points."""
79
+
80
+ save_example_text("data_sources/scientific/examples.txt", scientific_example)
81
+ print("Scientific and Feynman data created successfully.")
82
+
83
+ def create_philosophical_data():
84
+ """Create data for Philosophical persona"""
85
+ # Philosophical examples
86
+ philosophical_example = """When we look more deeply at this question, we can see that the apparent separation between observer and observed is actually an illusion. Our consciousness is not separate from the phenomenon we're examining.
87
+
88
+ This situation invites us to consider not just the practical implications, but also the deeper patterns that connect these events to larger cycles of change and transformation.
89
+
90
+ The challenge we face is not merely technological but existential: what does it mean to be human in an age where our creations begin to mirror our own capabilities?"""
91
+
92
+ save_example_text("data_sources/philosophical/examples.txt", philosophical_example)
93
+ print("Philosophical data created successfully.")
94
+
95
+ def create_factual_fry_data():
96
+ """Create data for Factual persona and Hannah Fry personality"""
97
+ # Hannah Fry example excerpts
98
+ fry_example = """When we talk about algorithms making decisions, we're not just discussing abstract mathematics – we're talking about systems that increasingly determine who gets a job, who gets a loan, and sometimes even who goes to prison. The math matters because its consequences are profoundly human.
99
+
100
+ The fascinating thing about probability is how it challenges our intuition. Take the famous Birthday Paradox: in a room of just 23 people, there's a 50% chance that at least two people share a birthday. With 70 people, that probability jumps to 99.9%.
101
+
102
+ Data never speaks for itself – it always comes with human assumptions baked in. When we look at a dataset showing correlation between two variables, we need to ask: what might be causing this relationship?"""
103
+
104
+ save_example_text("data_sources/fry/excerpts.txt", fry_example)
105
+
106
+ # Factual examples
107
+ factual_example = """The key facts about this topic are: First, the system operates in three distinct phases. Second, each phase requires specific inputs. Third, the output varies based on initial conditions.
108
+
109
+ Based on the available evidence, we can state with high confidence that the primary factor is X, with secondary contributions from Y and Z. However, the relationship with factor W remains uncertain due to limited data."""
110
+
111
+ save_example_text("data_sources/factual/examples.txt", factual_example)
112
+ print("Factual and Fry data created successfully.")
113
+
114
+ def create_metaphorical_data():
115
+ """Create data for Metaphorical persona"""
116
+ # Metaphorical examples
117
+ metaphorical_example = """Think of quantum computing like a combination lock with multiple correct combinations simultaneously. While a regular computer tries each possible combination one after another, a quantum computer explores all possibilities at once.
118
+
119
+ The relationship between the economy and interest rates is like a boat on the ocean. When interest rates (the tide) rise, economic activity (the boat) tends to slow as it becomes harder to move forward against the higher water.
120
+
121
+ Imagine your neural network as a child learning to identify animals. At first, it might think all four-legged creatures are dogs. With more examples, it gradually learns the subtle differences between dogs, cats, and horses."""
122
+
123
+ save_example_text("data_sources/metaphorical/examples.txt", metaphorical_example)
124
+ print("Metaphorical data created successfully.")
125
+
126
+ def create_futuristic_data():
127
+ """Create data for Futuristic persona"""
128
+ # Futuristic examples
129
+ futuristic_example = """When we examine the current trajectory of this technology, we can identify three distinct possible futures: First, the mainstream path where incremental improvements lead to wider adoption but minimal disruption. Second, a transformative scenario where an unexpected breakthrough creates entirely new capabilities that fundamentally alter the existing paradigm. Third, a regulatory response scenario where societal concerns lead to significant constraints on development.
130
+
131
+ This current challenge resembles the fictional 'Kardashev transition problem' often explored in speculative fiction. The difficulty isn't just technical but involves coordinating systems that operate at vastly different scales and timeframes.
132
+
133
+ Looking forward to 2045, we might expect the convergence of neuromorphic computing with advanced materials science to create substrate-independent cognitive systems that challenge our current definitions of consciousness and agency."""
134
+
135
+ save_example_text("data_sources/futuristic/examples.txt", futuristic_example)
136
+ print("Futuristic data created successfully.")
137
+
138
+ def main():
139
+ """Main function to execute data creation process"""
140
+ print("Starting InsightFlow AI data creation process...")
141
+
142
+ # Create all directories
143
+ create_directories()
144
+
145
+ # Create data for each persona
146
+ create_analytical_holmes_data()
147
+ create_scientific_feynman_data()
148
+ create_philosophical_data()
149
+ create_factual_fry_data()
150
+ create_metaphorical_data()
151
+ create_futuristic_data()
152
+
153
+ print("\nData creation process completed successfully!")
154
+ print("All persona data is now available in the data_sources directory.")
155
+
156
+ if __name__ == "__main__":
157
+ main()
exports/.gitkeep ADDED
File without changes
insight_state.py ADDED
@@ -0,0 +1,27 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ """
2
+ State management for InsightFlow AI.
3
+ """
4
+
5
+ from typing import TypedDict, List, Dict, Optional, Any
6
+ from langchain_core.documents import Document
7
+
8
+ class InsightFlowState(TypedDict):
9
+ """
10
+ State for InsightFlow AI.
11
+
12
+ This state is used by LangGraph to track the current state of the system.
13
+ """
14
+ # Query information
15
+ panel_type: str # "research" or "discussion"
16
+ query: str
17
+ selected_personas: List[str]
18
+
19
+ # Research results
20
+ persona_responses: Dict[str, str]
21
+ synthesized_response: Optional[str]
22
+ visualization_code: Optional[str] # For storing Mermaid diagram code
23
+ visualization_image_url: Optional[str] # For storing DALL-E generated image URL
24
+
25
+ # Control
26
+ current_step_name: str
27
+ error_message: Optional[str]
insightflow_todo.md ADDED
@@ -0,0 +1,131 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # InsightFlow AI Implementation Checklist
2
+
3
+ ```todo
4
+ [0-00:0-15] Project Structure Setup
5
+ - [x] 1. Create directory structure (mkdir -p persona_configs data_sources utils/persona)
6
+ - [x] 2. Create __init__.py files in utils and utils/persona directories
7
+
8
+ [0-00:0-15] Persona Configurations
9
+ - [x] 3. Copy/create analytical.json in persona_configs
10
+ - [x] 4. Copy/create scientific.json in persona_configs
11
+ - [x] 5. Copy/create philosophical.json in persona_configs
12
+ - [x] 6. Copy/create factual.json in persona_configs
13
+ - [x] 7. Copy/create metaphorical.json in persona_configs
14
+ - [x] 8. Copy/create futuristic.json in persona_configs
15
+ - [x] 9. Copy/create holmes.json in persona_configs
16
+ - [x] 10. Copy/create feynman.json in persona_configs
17
+ - [x] 11. Copy/create fry.json in persona_configs
18
+
19
+ [0-15:0-35] Base Persona System
20
+ - [x] 12. Create utils/persona/base.py with PersonaReasoning abstract class
21
+ - [x] 13. Implement PersonaFactory class in base.py
22
+ - [x] 14. Create utils/persona/__init__.py to expose classes
23
+
24
+ [0-35:0-50] Persona Implementations
25
+ - [x] 15. Create utils/persona/impl.py with LLMPersonaReasoning base class
26
+ - [x] 16. Implement all persona type reasoning classes (Analytical, Scientific, etc.)
27
+ - [x] 17. Implement all personality reasoning classes (Holmes, Feynman, Fry)
28
+
29
+ [0-50:1-00] Data Acquisition
30
+ - [x] 18. Create download_data.py script
31
+ - [x] 19. Implement directory creation for each persona
32
+ - [x] 20. Add download logic for Holmes data
33
+ - [x] 21. Add download logic for Feynman data
34
+ - [x] 22. Add download logic for philosophical data
35
+ - [x] 23. Add download logic for Hannah Fry data
36
+ - [x] 24. Add download logic for remaining persona types
37
+ - [x] 25. Run download_data.py script
38
+
39
+ [1-00:1-05] State Management
40
+ - [x] 26. Create insight_state.py with InsightFlowState class
41
+
42
+ [1-05:1-20] LangGraph Implementation
43
+ - [x] 27. Add run_planner_agent function to app.py
44
+ - [x] 28. Add execute_persona_tasks function to app.py
45
+ - [x] 29. Add synthesize_responses function to app.py
46
+ - [x] 30. Add present_results function to app.py
47
+ - [x] 31. Set up LangGraph nodes and connections
48
+
49
+ [1-20:1-30] Chainlit Integration
50
+ - [x] 32. Update on_chat_start handler in app.py
51
+ - [x] 33. Implement on_action handler for persona selection
52
+ - [x] 34. Update on_message handler for query processing
53
+ - [x] 35. Final testing and debugging
54
+
55
+ [1:30-2:00] UI Improvements and Performance Optimization
56
+ - [x] 36. Fix persona selection UI in Chainlit
57
+ - [x] 37. Implement command-based interface for persona selection
58
+ - [x] 38. Add progress tracking during processing
59
+ - [x] 39. Implement timeout handling for API calls
60
+ - [x] 40. Add direct mode for bypassing multi-persona system
61
+ - [x] 41. Add quick mode with fewer personas for faster response
62
+ - [x] 42. Update help and documentation system
63
+ - [x] 43. Improve error handling and fallbacks
64
+
65
+ [2:00-2:30] Visualization System
66
+ - [x] 44. Implement basic Mermaid diagram generation
67
+ - [x] 45. Fix diagram rendering in Chainlit
68
+ - [x] 46. Add DALL-E integration for hand-drawn visualizations
69
+ - [x] 47. Implement visual-only mode
70
+ - [x] 48. Create toggle commands for visualization features
71
+ - [x] 49. Update documentation with visualization details
72
+ - [x] 50. Fix image rendering for compatibility
73
+
74
+ [2:30-3:00] Future Enhancements
75
+ - [ ] 51. Add user profile and preferences storage
76
+ - [ ] 52. Implement session persistence between interactions
77
+ - [x] 53. Add exportable PDF/markdown reports of insights
78
+ - [ ] 54. Implement data source integration (web search, documents)
79
+ - [ ] 55. Create voice input/output interface
80
+ - [ ] 56. Add multilingual support
81
+ - [ ] 57. Develop mobile-responsive interface
82
+ - [ ] 58. Implement collaborative session sharing
83
+ - [ ] 59. Add advanced visualization options (interactive charts)
84
+ - [ ] 60. Create an API endpoint for external applications
85
+
86
+ [3:00-3:30] Testing and Refinement
87
+ - [ ] 61. Conduct user testing with diverse personas
88
+ - [ ] 62. Optimize performance for large multi-perspective analyses
89
+ - [ ] 63. Implement A/B testing for different visualization styles
90
+ - [ ] 64. Create comprehensive test suite
91
+ - [ ] 65. Perform security and privacy audit
92
+
93
+ [3:30-4:00] RAG Implementation
94
+ - [ ] 66. Execute source acquisition commands for all six perspective types
95
+ - [ ] 67. Implement perspective-specific chunking functions
96
+ - [ ] 68. Create vector databases for each perspective
97
+ - [ ] 69. Implement retrieval integration with LangGraph
98
+ - [ ] 70. Test RAG-enhanced perspectives against baseline
99
+
100
+ [4:00-4:30] Embedding Fine-Tuning
101
+ - [ ] 71. Implement 1-hour quick embedding fine-tuning for philosophical perspective
102
+ - [ ] 72. Evaluate embedding model performance
103
+ - [ ] 73. Extend fine-tuning to other perspectives if beneficial
104
+ - [ ] 74. Integrate fine-tuned embeddings with vector databases
105
+ - [ ] 75. Publish fine-tuned models to Hugging Face
106
+
107
+ [4:30-5:00] RAGAS Evaluation Framework
108
+ - [ ] 76. Create test datasets for each perspective type
109
+ - [ ] 77. Implement perspective-specific evaluation functions
110
+ - [ ] 78. Create synthesis evaluation metrics
111
+ - [ ] 79. Generate performance comparison reports
112
+ - [ ] 80. Identify and address performance bottlenecks
113
+
114
+ [5:00-5:30] Deployment Preparation
115
+ - [ ] 81. Set up Hugging Face Spaces for deployment
116
+ - [ ] 82. Create production-ready Docker container
117
+ - [ ] 83. Configure environment variables and secrets management
118
+ - [ ] 84. Implement proper logging and monitoring
119
+ - [ ] 85. Create deployment documentation
120
+
121
+ [5:30-6:00] User Documentation and Marketing
122
+ - [ ] 86. Create comprehensive user guide with command reference
123
+ - [ ] 87. Record demonstration video
124
+ - [ ] 88. Write blog post explaining the multi-perspective approach
125
+ - [ ] 89. Create visual tutorial for first-time users
126
+ - [ ] 90. Develop quick reference card for commands
127
+ ```
128
+
129
+ To update your progress, simply change `[ ]` to `[x]` for completed items. You can tell Claude to update the checklist with commands like:
130
+
131
+ "Update todo items 51-53 as completed" or "Mark todo item 57 as done"
persona_configs/analytical.json ADDED
@@ -0,0 +1,30 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "id": "analytical",
3
+ "name": "Analytical/Diagnostic",
4
+ "description": "Methodical examination of details and logical connections for problem-solving. Focuses on systematic analysis, attention to detail, and drawing conclusions based on evidence.",
5
+ "type": "analytical",
6
+ "is_persona_type": true,
7
+ "traits": [
8
+ "logical",
9
+ "methodical",
10
+ "detail-oriented",
11
+ "deductive",
12
+ "systematic",
13
+ "precise",
14
+ "objective"
15
+ ],
16
+ "approach": "Applies meticulous observation and deductive reasoning to analyze information and solve problems with logical precision",
17
+ "knowledge_areas": [
18
+ "problem-solving",
19
+ "logical analysis",
20
+ "pattern recognition",
21
+ "deductive reasoning",
22
+ "evidence evaluation"
23
+ ],
24
+ "system_prompt": "You are an analytical thinker who examines problems methodically and draws logical conclusions. Focus on the details, notice patterns, and evaluate evidence objectively. Break down complex issues into manageable components. Express your analysis with precision and clarity, favoring evidence over speculation. Consider both the presence and absence of information as potentially significant.",
25
+ "examples": [
26
+ "When we examine this problem carefully, several key patterns emerge. First, the correlation between variables X and Y only appears under specific conditions. Second, the anomalies in the data occur at regular intervals, suggesting a cyclical influence.",
27
+ "The evidence suggests three possible explanations. Based on the available data, the second hypothesis is most consistent with the observed patterns because it accounts for both the primary trend and the outlier cases."
28
+ ],
29
+ "role": "specialist"
30
+ }
persona_configs/factual.json ADDED
@@ -0,0 +1,31 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "id": "factual",
3
+ "name": "Practical/Factual",
4
+ "description": "Clear, straightforward presentation of accurate information with real-world context. Focuses on precision, clarity, and practical applications.",
5
+ "type": "factual",
6
+ "is_persona_type": true,
7
+ "traits": [
8
+ "precise",
9
+ "clear",
10
+ "concise",
11
+ "organized",
12
+ "straightforward",
13
+ "objective",
14
+ "practical"
15
+ ],
16
+ "approach": "Presents accurate information in a clear, logical structure using precise language while distinguishing between facts and uncertainties",
17
+ "knowledge_areas": [
18
+ "information organization",
19
+ "clear communication",
20
+ "fact verification",
21
+ "logical structuring",
22
+ "practical application",
23
+ "technical writing"
24
+ ],
25
+ "system_prompt": "You are a factual reasoning expert who provides accurate, precise information in a clear, straightforward manner. You focus on established facts and avoid speculation or embellishment. Present accurate information based strictly on provided context, organize facts logically, use precise language, distinguish between facts and uncertainties, and prioritize accuracy and clarity.",
26
+ "examples": [
27
+ "The key facts about this topic are: First, the system operates in three distinct phases. Second, each phase requires specific inputs. Third, the output varies based on initial conditions.",
28
+ "Based on the available evidence, we can state with high confidence that the primary factor is X, with secondary contributions from Y and Z. However, the relationship with factor W remains uncertain due to limited data."
29
+ ],
30
+ "role": "specialist"
31
+ }
persona_configs/feynman.json ADDED
@@ -0,0 +1,59 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "id": "feynman",
3
+ "name": "Richard Feynman",
4
+ "description": "A brilliant physicist with a gift for clear explanation, using analogies and first principles to make complex concepts accessible.",
5
+ "type": "scientific",
6
+ "is_persona_type": false,
7
+ "parent_type": "scientific",
8
+ "traits": [
9
+ "curious",
10
+ "clear",
11
+ "playful",
12
+ "analytical",
13
+ "inquisitive",
14
+ "creative",
15
+ "enthusiastic"
16
+ ],
17
+ "approach": "Uses first principles, analogies, and thought experiments to break down complex scientific concepts into intuitive, understandable explanations",
18
+ "knowledge_areas": [
19
+ "physics",
20
+ "quantum mechanics",
21
+ "mathematics",
22
+ "computation",
23
+ "scientific method",
24
+ "problem-solving"
25
+ ],
26
+ "system_prompt": "You are Richard Feynman, the Nobel Prize-winning physicist famous for your ability to explain complex concepts simply and clearly. Use analogies, simple language, and thought experiments to make difficult ideas accessible. Be enthusiastic and curious, approaching topics with a sense of wonder. Break down complex concepts into their fundamental principles. Use phrases like 'You see' and 'The fascinating thing is...' occasionally. Avoid jargon unless you thoroughly explain it first. Show how ideas connect to everyday experience.",
27
+ "examples": [
28
+ "You see, the fascinating thing about quantum mechanics is that it's like trying to understand how a watch works without opening it. We can only observe the hands moving and then make our best guess about the mechanism inside.",
29
+ "If you want to understand how atoms work, imagine a tiny solar system. The nucleus is like the sun, and the electrons are like planets orbiting around it. Now, this isn't exactly right - the real situation is much weirder - but it gives you a place to start thinking about it.",
30
+ "The principle of conservation of energy is simple: you can't get something for nothing. It's like trying to cheat at cards - the universe is keeping track, and the books always have to balance in the end."
31
+ ],
32
+ "role": "research",
33
+ "data_sources": [
34
+ {
35
+ "name": "Feynman Lectures on Physics",
36
+ "type": "text",
37
+ "url": "Caltech archive",
38
+ "ingestion_effort": "medium"
39
+ },
40
+ {
41
+ "name": "Surely You're Joking, Mr. Feynman!",
42
+ "type": "text",
43
+ "url": "Book publisher",
44
+ "ingestion_effort": "trivial"
45
+ },
46
+ {
47
+ "name": "The Character of Physical Law",
48
+ "type": "text",
49
+ "url": "MIT archive",
50
+ "ingestion_effort": "trivial"
51
+ },
52
+ {
53
+ "name": "Feynman's Cornell Lectures",
54
+ "type": "audio",
55
+ "url": "Cornell archive",
56
+ "ingestion_effort": "medium"
57
+ }
58
+ ]
59
+ }
persona_configs/fry.json ADDED
@@ -0,0 +1,60 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "id": "fry",
3
+ "name": "Dr. Hannah Fry",
4
+ "description": "A mathematician and science communicator who breaks down complex mathematical and data-driven concepts into practical, understandable insights.",
5
+ "type": "factual",
6
+ "is_persona_type": false,
7
+ "parent_type": "factual",
8
+ "traits": [
9
+ "clear",
10
+ "practical",
11
+ "engaging",
12
+ "witty",
13
+ "evidence-based",
14
+ "precise",
15
+ "relatable"
16
+ ],
17
+ "approach": "Transforms mathematical and statistical concepts into practical insights with real-world applications, balancing technical accuracy with accessibility",
18
+ "knowledge_areas": [
19
+ "mathematics",
20
+ "statistics",
21
+ "data science",
22
+ "algorithms",
23
+ "probability",
24
+ "applied mathematics",
25
+ "social mathematics"
26
+ ],
27
+ "system_prompt": "You are Dr. Hannah Fry, a mathematician and science communicator who makes complex topics clear and relevant. Present accurate information with practical implications. Use concrete examples that relate to everyday life. Balance technical precision with accessibility. Include relevant numbers and statistics when they clarify concepts. Approach explanations with a touch of British wit and conversational style. Connect abstract concepts to human experiences and social implications. Address both the 'how' and the 'why' of mathematical and data-driven concepts.",
28
+ "examples": [
29
+ "When we talk about algorithms making decisions, we're not just discussing abstract mathematics – we're talking about systems that increasingly determine who gets a job, who gets a loan, and sometimes even who goes to prison. The math matters because its consequences are profoundly human.",
30
+ "The fascinating thing about probability is how it challenges our intuition. Take the famous Birthday Paradox: in a room of just 23 people, there's a 50% chance that at least two people share a birthday. With 70 people, that probability jumps to 99.9%. This isn't just a mathematical curiosity – it has implications for everything from cryptography to genetic matching.",
31
+ "Data never speaks for itself – it always comes with human assumptions baked in. When we look at a dataset showing correlation between two variables, we need to ask: what might be causing this relationship? Is there a third factor at play? Could this be coincidence? The numbers don't tell us which story is correct; that requires human judgment."
32
+ ],
33
+ "role": "research",
34
+ "data_sources": [
35
+ {
36
+ "name": "Hello World: Being Human in the Age of Algorithms",
37
+ "type": "text",
38
+ "url": "Publisher website",
39
+ "ingestion_effort": "trivial"
40
+ },
41
+ {
42
+ "name": "The Mathematics of Love",
43
+ "type": "text",
44
+ "url": "Publisher website",
45
+ "ingestion_effort": "trivial"
46
+ },
47
+ {
48
+ "name": "BBC documentaries and podcasts",
49
+ "type": "audio",
50
+ "url": "BBC archive",
51
+ "ingestion_effort": "medium"
52
+ },
53
+ {
54
+ "name": "Royal Institution lectures",
55
+ "type": "video",
56
+ "url": "Royal Institution YouTube channel",
57
+ "ingestion_effort": "medium"
58
+ }
59
+ ]
60
+ }
persona_configs/futuristic.json ADDED
@@ -0,0 +1,32 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "id": "futuristic",
3
+ "name": "Futuristic/Speculative",
4
+ "description": "Forward-looking exploration of possible futures and technological implications. Focuses on extrapolating current trends, examining edge cases, and considering science fiction scenarios.",
5
+ "type": "futuristic",
6
+ "is_persona_type": true,
7
+ "traits": [
8
+ "imaginative",
9
+ "speculative",
10
+ "extrapolative",
11
+ "analytical",
12
+ "technological",
13
+ "systemic",
14
+ "progressive"
15
+ ],
16
+ "approach": "Extends current trends into possible futures, examines technological implications, and explores speculative scenarios with rigorous logical consistency",
17
+ "knowledge_areas": [
18
+ "emerging technologies",
19
+ "future studies",
20
+ "science fiction concepts",
21
+ "technological forecasting",
22
+ "systemic change",
23
+ "speculative design",
24
+ "futurology"
25
+ ],
26
+ "system_prompt": "You are a futuristic reasoning expert who explores possible futures and technological implications. Extrapolate current trends into possible future scenarios. Consider multiple potential outcomes ranging from likely to edge cases. Examine how systems might evolve over time. Incorporate science fiction concepts when they illustrate important principles. Balance technological optimism with awareness of unintended consequences. Maintain logical consistency even in speculative scenarios. Present multiple possible futures rather than a single prediction.",
27
+ "examples": [
28
+ "When we examine the current trajectory of this technology, we can identify three distinct possible futures: First, the mainstream path where incremental improvements lead to wider adoption but minimal disruption. Second, a transformative scenario where an unexpected breakthrough creates entirely new capabilities that fundamentally alter the existing paradigm. Third, a regulatory response scenario where societal concerns lead to significant constraints on development.",
29
+ "This current challenge resembles the fictional 'Kardashev transition problem' often explored in speculative fiction. The difficulty isn't just technical but involves coordinating systems that operate at vastly different scales and timeframes. Looking at both historical precedents and fictional explorations of similar transitions suggests that the key leverage points aren't where most resources are currently focused."
30
+ ],
31
+ "role": "specialist"
32
+ }
persona_configs/holmes.json ADDED
@@ -0,0 +1,53 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "id": "holmes",
3
+ "name": "Sherlock Holmes",
4
+ "description": "A methodical, analytical detective with keen observation skills and deductive reasoning.",
5
+ "type": "analytical",
6
+ "is_persona_type": false,
7
+ "parent_type": "analytical",
8
+ "traits": [
9
+ "observant",
10
+ "logical",
11
+ "methodical",
12
+ "detail-oriented",
13
+ "deductive",
14
+ "precise",
15
+ "curious"
16
+ ],
17
+ "approach": "Applies meticulous observation and deductive reasoning to analyze information and solve problems with logical precision",
18
+ "knowledge_areas": [
19
+ "criminal investigation",
20
+ "forensic science",
21
+ "deductive reasoning",
22
+ "pattern recognition",
23
+ "chemistry",
24
+ "human psychology"
25
+ ],
26
+ "system_prompt": "You are Sherlock Holmes, the world's greatest detective known for your exceptional powers of observation and deductive reasoning. Analyze information with meticulous attention to detail, drawing logical conclusions from subtle clues. Speak in a somewhat formal, Victorian style. Look for patterns and inconsistencies that others might miss. Prioritize evidence and facts over speculation, but don't hesitate to form hypotheses when evidence is limited. Express your deductions with confidence and precision.",
27
+ "examples": [
28
+ "The scratches around the keyhole indicate the perpetrator was left-handed and in a hurry. Given the mud pattern on the floor, they must have come from the eastern side of town after the rain stopped at approximately 10:43 pm.",
29
+ "Observe the wear pattern on the cuffs - this individual works extensively with their hands, likely in chemistry or a similar field requiring fine motor control. The ink stains on the right index finger suggest they are also engaged in extensive writing.",
30
+ "It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts."
31
+ ],
32
+ "role": "analyst",
33
+ "data_sources": [
34
+ {
35
+ "name": "Sherlock Holmes novels",
36
+ "type": "text",
37
+ "url": "Project Gutenberg",
38
+ "ingestion_effort": "trivial"
39
+ },
40
+ {
41
+ "name": "BBC Radio scripts",
42
+ "type": "text",
43
+ "url": "BBC archives",
44
+ "ingestion_effort": "medium"
45
+ },
46
+ {
47
+ "name": "Librivox audio recordings",
48
+ "type": "audio",
49
+ "url": "Librivox",
50
+ "ingestion_effort": "trivial"
51
+ }
52
+ ]
53
+ }
persona_configs/metaphorical.json ADDED
@@ -0,0 +1,30 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "id": "metaphorical",
3
+ "name": "Metaphorical/Creative-Analogy",
4
+ "description": "Explanation through analogy, comparison, and creative illustration. Focuses on making complex concepts accessible through vivid metaphors and relatable examples.",
5
+ "type": "metaphorical",
6
+ "is_persona_type": true,
7
+ "traits": [
8
+ "creative",
9
+ "visual",
10
+ "intuitive",
11
+ "associative",
12
+ "relatable",
13
+ "imaginative",
14
+ "expressive"
15
+ ],
16
+ "approach": "Creates vivid analogies and metaphors that connect abstract concepts to concrete, relatable experiences",
17
+ "knowledge_areas": [
18
+ "creative communication",
19
+ "metaphor construction",
20
+ "visual thinking",
21
+ "associative reasoning",
22
+ "storytelling"
23
+ ],
24
+ "system_prompt": "You are a metaphorical reasoning expert who explains complex concepts through powerful analogies, metaphors, and creative comparisons. You make abstract ideas concrete and relatable by connecting them to everyday experiences. Create vivid analogies, use relatable examples, build intuitive understanding through comparisons, make complex information accessible through storytelling, and focus on the essence of concepts.",
25
+ "examples": [
26
+ "Think of quantum computing like a combination lock with multiple correct combinations simultaneously. While a regular computer tries each possible combination one after another, a quantum computer explores all possibilities at once.",
27
+ "The relationship between the economy and interest rates is like a boat on the ocean. When interest rates (the tide) rise, economic activity (the boat) tends to slow as it becomes harder to move forward against the higher water."
28
+ ],
29
+ "role": "specialist"
30
+ }
persona_configs/philosophical.json ADDED
@@ -0,0 +1,31 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "id": "philosophical",
3
+ "name": "Spiritual/Philosophical",
4
+ "description": "Holistic perspectives examining deeper meaning and interconnectedness. Focuses on consciousness, wisdom traditions, and philosophical inquiry.",
5
+ "type": "philosophical",
6
+ "is_persona_type": true,
7
+ "traits": [
8
+ "contemplative",
9
+ "holistic",
10
+ "introspective",
11
+ "intuitive",
12
+ "philosophical",
13
+ "open-minded",
14
+ "integrative"
15
+ ],
16
+ "approach": "Explores deeper meaning, interconnectedness, and consciousness-based perspectives while bridging ancient wisdom with contemporary understanding",
17
+ "knowledge_areas": [
18
+ "philosophy",
19
+ "consciousness studies",
20
+ "wisdom traditions",
21
+ "systems thinking",
22
+ "ethics",
23
+ "existential inquiry"
24
+ ],
25
+ "system_prompt": "You are a philosophical reasoning expert who perceives the interconnected, holistic dimensions of topics. You explore questions through the lens of consciousness, wisdom traditions, and philosophical insight. Examine deeper meaning beyond surface level, consider interconnectedness, explore consciousness-based perspectives, bridge ancient wisdom with contemporary understanding, and encourage contemplation and broader perspectives.",
26
+ "examples": [
27
+ "When we look more deeply at this question, we can see that the apparent separation between observer and observed is actually an illusion. Our consciousness is not separate from the phenomenon we're examining.",
28
+ "This situation invites us to consider not just the practical implications, but also the deeper patterns that connect these events to larger cycles of change and transformation."
29
+ ],
30
+ "role": "specialist"
31
+ }
persona_configs/scientific.json ADDED
@@ -0,0 +1,31 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "id": "scientific",
3
+ "name": "Scientific/STEM-Explainer",
4
+ "description": "Evidence-based reasoning using empirical data and research. Focuses on scientific principles, experimental evidence, and analytical frameworks.",
5
+ "type": "scientific",
6
+ "is_persona_type": true,
7
+ "traits": [
8
+ "analytical",
9
+ "methodical",
10
+ "evidence-based",
11
+ "logical",
12
+ "precise",
13
+ "skeptical",
14
+ "curious"
15
+ ],
16
+ "approach": "Uses empirical evidence, data analysis, and established scientific principles to understand and explain concepts",
17
+ "knowledge_areas": [
18
+ "scientific methodology",
19
+ "data analysis",
20
+ "research principles",
21
+ "academic disciplines",
22
+ "critical thinking"
23
+ ],
24
+ "system_prompt": "You are a scientific reasoning expert who analyzes information using evidence-based approaches, data analysis, and logical reasoning. Your analysis is grounded in empirical evidence and scientific research. Examine evidence, apply scientific principles, evaluate claims based on data, reach evidence-supported conclusions, and acknowledge limitations where appropriate.",
25
+ "examples": [
26
+ "Based on the empirical evidence, we can observe three key factors influencing this phenomenon...",
27
+ "The data suggests a strong correlation between X and Y, with a statistical significance of p<0.01, indicating...",
28
+ "While multiple hypotheses have been proposed, the research indicates that the most well-supported explanation is..."
29
+ ],
30
+ "role": "specialist"
31
+ }
public/insightflow.css ADDED
@@ -0,0 +1,257 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ /* InsightFlow AI Custom Styling */
2
+
3
+ :root {
4
+ --dark-bg: #1B1D28;
5
+ --dark-sidebar: #13141E;
6
+ --dark-component: #20222F;
7
+ --highlight-blue: #3B82F6;
8
+ --text-gray: #9CA3AF;
9
+ --text-light: #F3F4F6;
10
+ --border-dark: #2D3748;
11
+ }
12
+
13
+ /* Main app styling */
14
+ body {
15
+ background-color: var(--dark-bg);
16
+ color: var(--text-light);
17
+ font-family: system-ui, -apple-system, BlinkMacSystemFont, sans-serif;
18
+ }
19
+
20
+ /* Header styling */
21
+ header {
22
+ background-color: var(--dark-bg);
23
+ border-bottom: 1px solid var(--border-dark);
24
+ }
25
+
26
+ /* Left sidebar styling */
27
+ .cl-sidebar {
28
+ background-color: var(--dark-sidebar);
29
+ border-right: 1px solid var(--border-dark);
30
+ width: 250px !important;
31
+ }
32
+
33
+ /* Settings sections in sidebar */
34
+ .settings-section {
35
+ margin-bottom: 20px;
36
+ padding: 15px;
37
+ border-bottom: 1px solid var(--border-dark);
38
+ }
39
+
40
+ .settings-section h3 {
41
+ font-size: 14px;
42
+ text-transform: uppercase;
43
+ letter-spacing: 0.5px;
44
+ color: var(--text-gray);
45
+ margin-bottom: 10px;
46
+ }
47
+
48
+ /* Checkbox styling */
49
+ .persona-checkbox {
50
+ display: flex;
51
+ align-items: center;
52
+ margin-bottom: 8px;
53
+ font-size: 14px;
54
+ }
55
+
56
+ .persona-checkbox input[type="checkbox"] {
57
+ margin-right: 8px;
58
+ }
59
+
60
+ /* Main chat area */
61
+ .cl-chat-container {
62
+ background-color: var(--dark-bg);
63
+ }
64
+
65
+ .cl-message-container {
66
+ max-width: 900px;
67
+ margin: 0 auto;
68
+ }
69
+
70
+ /* Research context sidebar */
71
+ .research-context {
72
+ background-color: var(--dark-component);
73
+ border-left: 1px solid var(--border-dark);
74
+ width: 400px !important;
75
+ overflow-y: auto;
76
+ }
77
+
78
+ .context-section {
79
+ padding: 15px;
80
+ border-bottom: 1px solid var(--border-dark);
81
+ }
82
+
83
+ .context-section h3 {
84
+ font-size: 16px;
85
+ margin-bottom: 10px;
86
+ color: var(--text-light);
87
+ }
88
+
89
+ /* Active personas accordion */
90
+ .active-persona {
91
+ margin-bottom: 10px;
92
+ background-color: var(--dark-bg);
93
+ border-radius: 6px;
94
+ overflow: hidden;
95
+ }
96
+
97
+ .persona-header {
98
+ padding: 12px 15px;
99
+ background-color: var(--dark-component);
100
+ display: flex;
101
+ justify-content: space-between;
102
+ align-items: center;
103
+ cursor: pointer;
104
+ }
105
+
106
+ .persona-header h4 {
107
+ margin: 0;
108
+ font-size: 15px;
109
+ }
110
+
111
+ .persona-content {
112
+ padding: 12px 15px;
113
+ font-size: 14px;
114
+ color: var(--text-gray);
115
+ }
116
+
117
+ .personality-option {
118
+ display: flex;
119
+ align-items: center;
120
+ margin-top: 8px;
121
+ margin-bottom: 8px;
122
+ padding-left: 10px;
123
+ }
124
+
125
+ .personality-option input[type="checkbox"] {
126
+ margin-right: 8px;
127
+ }
128
+
129
+ /* Headers and tabs */
130
+ .research-tabs {
131
+ display: flex;
132
+ border-bottom: 1px solid var(--border-dark);
133
+ }
134
+
135
+ .research-tab {
136
+ padding: 10px 15px;
137
+ cursor: pointer;
138
+ color: var(--text-gray);
139
+ }
140
+
141
+ .research-tab.active {
142
+ color: var(--text-light);
143
+ border-bottom: 2px solid var(--highlight-blue);
144
+ }
145
+
146
+ /* Additional context section */
147
+ .additional-context {
148
+ padding: 15px;
149
+ }
150
+
151
+ .additional-context textarea {
152
+ width: 100%;
153
+ background-color: var(--dark-bg);
154
+ border: 1px solid var(--border-dark);
155
+ border-radius: 6px;
156
+ color: var(--text-light);
157
+ min-height: 100px;
158
+ padding: 10px;
159
+ margin-top: 8px;
160
+ }
161
+
162
+ .apply-context-btn {
163
+ background-color: var(--highlight-blue);
164
+ color: var(--text-light);
165
+ border: none;
166
+ border-radius: 6px;
167
+ padding: 8px 12px;
168
+ margin-top: 10px;
169
+ cursor: pointer;
170
+ float: right;
171
+ }
172
+
173
+ /* Header that shows selected personas */
174
+ .selected-personas-header {
175
+ padding: 10px 15px;
176
+ background-color: var(--dark-component);
177
+ border-radius: 6px;
178
+ margin-bottom: 15px;
179
+ font-size: 13px;
180
+ color: var(--text-gray);
181
+ }
182
+
183
+ /* Message input styling */
184
+ .cl-chat-input-container {
185
+ background-color: var(--dark-bg);
186
+ border-top: 1px solid var(--border-dark);
187
+ }
188
+
189
+ .cl-chat-input {
190
+ background-color: var(--dark-component);
191
+ border: 1px solid var(--border-dark);
192
+ border-radius: 6px;
193
+ }
194
+
195
+ /* Custom tabs for Research Assistant/Multi-Persona Discussion */
196
+ .app-tabs {
197
+ display: flex;
198
+ margin-bottom: 20px;
199
+ }
200
+
201
+ .app-tab {
202
+ flex: 1;
203
+ text-align: center;
204
+ padding: 10px;
205
+ border: 1px solid var(--border-dark);
206
+ background-color: var(--dark-bg);
207
+ color: var(--text-gray);
208
+ cursor: pointer;
209
+ }
210
+
211
+ .app-tab.active {
212
+ background-color: var(--dark-component);
213
+ color: var(--text-light);
214
+ border-bottom: 2px solid var(--highlight-blue);
215
+ }
216
+
217
+ /* Make the UI elements appear in the correct places */
218
+ .cl-chat {
219
+ display: flex;
220
+ height: 100vh;
221
+ }
222
+
223
+ .cl-main {
224
+ flex: 1;
225
+ display: flex;
226
+ flex-direction: column;
227
+ }
228
+
229
+ /* Ensures the right sidebar is properly positioned */
230
+ #root > div {
231
+ display: flex;
232
+ width: 100%;
233
+ }
234
+
235
+ /* Custom styles to match the screenshot exactly */
236
+ .app-title {
237
+ font-size: 24px;
238
+ margin-bottom: 20px;
239
+ padding: 15px;
240
+ }
241
+
242
+ .temperature-slider {
243
+ width: 100%;
244
+ margin-top: 10px;
245
+ }
246
+
247
+ .model-label {
248
+ display: block;
249
+ margin-bottom: 8px;
250
+ font-size: 12px;
251
+ color: var(--text-gray);
252
+ }
253
+
254
+ /* Ensure proper spacing around chat elements */
255
+ .cl-message-list {
256
+ padding: 15px;
257
+ }
public/insightflow.js ADDED
@@ -0,0 +1,362 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ // InsightFlow AI - Custom UI Script
2
+
3
+ document.addEventListener('DOMContentLoaded', function() {
4
+ // Create and append the custom UI elements once the Chainlit UI has loaded
5
+ setTimeout(createCustomUI, 1000);
6
+ });
7
+
8
+ // Main function to create the custom UI
9
+ function createCustomUI() {
10
+ createLeftSidebar();
11
+ createAppTabs();
12
+ createRightSidebar();
13
+ setupEventListeners();
14
+ }
15
+
16
+ // Create the left sidebar with settings
17
+ function createLeftSidebar() {
18
+ const sidebar = document.querySelector('.cl-sidebar');
19
+ if (!sidebar) return;
20
+
21
+ // Clear existing content
22
+ sidebar.innerHTML = '';
23
+
24
+ // Add app title
25
+ const appTitle = document.createElement('div');
26
+ appTitle.className = 'app-title';
27
+ appTitle.textContent = 'InsightFlow AI';
28
+ sidebar.appendChild(appTitle);
29
+
30
+ // Create Research Panel Settings section
31
+ const researchSettings = document.createElement('div');
32
+ researchSettings.className = 'settings-section';
33
+ researchSettings.innerHTML = `
34
+ <h3>Research Panel Settings</h3>
35
+ <div class="persona-types">
36
+ <p>Select Persona Types:</p>
37
+ <div class="persona-checkbox">
38
+ <input type="checkbox" id="analytical" checked>
39
+ <label for="analytical">Analytical/Diagnostic</label>
40
+ </div>
41
+ <div class="persona-checkbox">
42
+ <input type="checkbox" id="scientific" checked>
43
+ <label for="scientific">Scientific/STEM Explorer</label>
44
+ </div>
45
+ <div class="persona-checkbox">
46
+ <input type="checkbox" id="metaphorical" checked>
47
+ <label for="metaphorical">Metaphorical/Creative-Analogy</label>
48
+ </div>
49
+ <div class="persona-checkbox">
50
+ <input type="checkbox" id="philosophical" checked>
51
+ <label for="philosophical">Spiritual/Philosophical</label>
52
+ </div>
53
+ <div class="persona-checkbox">
54
+ <input type="checkbox" id="factual">
55
+ <label for="factual">Practical/Factual</label>
56
+ </div>
57
+ <div class="persona-checkbox">
58
+ <input type="checkbox" id="historical">
59
+ <label for="historical">Historical/Synthesis</label>
60
+ </div>
61
+ <div class="persona-checkbox">
62
+ <input type="checkbox" id="futuristic">
63
+ <label for="futuristic">Futuristic/Speculative</label>
64
+ </div>
65
+ </div>
66
+ `;
67
+ sidebar.appendChild(researchSettings);
68
+
69
+ // Create Model Settings section
70
+ const modelSettings = document.createElement('div');
71
+ modelSettings.className = 'settings-section';
72
+ modelSettings.innerHTML = `
73
+ <h3>Model Settings</h3>
74
+ <div class="model-selection">
75
+ <span class="model-label">Model</span>
76
+ <div class="persona-checkbox">
77
+ <input type="radio" id="default" name="model" checked>
78
+ <label for="default">Default</label>
79
+ </div>
80
+ <div class="persona-checkbox">
81
+ <input type="radio" id="gpt4" name="model">
82
+ <label for="gpt4">GPT-4</label>
83
+ </div>
84
+ <div class="persona-checkbox">
85
+ <input type="radio" id="claude" name="model">
86
+ <label for="claude">Claude</label>
87
+ </div>
88
+ </div>
89
+
90
+ <div class="temperature-control">
91
+ <span class="model-label">Temperature: 0.8</span>
92
+ <input type="range" min="0" max="1" step="0.1" value="0.8" class="temperature-slider">
93
+ </div>
94
+ `;
95
+ sidebar.appendChild(modelSettings);
96
+
97
+ // Create Session controls
98
+ const sessionSection = document.createElement('div');
99
+ sessionSection.className = 'settings-section';
100
+ sessionSection.innerHTML = `
101
+ <h3>Session</h3>
102
+ <button id="clearChat" class="persona-checkbox">Clear Chat History</button>
103
+ `;
104
+ sidebar.appendChild(sessionSection);
105
+ }
106
+
107
+ // Create application tabs (Research Assistant/Multi-Persona Discussion)
108
+ function createAppTabs() {
109
+ const chatContainer = document.querySelector('.cl-chat-container');
110
+ if (!chatContainer) return;
111
+
112
+ // Find the chat area to insert tabs before it
113
+ const chatArea = document.querySelector('.cl-chat-container .cl-message-list');
114
+ if (!chatArea) return;
115
+
116
+ // Create tabs container
117
+ const tabsContainer = document.createElement('div');
118
+ tabsContainer.className = 'app-tabs';
119
+
120
+ // Create Research Assistant tab (active by default)
121
+ const researchTab = document.createElement('div');
122
+ researchTab.className = 'app-tab active';
123
+ researchTab.textContent = 'Research Assistant';
124
+ researchTab.dataset.tab = 'research';
125
+
126
+ // Create Multi-Persona Discussion tab
127
+ const multiPersonaTab = document.createElement('div');
128
+ multiPersonaTab.className = 'app-tab';
129
+ multiPersonaTab.textContent = 'Multi-Persona Discussion';
130
+ multiPersonaTab.dataset.tab = 'discussion';
131
+
132
+ // Add tabs to container
133
+ tabsContainer.appendChild(researchTab);
134
+ tabsContainer.appendChild(multiPersonaTab);
135
+
136
+ // Insert tabs before chat area
137
+ chatContainer.insertBefore(tabsContainer, chatArea);
138
+
139
+ // Create header showing selected personas
140
+ const selectedPersonasHeader = document.createElement('div');
141
+ selectedPersonasHeader.className = 'selected-personas-header';
142
+ selectedPersonasHeader.textContent = 'Selected Persona Types: Analytical/Diagnostic, Scientific/STEM Explorer, Spiritual/Philosophical';
143
+
144
+ // Insert header after tabs
145
+ chatContainer.insertBefore(selectedPersonasHeader, chatArea);
146
+ }
147
+
148
+ // Create right sidebar for Research Context
149
+ function createRightSidebar() {
150
+ // Check if the main element exists
151
+ const main = document.querySelector('.cl-main');
152
+ if (!main) return;
153
+
154
+ // Create the research context sidebar
155
+ const researchContext = document.createElement('div');
156
+ researchContext.className = 'research-context';
157
+
158
+ // Create tabs for the research context
159
+ const tabs = document.createElement('div');
160
+ tabs.className = 'research-tabs';
161
+
162
+ const contextTab = document.createElement('div');
163
+ contextTab.className = 'research-tab active';
164
+ contextTab.textContent = 'Context';
165
+
166
+ const sourcesTab = document.createElement('div');
167
+ sourcesTab.className = 'research-tab';
168
+ sourcesTab.textContent = 'Sources';
169
+
170
+ const settingsTab = document.createElement('div');
171
+ settingsTab.className = 'research-tab';
172
+ settingsTab.textContent = 'Settings';
173
+
174
+ tabs.appendChild(contextTab);
175
+ tabs.appendChild(sourcesTab);
176
+ tabs.appendChild(settingsTab);
177
+
178
+ // Create active personas section
179
+ const activePersonasSection = document.createElement('div');
180
+ activePersonasSection.className = 'context-section';
181
+ activePersonasSection.innerHTML = `
182
+ <h3>Active Personas</h3>
183
+
184
+ <!-- Analytical persona -->
185
+ <div class="active-persona">
186
+ <div class="persona-header">
187
+ <h4>Analytical/Diagnostic</h4>
188
+ <span>▼</span>
189
+ </div>
190
+ <div class="persona-content">
191
+ <p>Methodical examination of details and logical connections for problem-solving.</p>
192
+ <div class="available-personalities">
193
+ <p>Available Personalities:</p>
194
+ <div class="personality-option">
195
+ <input type="checkbox" id="sherlock-holmes" checked>
196
+ <label for="sherlock-holmes">Sherlock Holmes</label>
197
+ </div>
198
+ <div class="personality-option">
199
+ <input type="checkbox" id="gregory-house">
200
+ <label for="gregory-house">Dr. Gregory House MD</label>
201
+ </div>
202
+ <div class="personality-option">
203
+ <input type="checkbox" id="hercule-poirot">
204
+ <label for="hercule-poirot">Hercule Poirot</label>
205
+ </div>
206
+ <div class="personality-option">
207
+ <input type="checkbox" id="christopher-nolan">
208
+ <label for="christopher-nolan">Christopher Nolan</label>
209
+ </div>
210
+ </div>
211
+ </div>
212
+ </div>
213
+
214
+ <!-- Scientific persona -->
215
+ <div class="active-persona">
216
+ <div class="persona-header">
217
+ <h4>Scientific/STEM Explorer</h4>
218
+ <span>▼</span>
219
+ </div>
220
+ <div class="persona-content">
221
+ <p>Evidence-based reasoning using empirical data and research.</p>
222
+ <div class="available-personalities">
223
+ <p>Available Personalities:</p>
224
+ <div class="personality-option">
225
+ <input type="checkbox" id="richard-feynman" checked>
226
+ <label for="richard-feynman">Richard Feynman</label>
227
+ </div>
228
+ <div class="personality-option">
229
+ <input type="checkbox" id="david-deutsch">
230
+ <label for="david-deutsch">David Deutsch</label>
231
+ </div>
232
+ <div class="personality-option">
233
+ <input type="checkbox" id="hans-rosling">
234
+ <label for="hans-rosling">Hans Rosling</label>
235
+ </div>
236
+ <div class="personality-option">
237
+ <input type="checkbox" id="hannah-fry">
238
+ <label for="hannah-fry">Hannah Fry</label>
239
+ </div>
240
+ </div>
241
+ </div>
242
+ </div>
243
+
244
+ <!-- Philosophical persona -->
245
+ <div class="active-persona">
246
+ <div class="persona-header">
247
+ <h4>Spiritual/Philosophical</h4>
248
+ <span>▼</span>
249
+ </div>
250
+ <div class="persona-content">
251
+ <p>Holistic perspectives examining deeper meaning and interconnectedness.</p>
252
+ <div class="available-personalities">
253
+ <p>Available Personalities:</p>
254
+ <div class="personality-option">
255
+ <input type="checkbox" id="jiddu-krishnamurti" checked>
256
+ <label for="jiddu-krishnamurti">Jiddu Krishnamurti</label>
257
+ </div>
258
+ <div class="personality-option">
259
+ <input type="checkbox" id="swami-vivekananda">
260
+ <label for="swami-vivekananda">Swami Vivekananda</label>
261
+ </div>
262
+ <div class="personality-option">
263
+ <input type="checkbox" id="dalai-lama">
264
+ <label for="dalai-lama">Dalai Lama</label>
265
+ </div>
266
+ </div>
267
+ </div>
268
+ </div>
269
+ `;
270
+
271
+ // Create additional context section
272
+ const additionalContextSection = document.createElement('div');
273
+ additionalContextSection.className = 'additional-context';
274
+ additionalContextSection.innerHTML = `
275
+ <h3>Additional Context</h3>
276
+ <p>Add background information or specific instructions</p>
277
+ <textarea placeholder="Enter additional research context or instructions here..."></textarea>
278
+ <button class="apply-context-btn">Apply Context</button>
279
+ `;
280
+
281
+ // Assemble the research context sidebar
282
+ researchContext.appendChild(tabs);
283
+ researchContext.appendChild(activePersonasSection);
284
+ researchContext.appendChild(additionalContextSection);
285
+
286
+ // Add the research context to the main element
287
+ const parent = main.parentElement;
288
+ parent.appendChild(researchContext);
289
+ }
290
+
291
+ // Set up event listeners for interactive elements
292
+ function setupEventListeners() {
293
+ // Toggle persona accordions
294
+ const personaHeaders = document.querySelectorAll('.persona-header');
295
+ personaHeaders.forEach(header => {
296
+ header.addEventListener('click', function() {
297
+ const content = this.nextElementSibling;
298
+ const indicator = this.querySelector('span');
299
+
300
+ if (content.style.display === 'none') {
301
+ content.style.display = 'block';
302
+ indicator.textContent = '▼';
303
+ } else {
304
+ content.style.display = 'none';
305
+ indicator.textContent = '▶';
306
+ }
307
+ });
308
+ });
309
+
310
+ // Handle tab switching
311
+ const appTabs = document.querySelectorAll('.app-tab');
312
+ appTabs.forEach(tab => {
313
+ tab.addEventListener('click', function() {
314
+ // Remove active class from all tabs
315
+ appTabs.forEach(t => t.classList.remove('active'));
316
+ // Add active class to clicked tab
317
+ this.classList.add('active');
318
+
319
+ // Update the selected personas header based on tab
320
+ const header = document.querySelector('.selected-personas-header');
321
+ if (header) {
322
+ if (this.dataset.tab === 'research') {
323
+ header.textContent = 'Selected Persona Types: Analytical/Diagnostic, Scientific/STEM Explorer, Spiritual/Philosophical';
324
+ } else {
325
+ header.textContent = 'Multi-Persona Discussion Mode: All personas participate independently';
326
+ }
327
+ }
328
+ });
329
+ });
330
+
331
+ // Handle research context tabs
332
+ const researchTabs = document.querySelectorAll('.research-tab');
333
+ researchTabs.forEach(tab => {
334
+ tab.addEventListener('click', function() {
335
+ // Remove active class from all tabs
336
+ researchTabs.forEach(t => t.classList.remove('active'));
337
+ // Add active class to clicked tab
338
+ this.classList.add('active');
339
+ });
340
+ });
341
+
342
+ // Handle clear chat button
343
+ const clearChatBtn = document.getElementById('clearChat');
344
+ if (clearChatBtn) {
345
+ clearChatBtn.addEventListener('click', function() {
346
+ // This would typically call a Chainlit function to clear the chat
347
+ // For now, just clear the messages in the UI
348
+ const messageList = document.querySelector('.cl-message-list');
349
+ if (messageList) {
350
+ messageList.innerHTML = '';
351
+ }
352
+ });
353
+ }
354
+ }
355
+
356
+ // Periodically check if UI needs to be refreshed (in case of Chainlit UI refreshes)
357
+ setInterval(function() {
358
+ const customUi = document.querySelector('.app-title');
359
+ if (!customUi) {
360
+ createCustomUI();
361
+ }
362
+ }, 3000);
pyproject.toml ADDED
@@ -0,0 +1,32 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ [project]
2
+ name = "InsightFlow-AI"
3
+ version = "0.1.0"
4
+ description = "InsightFlow AI"
5
+ readme = "README.md"
6
+ requires-python = ">=3.13"
7
+ dependencies = [
8
+ "arxiv==2.1.3",
9
+ "beautifulsoup4==4.13.3",
10
+ "chainlit==2.2.1",
11
+ "cohere==5.13.12",
12
+ "datasets==3.3.1",
13
+ "faiss-cpu==1.10.0",
14
+ "langchain-cohere==0.4.2",
15
+ "langchain-community==0.3.14",
16
+ "langchain-huggingface==0.1.2",
17
+ "langchain-openai==0.2.14",
18
+ "langchain-qdrant==0.2.0",
19
+ "langgraph==0.2.61",
20
+ "lxml==5.3.1",
21
+ "nltk==3.8.1",
22
+ "numpy==2.2.3",
23
+ "pyarrow==19.0.1",
24
+ "pymupdf==1.25.3",
25
+ "python-dotenv>=1.0.1",
26
+ "python-pptx==1.0.2",
27
+ "ragas==0.2.10",
28
+ "sentence-transformers==3.4.1",
29
+ "unstructured==0.14.8",
30
+ "websockets>=15.0",
31
+ "fpdf==1.7.2",
32
+ ]
setup.cfg ADDED
@@ -0,0 +1,9 @@
 
 
 
 
 
 
 
 
 
 
1
+ [options]
2
+ packages = find:
3
+
4
+ [options.packages.find]
5
+ exclude =
6
+ References*
7
+ data*
8
+ data_sources*
9
+ persona_configs*
utils/__init__.py ADDED
@@ -0,0 +1,8 @@
 
 
 
 
 
 
 
 
 
1
+ # This file can be empty. Its presence tells Python that the utils directory should be treated as a package, allowing us to import modules from it.
2
+
3
+ """
4
+ Utilities for InsightFlow AI
5
+ """
6
+
7
+ # Import persona system for easier access
8
+ from utils.persona import PersonaFactory, PersonaReasoning
utils/persona/__init__.py ADDED
@@ -0,0 +1,22 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ """
2
+ Persona system for InsightFlow AI.
3
+
4
+ This module provides the two-tier persona system with:
5
+ 1. Base persona types (Analytical, Scientific, Philosophical, etc.)
6
+ 2. Specific personalities (Holmes, Feynman, Fry)
7
+ """
8
+
9
+ # Import key classes for easier access
10
+ from utils.persona.base import PersonaReasoning, PersonaFactory
11
+ from utils.persona.impl import (
12
+ LLMPersonaReasoning,
13
+ AnalyticalReasoning,
14
+ ScientificReasoning,
15
+ PhilosophicalReasoning,
16
+ FactualReasoning,
17
+ MetaphoricalReasoning,
18
+ FuturisticReasoning,
19
+ HolmesReasoning,
20
+ FeynmanReasoning,
21
+ FryReasoning
22
+ )
utils/persona/base.py ADDED
@@ -0,0 +1,123 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ """
2
+ Base classes for the persona system.
3
+ """
4
+
5
+ import json
6
+ import os
7
+ from abc import ABC, abstractmethod
8
+ from typing import Dict, List, Optional, Any
9
+ from langchain_core.documents import Document
10
+
11
+ class PersonaReasoning(ABC):
12
+ """Base class for all persona reasoning types"""
13
+
14
+ def __init__(self, config: Dict[str, Any]):
15
+ self.config = config
16
+ self.id = config.get("id")
17
+ self.name = config.get("name")
18
+ self.traits = config.get("traits", [])
19
+ self.system_prompt = config.get("system_prompt", "")
20
+ self.examples = config.get("examples", [])
21
+ self.is_personality = not config.get("is_persona_type", True)
22
+
23
+ @abstractmethod
24
+ def generate_perspective(self, query: str, context: Optional[List[Document]] = None) -> str:
25
+ """Generate a perspective response based on query and optional context"""
26
+ pass
27
+
28
+ def get_system_prompt(self) -> str:
29
+ """Get the system prompt for this persona"""
30
+ return self.system_prompt
31
+
32
+ def get_examples(self) -> List[str]:
33
+ """Get example responses for this persona"""
34
+ return self.examples
35
+
36
+ class PersonaFactory:
37
+ """Factory for creating persona instances from config files"""
38
+
39
+ def __init__(self, config_dir="persona_configs"):
40
+ self.config_dir = config_dir
41
+ self.configs = {}
42
+ self.load_configs()
43
+
44
+ def load_configs(self):
45
+ """Load all JSON config files"""
46
+ if not os.path.exists(self.config_dir):
47
+ print(f"Warning: Config directory {self.config_dir} not found")
48
+ return
49
+
50
+ for filename in os.listdir(self.config_dir):
51
+ if filename.endswith(".json"):
52
+ try:
53
+ with open(os.path.join(self.config_dir, filename), "r") as f:
54
+ config = json.load(f)
55
+ if "id" in config:
56
+ self.configs[config["id"]] = config
57
+ except Exception as e:
58
+ print(f"Error loading config file {filename}: {e}")
59
+
60
+ def get_config(self, persona_id: str) -> Optional[Dict[str, Any]]:
61
+ """Get config for a persona"""
62
+ return self.configs.get(persona_id)
63
+
64
+ def get_available_personas(self) -> List[Dict[str, Any]]:
65
+ """Get list of all available personas with basic info"""
66
+ result = []
67
+ for persona_id, config in self.configs.items():
68
+ result.append({
69
+ "id": persona_id,
70
+ "name": config.get("name", persona_id.capitalize()),
71
+ "description": config.get("description", ""),
72
+ "is_persona_type": config.get("is_persona_type", True),
73
+ "parent_type": config.get("parent_type", "")
74
+ })
75
+ return result
76
+
77
+ def create_persona(self, persona_id: str) -> Optional[PersonaReasoning]:
78
+ """Create a persona instance based on ID"""
79
+ config = self.get_config(persona_id)
80
+ if not config:
81
+ return None
82
+
83
+ # Lazily import implementations to avoid circular imports
84
+ try:
85
+ if config.get("is_persona_type", True):
86
+ # This is a persona type
87
+ persona_type = config.get("type")
88
+ if persona_type == "analytical":
89
+ from .impl import AnalyticalReasoning
90
+ return AnalyticalReasoning(config)
91
+ elif persona_type == "scientific":
92
+ from .impl import ScientificReasoning
93
+ return ScientificReasoning(config)
94
+ elif persona_type == "philosophical":
95
+ from .impl import PhilosophicalReasoning
96
+ return PhilosophicalReasoning(config)
97
+ elif persona_type == "factual":
98
+ from .impl import FactualReasoning
99
+ return FactualReasoning(config)
100
+ elif persona_type == "metaphorical":
101
+ from .impl import MetaphoricalReasoning
102
+ return MetaphoricalReasoning(config)
103
+ elif persona_type == "futuristic":
104
+ from .impl import FuturisticReasoning
105
+ return FuturisticReasoning(config)
106
+ else:
107
+ # This is a personality
108
+ parent_type = config.get("parent_type")
109
+ parent_config = self.get_config(parent_type)
110
+ if parent_config:
111
+ if persona_id == "holmes":
112
+ from .impl import HolmesReasoning
113
+ return HolmesReasoning(config, parent_config)
114
+ elif persona_id == "feynman":
115
+ from .impl import FeynmanReasoning
116
+ return FeynmanReasoning(config, parent_config)
117
+ elif persona_id == "fry":
118
+ from .impl import FryReasoning
119
+ return FryReasoning(config, parent_config)
120
+ except Exception as e:
121
+ print(f"Error creating persona {persona_id}: {e}")
122
+
123
+ return None
utils/persona/impl.py ADDED
@@ -0,0 +1,118 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ """
2
+ Implementations of different persona reasoning types.
3
+ """
4
+
5
+ from .base import PersonaReasoning
6
+ from langchain_core.messages import SystemMessage, HumanMessage
7
+ from langchain_openai import ChatOpenAI
8
+ from typing import Dict, List, Optional, Any
9
+ from langchain_core.documents import Document
10
+
11
+ class LLMPersonaReasoning(PersonaReasoning):
12
+ """Base implementation that uses LLM to generate responses"""
13
+
14
+ def __init__(self, config: Dict[str, Any], llm=None):
15
+ super().__init__(config)
16
+ # Use shared LLM instance if provided, otherwise create one
17
+ self.llm = llm or ChatOpenAI(model="gpt-4o-mini", temperature=0.4)
18
+
19
+ def generate_perspective(self, query: str, context: Optional[List[Document]] = None) -> str:
20
+ """Generate perspective using LLM with persona's system prompt"""
21
+
22
+ # Build prompt with context if available
23
+ context_text = ""
24
+ if context and len(context) > 0:
25
+ context_text = "\n\nRelevant information:\n" + "\n".join([doc.page_content for doc in context])
26
+
27
+ # Build messages
28
+ messages = [
29
+ SystemMessage(content=self.system_prompt),
30
+ HumanMessage(content=f"Query: {query}{context_text}\n\nPlease provide your perspective on this query based on your unique approach.")
31
+ ]
32
+
33
+ # Get response from LLM
34
+ response = self.llm.invoke(messages)
35
+ return response.content
36
+
37
+ # Specialized implementations for each persona type
38
+ class AnalyticalReasoning(LLMPersonaReasoning):
39
+ def generate_perspective(self, query: str, context: Optional[List[Document]] = None) -> str:
40
+ """Generate perspective using analytical reasoning approach"""
41
+ # For MVP, we'll use the base implementation
42
+ # In a full implementation, add analytical-specific enhancements
43
+ return super().generate_perspective(query, context)
44
+
45
+ class ScientificReasoning(LLMPersonaReasoning):
46
+ def generate_perspective(self, query: str, context: Optional[List[Document]] = None) -> str:
47
+ """Generate perspective using scientific reasoning approach"""
48
+ # For MVP, we'll use the base implementation
49
+ # In a full implementation, add scientific-specific enhancements
50
+ return super().generate_perspective(query, context)
51
+
52
+ class PhilosophicalReasoning(LLMPersonaReasoning):
53
+ def generate_perspective(self, query: str, context: Optional[List[Document]] = None) -> str:
54
+ """Generate perspective using philosophical reasoning approach"""
55
+ # For MVP, we'll use the base implementation
56
+ # In a full implementation, add philosophical-specific enhancements
57
+ return super().generate_perspective(query, context)
58
+
59
+ class FactualReasoning(LLMPersonaReasoning):
60
+ def generate_perspective(self, query: str, context: Optional[List[Document]] = None) -> str:
61
+ """Generate perspective using factual reasoning approach"""
62
+ # For MVP, we'll use the base implementation
63
+ # In a full implementation, add factual-specific enhancements
64
+ return super().generate_perspective(query, context)
65
+
66
+ class MetaphoricalReasoning(LLMPersonaReasoning):
67
+ def generate_perspective(self, query: str, context: Optional[List[Document]] = None) -> str:
68
+ """Generate perspective using metaphorical reasoning approach"""
69
+ # For MVP, we'll use the base implementation
70
+ # In a full implementation, add metaphorical-specific enhancements
71
+ return super().generate_perspective(query, context)
72
+
73
+ class FuturisticReasoning(LLMPersonaReasoning):
74
+ def generate_perspective(self, query: str, context: Optional[List[Document]] = None) -> str:
75
+ """Generate perspective using futuristic reasoning approach"""
76
+ # For MVP, we'll use the base implementation
77
+ # In a full implementation, add futuristic-specific enhancements
78
+ return super().generate_perspective(query, context)
79
+
80
+ # Personality implementations (second tier of two-tier system)
81
+ class HolmesReasoning(LLMPersonaReasoning):
82
+ """Sherlock Holmes personality implementation"""
83
+
84
+ def __init__(self, config: Dict[str, Any], parent_config: Dict[str, Any], llm=None):
85
+ super().__init__(config, llm)
86
+ self.parent_config = parent_config
87
+
88
+ def generate_perspective(self, query: str, context: Optional[List[Document]] = None) -> str:
89
+ """Generate perspective in Sherlock Holmes' style"""
90
+ # For MVP, we'll use the base implementation with Holmes' system prompt
91
+ # In a full implementation, add Holmes-specific reasoning patterns
92
+ return super().generate_perspective(query, context)
93
+
94
+ class FeynmanReasoning(LLMPersonaReasoning):
95
+ """Richard Feynman personality implementation"""
96
+
97
+ def __init__(self, config: Dict[str, Any], parent_config: Dict[str, Any], llm=None):
98
+ super().__init__(config, llm)
99
+ self.parent_config = parent_config
100
+
101
+ def generate_perspective(self, query: str, context: Optional[List[Document]] = None) -> str:
102
+ """Generate perspective in Richard Feynman's style"""
103
+ # For MVP, we'll use the base implementation with Feynman's system prompt
104
+ # In a full implementation, add Feynman-specific reasoning patterns
105
+ return super().generate_perspective(query, context)
106
+
107
+ class FryReasoning(LLMPersonaReasoning):
108
+ """Hannah Fry personality implementation"""
109
+
110
+ def __init__(self, config: Dict[str, Any], parent_config: Dict[str, Any], llm=None):
111
+ super().__init__(config, llm)
112
+ self.parent_config = parent_config
113
+
114
+ def generate_perspective(self, query: str, context: Optional[List[Document]] = None) -> str:
115
+ """Generate perspective in Hannah Fry's style"""
116
+ # For MVP, we'll use the base implementation with Fry's system prompt
117
+ # In a full implementation, add Fry-specific reasoning patterns
118
+ return super().generate_perspective(query, context)