{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Welcome to the start of your adventure in Agentic AI"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<table style=\"margin: 0; text-align: left; width:100%\">\n",
    "    <tr>\n",
    "        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
    "            <img src=\"../../assets/stop.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
    "        </td>\n",
    "        <td>\n",
    "            <h2 style=\"color:#ff7800;\">Are you ready for action??</h2>\n",
    "            <span style=\"color:#ff7800;\">Have you completed all the setup steps in the <a href=\"../setup/\">setup</a> folder?<br/>\n",
    "            Have you checked out the guides in the <a href=\"../guides/01_intro.ipynb\">guides</a> folder?<br/>\n",
    "            Well in that case, you're ready!!\n",
    "            </span>\n",
    "        </td>\n",
    "    </tr>\n",
    "</table>"
   ]
  },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "<table style=\"margin: 0; text-align: left; width:100%\">\n",
     "    <tr>\n",
     "        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
     "            <img src=\"../../assets/tools.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
     "        </td>\n",
     "        <td>\n",
     "            <h2 style=\"color:#00bfff;\">Treat these labs as a resource</h2>\n",
     "            <span style=\"color:#00bfff;\">I push updates to the code regularly. When people ask questions or run into problems, I incorporate the answers into the code, adding more examples and improved commentary. As a result, you'll notice that the code below isn't identical to the videos. Everything from the videos is here, but I've also added more steps and better explanations. Think of this as an interactive book that accompanies the lectures.\n",
    "            </span>\n",
    "        </td>\n",
    "    </tr>\n",
    "</table>"
   ]
  },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "### And please do remember to contact me if I can help\n",
     "\n",
     "I'd love to connect: https://www.linkedin.com/in/eddonner/\n",
     "\n",
     "\n",
     "### New to Notebooks like this one? Head over to the guides folder!\n",
     "\n",
     "Otherwise:\n",
     "1. Click where it says \"Select Kernel\" near the top right, and select the option called `.venv (Python 3.12.9)` or similar, which should be the first or most prominent choice.\n",
    "2. Click in each \"cell\" below, starting with the cell immediately below this text, and press Shift+Enter to run\n",
    "3. Enjoy!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "# First let's do an import\n",
    "from dotenv import load_dotenv\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Next it's time to load the API keys into environment variables\n",
    "\n",
    "load_dotenv(override=True)"
   ]
  },
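   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "A quick note on what just happened: `load_dotenv` reads key/value pairs from a `.env` file (typically in the project root, as described in the setup folder) and loads them into environment variables. As a minimal sketch - the value below is just a placeholder, not a real key - that file looks something like this:\n",
     "\n",
     "```\n",
     "# example only - replace with the real key you created during setup\n",
     "OPENAI_API_KEY=your-key-goes-here\n",
     "```"
    ]
   },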
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Check the keys\n",
    "\n",
    "import os\n",
    "openai_api_key = os.getenv('OPENAI_API_KEY')\n",
    "\n",
    "if openai_api_key:\n",
    "    print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
    "else:\n",
    "    print(\"OpenAI API Key not set - please head to the troubleshooting guide in the guides folder\")\n",
    "    \n"
   ]
  },
   {
    "cell_type": "code",
    "execution_count": 4,
    "metadata": {},
    "outputs": [],
    "source": [
     "# And now - the all-important import statement\n",
     "# If you get an import error - head over to the troubleshooting guide\n",
    "\n",
    "from openai import OpenAI"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "# And now we'll create an instance of the OpenAI class\n",
    "# If you're not sure what it means to create an instance of a class - head over to the guides folder!\n",
    "# If you get a NameError - head over to the guides folder to learn about NameErrors\n",
    "\n",
    "openai = OpenAI()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create a list of messages in the familiar OpenAI format\n",
    "\n",
    "messages = [{\"role\": \"user\", \"content\": \"What is 2+2?\"}]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# And now call it! Any problems, head to the troubleshooting guide\n",
    "\n",
    "response = openai.chat.completions.create(\n",
    "    model=\"gpt-4o-mini\",\n",
    "    messages=messages\n",
    ")\n",
    "\n",
    "print(response.choices[0].message.content)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "# And now - let's ask for a question:\n",
    "\n",
    "question = \"Please propose a hard, challenging question to assess someone's IQ. Respond only with the question.\"\n",
    "messages = [{\"role\": \"user\", \"content\": question}]\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# ask it\n",
    "response = openai.chat.completions.create(\n",
    "    model=\"gpt-4o-mini\",\n",
    "    messages=messages\n",
    ")\n",
    "\n",
    "question = response.choices[0].message.content\n",
    "\n",
    "print(question)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "# form a new messages list\n",
    "messages = [{\"role\": \"user\", \"content\": question}]\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Ask it again\n",
    "\n",
    "response = openai.chat.completions.create(\n",
    "    model=\"gpt-4o-mini\",\n",
    "    messages=messages\n",
    ")\n",
    "\n",
    "answer = response.choices[0].message.content\n",
    "print(answer)\n"
   ]
  },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# Display the answer nicely rendered as Markdown\n",
     "\n",
     "from IPython.display import Markdown, display\n",
     "\n",
     "display(Markdown(answer))\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Congratulations!\n",
    "\n",
    "That was a small, simple step in the direction of Agentic AI, with your new environment!\n",
    "\n",
    "Next time things get more interesting..."
   ]
  },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "<table style=\"margin: 0; text-align: left; width:100%\">\n",
     "    <tr>\n",
     "        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
     "            <img src=\"../../assets/exercise.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
     "        </td>\n",
     "        <td>\n",
     "            <h2 style=\"color:#ff7800;\">Exercise</h2>\n",
     "            <span style=\"color:#ff7800;\">Now try this commercial application:<br/>\n",
     "            First ask the LLM to pick a business area that might be worth exploring for an Agentic AI opportunity.<br/>\n",
     "            Then ask the LLM to present a pain-point in that industry - something challenging that might be ripe for an Agentic solution.<br/>\n",
     "            Finally, have a third LLM call propose the Agentic AI solution.\n",
    "            </span>\n",
    "        </td>\n",
    "    </tr>\n",
    "</table>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```\n",
    "# First create the messages:\n",
    "\n",
    "messages = [{\"role\": \"user\", \"content\": \"Something here\"}]\n",
    "\n",
    "# Then make the first call:\n",
    "\n",
    "response = openai.chat.completions.create(\n",
    "    model=\"gpt-4o-mini\",\n",
    "    messages=messages\n",
    ")\n",
    "\n",
    "# Then read the business idea:\n",
    "\n",
    "business_idea = response.choices[0].message.content\n",
    "\n",
    "# print(business_idea) \n",
    "\n",
    "# And repeat!\n",
    "```"
   ]
  },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# First exercise: ask the LLM to pick a business area that might be worth exploring for an Agentic AI opportunity.\n",
    "\n",
    "# First create the messages:\n",
    "query = \"Pick a business area that might be worth exploring for an Agentic AI opportunity.\"\n",
    "messages = [{\"role\": \"user\", \"content\": query}]\n",
    "\n",
    "# Then make the first call:\n",
    "\n",
    "response = openai.chat.completions.create(\n",
    "    model=\"gpt-4o-mini\",\n",
    "    messages=messages\n",
    ")\n",
    "\n",
    "# Then read the business idea:\n",
    "\n",
    "business_idea = response.choices[0].message.content\n",
    "\n",
    "# print(business_idea) \n",
    "\n",
    "# from IPython.display import Markdown, display\n",
    "\n",
    "display(Markdown(business_idea))\n",
    "\n",
    "# And repeat!"
   ]
  },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# Second exercise: ask the LLM to present a pain-point in that industry - something challenging that might be ripe for an Agentic solution.\n",
     "\n",
     "# First create the messages:\n",
     "\n",
     "prompt = f\"Please present a pain-point in this industry - something challenging that might be ripe for an Agentic solution: {business_idea}\"\n",
     "messages = [{\"role\": \"user\", \"content\": prompt}]\n",
     "\n",
     "# Then make the second call:\n",
     "\n",
     "response = openai.chat.completions.create(\n",
     "    model=\"gpt-4o-mini\",\n",
     "    messages=messages\n",
     ")\n",
     "\n",
     "# Then read the pain-point:\n",
     "\n",
     "painpoint = response.choices[0].message.content\n",
     "\n",
     "# print(painpoint)\n",
     "display(Markdown(painpoint))\n",
    "\n",
    "# And repeat!"
   ]
  },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# Third exercise: have a third LLM call propose the Agentic AI solution.\n",
     "\n",
     "# First create the messages:\n",
     "\n",
     "promptEx3 = f\"Please come up with a proposal for an Agentic AI solution to address this business pain-point: {painpoint}\"\n",
     "messages = [{\"role\": \"user\", \"content\": promptEx3}]\n",
     "\n",
     "# Then make the third call:\n",
     "\n",
     "response = openai.chat.completions.create(\n",
     "    model=\"gpt-4o-mini\",\n",
     "    messages=messages\n",
     ")\n",
     "\n",
     "# Then read the proposed solution:\n",
     "\n",
     "ex3_answer = response.choices[0].message.content\n",
     "# print(ex3_answer)\n",
     "display(Markdown(ex3_answer))"
   ]
   }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}