{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# First let's do an import\n", "from dotenv import load_dotenv\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Next it's time to load the API keys into environment variables\n", "\n", "load_dotenv(override=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Check the keys\n", "\n", "import os\n", "openRouter_api_key = os.getenv('OPENROUTER_API_KEY')\n", "\n", "if openRouter_api_key:\n", "    print(f\"OpenRouter API Key exists and begins {openRouter_api_key[:8]}\")\n", "else:\n", "    print(\"OpenRouter API Key not set - please head to the troubleshooting guide in the setup folder\")\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import requests\n", "\n", "# Set the model you want to use\n", "#MODEL = \"openai/gpt-4.1-nano\"\n", "MODEL = \"meta-llama/llama-3.3-8b-instruct:free\"\n", "#MODEL = \"openai/gpt-4.1-mini\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# This list will hold the chat history\n", "chatHistory = []" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Instead of using the OpenAI API, here I will use the OpenRouter API\n", "# This is a function that can be reused to chat with the OpenRouter API\n", "def chatWithOpenRouter(prompt):\n", "\n", "    # Add the prompt to the chat history\n", "    chatHistory.append({\"role\": \"user\", \"content\": prompt})\n", "\n", "    # Specify the URL and headers for the OpenRouter API\n", "    url = \"https://openrouter.ai/api/v1/chat/completions\"\n", "\n", "    headers = {\n", "        \"Authorization\": f\"Bearer {openRouter_api_key}\",\n", "        \"Content-Type\": \"application/json\"\n", "    }\n", "\n", "    payload = {\n", "        \"model\": MODEL,\n", "        
\"messages\": chatHistory\n", "    }\n", "\n", "    # Make the POST request to the OpenRouter API (with a timeout so a stuck request doesn't hang the notebook)\n", "    response = requests.post(url, headers=headers, json=payload, timeout=60)\n", "\n", "    # Check if the response is successful\n", "    # and return the assistant's reply\n", "    if response.status_code == 200:\n", "        print(f\"Raw response:\\n{response.json()}\")\n", "\n", "        assistantResponse = response.json()['choices'][0]['message']['content']\n", "        chatHistory.append({\"role\": \"assistant\", \"content\": assistantResponse})\n", "        return assistantResponse\n", "\n", "    else:\n", "        raise Exception(f\"Error: {response.status_code},\\n {response.text}\")\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# A message to use with the chatWithOpenRouter function\n", "message = \"What is 2+2?\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Now let's make a call to the chatWithOpenRouter function\n", "response = chatWithOpenRouter(message)\n", "print(response)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "question = \"Please propose a hard, challenging question to assess someone's IQ. Respond only with the question.\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Trying with a question\n", "response = chatWithOpenRouter(question)\n", "print(response)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Ask the model to solve the question it just proposed\n", "message = response\n", "answer = chatWithOpenRouter(\"Solve the question: \" + message)\n", "print(answer)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Congratulations!\n", "\n", "That was a small, simple step in the direction of Agentic AI, with your new environment!\n", "\n", "Next time things get more interesting..." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n",
" \n",
"## Exercise\n", "\n", "Now try this commercial application:\n", "\n", "1. First, ask the LLM to pick a business area that might be worth exploring for an Agentic AI opportunity.\n", "2. Then ask the LLM to present a pain-point in that industry - something challenging that might be ripe for an Agentic solution.\n", "3. Finally, make a third LLM call to propose the Agentic AI solution.\n",
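"\n", "One possible way to structure those three calls is a small chaining helper, sketched below. This is only an illustrative sketch - the `run_exercise` name and the exact prompt wordings are assumptions, not part of the course code. In the notebook you would pass in the `chatWithOpenRouter` function defined above.\n", "\n", "```python\n", "def run_exercise(ask):\n", "    # 'ask' is any function that takes a prompt string and returns the model's reply,\n", "    # e.g. the chatWithOpenRouter function from earlier in this notebook\n", "    idea = ask(\"Pick a business area that might be worth exploring for an Agentic AI opportunity.\")\n", "    pain = ask(f\"Present a pain-point in the {idea} industry that might be ripe for an Agentic solution.\")\n", "    return ask(f\"Propose an Agentic AI solution for this pain-point: {pain}\")\n", "\n", "# print(run_exercise(chatWithOpenRouter))\n", "```\n",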
"