\n",
+ "\n",
+ "
\n",
+ " 
\n",
+ "\n",
+ "\n",
+ "
\n",
+ "

\n",
+ "

\n",
+ "

\n",
+ "
\n",
+ "\n",
+ "This
YOLOv5 🚀 notebook by
Ultralytics presents simple train, validate and predict examples to help start your AI adventure.
We hope that the resources in this notebook will help you get the most out of YOLOv5. Please browse the YOLOv5
Docs for details, raise an issue on
GitHub for support, and join our
Discord community for questions and discussions!\n",
+ "\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "7mGmQbAO5pQb"
+ },
+ "source": [
+ "# Setup\n",
+ "\n",
+ "Clone GitHub [repository](https://github.com/ultralytics/yolov5), install [dependencies](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) and check PyTorch and GPU."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "id": "wbvMlHd_QwMG",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "outputId": "e8225db4-e61d-4640-8b1f-8bfce3331cea"
+ },
+ "source": [
+ "!git clone https://github.com/ultralytics/yolov5 # clone\n",
+ "%cd yolov5\n",
+ "%pip install -qr requirements.txt comet_ml # install\n",
+ "\n",
+ "import torch\n",
+ "import utils\n",
+ "display = utils.notebook_init() # checks"
+ ],
+ "execution_count": 1,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stderr",
+ "text": [
+ "YOLOv5 🚀 v7.0-136-g71244ae Python-3.9.16 torch-2.0.0+cu118 CUDA:0 (Tesla T4, 15102MiB)\n"
+ ]
+ },
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Setup complete ✅ (2 CPUs, 12.7 GB RAM, 23.3/166.8 GB disk)\n"
+ ]
+ }
+ ]
+ },
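+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The `notebook_init()` call above already reports the environment. As an optional, illustrative sketch (not part of the original setup), the cell below makes the PyTorch and GPU checks explicit with plain `torch` calls."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {},
+ "source": [
+ "# Optional sketch: explicit PyTorch/GPU checks (equivalent information is printed by notebook_init() above)\n",
+ "import torch\n",
+ "\n",
+ "print(f'PyTorch {torch.__version__}')\n",
+ "print(f'CUDA available: {torch.cuda.is_available()}')\n",
+ "if torch.cuda.is_available():\n",
+ "    print(f'GPU 0: {torch.cuda.get_device_name(0)}')"
+ ],
+ "execution_count": null,
+ "outputs": []
+ },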
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "4JnkELT0cIJg"
+ },
+ "source": [
+ "# 1. Detect\n",
+ "\n",
+ "`detect.py` runs YOLOv5 inference on a variety of sources, downloading models automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases), and saving results to `runs/detect`. Example inference sources are:\n",
+ "\n",
+ "```shell\n",
+ "python detect.py --source 0 # webcam\n",
+ " img.jpg # image \n",
+ " vid.mp4 # video\n",
+ " screen # screenshot\n",
+ " path/ # directory\n",
+ " 'path/*.jpg' # glob\n",
+ " 'https://youtu.be/Zgi9g1ksQHc' # YouTube\n",
+ " 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "id": "zR9ZbuQCH7FX",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "outputId": "284ef04b-1596-412f-88f6-948828dd2b49"
+ },
+ "source": [
+ "!python detect.py --weights yolov5s.pt --img 640 --conf 0.25 --source data/images\n",
+ "# display.Image(filename='runs/detect/exp/zidane.jpg', width=600)"
+ ],
+ "execution_count": 13,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "\u001b[34m\u001b[1mdetect: \u001b[0mweights=['yolov5s.pt'], source=data/images, data=data/coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1\n",
+ "YOLOv5 🚀 v7.0-136-g71244ae Python-3.9.16 torch-2.0.0+cu118 CUDA:0 (Tesla T4, 15102MiB)\n",
+ "\n",
+ "Downloading https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s.pt to yolov5s.pt...\n",
+ "100% 14.1M/14.1M [00:00<00:00, 24.5MB/s]\n",
+ "\n",
+ "Fusing layers... \n",
+ "YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients\n",
+ "image 1/2 /content/yolov5/data/images/bus.jpg: 640x480 4 persons, 1 bus, 41.5ms\n",
+ "image 2/2 /content/yolov5/data/images/zidane.jpg: 384x640 2 persons, 2 ties, 60.0ms\n",
+ "Speed: 0.5ms pre-process, 50.8ms inference, 37.7ms NMS per image at shape (1, 3, 640, 640)\n",
+ "Results saved to \u001b[1mruns/detect/exp\u001b[0m\n"
+ ]
+ }
+ ]
+ },
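+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As an alternative to `detect.py`, the same model can be loaded from Python via [PyTorch Hub](https://pytorch.org/hub/ultralytics_yolov5/). The cell below is an illustrative sketch only: it assumes internet access to fetch `yolov5s` and reuses one of the sample images under `data/images`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {},
+ "source": [
+ "# Illustrative sketch: PyTorch Hub inference with the same yolov5s weights\n",
+ "import torch\n",
+ "\n",
+ "model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # downloads weights on first use\n",
+ "results = model('data/images/zidane.jpg')  # run inference on a sample image\n",
+ "results.print()  # summary: detected classes, counts and speed"
+ ],
+ "execution_count": null,
+ "outputs": []
+ },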
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "hkAzDWJ7cWTr"
+ },
+ "source": [
+ " \n",
+ "