---
title: Large language model topic modelling
emoji: π
colorFrom: purple
colorTo: yellow
sdk: gradio
app_file: app.py
pinned: true
license: agpl-3.0
---
Large language model topic modelling
Extract topics and summarise outputs using Large Language Models (LLMs): Gemma 3 4B or GPT-OSS 20B if running locally (see tools/config.py to modify), Gemini 2.5, or Bedrock models (e.g. Claude 3 Haiku, Claude 3.7 Sonnet, or Nova models). The app queries the LLM with batches of responses to produce summary tables, which are then compared iteratively to output a table of general topics, subtopics, topic sentiment, and the text rows relevant to each. The prompts are designed for topic modelling public consultations, but they can be adapted to different contexts (see the LLM settings tab to modify them).
Instructions on use can be found in the README.md file. Try it out with this dummy development consultation dataset, which you can also try with zero-shot topics. Try also this dummy case notes dataset.
You can use an AWS Bedrock model (paid) or Gemini (which has a free API tier for Flash). Using Gemini requires an API key. To set up your own Gemini API key, go here.
NOTE: Large language models are not 100% accurate and may produce biased or harmful outputs. All outputs from this app must be reviewed by a human for harmful content, hallucinations, and accuracy.
Basic use:
- On the front page, choose your model for inference. Gemma 3/GPT-OSS will use 'on-device' inference. Calls to Gemini or AWS will require an API key that can be input on the 'LLM and topic extraction' page.
- Upload a csv/xlsx/parquet file containing at least one open text column.
- Select the relevant open text column from the dropdown.
- If you have your own suggested (zero-shot) topics, upload them (see the examples folder for an example file).
- Write a one-sentence description of the consultation/context of the open text.
- Click 'All in one - Extract topics, deduplicate, and summarise'. This will run through the whole analysis process from topic extraction, to topic deduplication, to topic-level and overall summaries.
- A summary xlsx workbook will be created on the front page in the box 'Overall summary xlsx file'. This combines all the results from the different processes into one workbook.
Installation guide
Here is a step-by-step guide to clone the repository, create a virtual environment, and install dependencies from a relevant requirements file. This guide assumes you have Git and Python 3.11 installed.
Step 1: Clone the Git Repository
First, you need to copy the project files to your local machine. Navigate to the directory where you want to store the project using the cd (change directory) command. Then, use git clone with the repository's URL.
Clone the repo:
git clone https://github.com/example-user/example-repo.git
Replace the URL with your repository's URL.
Navigate into the new project folder:
cd example-repo
Step 2: Create and Activate a Virtual Environment
A virtual environment is a self-contained directory that holds a specific Python interpreter and its own set of installed packages. This is crucial for isolating your project's dependencies.
Create the virtual environment: We'll use Python's built-in venv module. It's common practice to name the environment folder .venv.
python -m venv .venv
This command tells Python to create a new virtual environment in a folder named .venv.
Activate the environment: You must "activate" the environment to start using it. The command differs based on your operating system and shell.
On macOS / Linux (bash/zsh):
source .venv/bin/activate
On Windows (Command Prompt):
.\.venv\Scripts\activate
On Windows (PowerShell):
.\.venv\Scripts\Activate.ps1
You'll know it's active because your command prompt will be prefixed with (.venv).
Step 3: Install Dependencies
Now that your virtual environment is active, you can install all the required packages listed in the relevant project requirements.txt file using pip.
- Choose the relevant requirements file
Llama-cpp-python version 3.16 is compatible with Gemma 3 and GPT-OSS models, but at the time of writing it does not have prebuilt wheels for CPU inference or for Windows. A sister repository contains llama-cpp-python 3.16 wheels for Python 3.11/3.10 so that users can avoid having to build the package from source. If you prefer to build from source, please refer to the llama-cpp-python documentation here. I also have a guide to building the package on a Windows system here.
The repo provides several requirements files that are relevant for different situations. I would advise using requirements_gpu.txt for GPU environments, and requirements_cpu.txt for CPU environments:
- requirements_no_local: Can be used to install the app without local model inference for a more lightweight installation.
- requirements_gpu.txt: Used for Python 3.11 GPU-enabled environments. Uncomment the last requirement under 'Windows' for Windows compatibility (CUDA 12.4).
- requirements_cpu.txt: Used for Python 3.11 CPU-only environments. Uncomment the last requirement under 'Windows' for Windows compatibility.
- requirements.txt: Used for the Python 3.10 GPU-enabled environment on Hugging Face spaces (CUDA 12.4).
- Install packages from the requirements file:
pip install -r requirements_gpu.txt
This command reads every package name listed in the file and installs it into your .venv environment.
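Optionally, you can sanity-check the install. For example, if you used a requirements file that includes local inference, the following one-liner (an illustrative check, not part of the official steps) should print the installed llama-cpp-python version without errors:
python -c "import llama_cpp; print(llama_cpp.__version__)"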
You're all set! Your project is cloned, and all dependencies are installed in an isolated environment.
When you are finished working, you can leave the virtual environment by simply typing:
deactivate
Step 4: Verify CUDA compatibility (if using a GPU environment)
Install the relevant toolkit for CUDA 12.4 from here: https://developer.nvidia.com/cuda-12-4-0-download-archive
Restart your computer
Ensure you have the latest drivers for your NVIDIA GPU. Check your current driver version and memory availability by running nvidia-smi.
On the command line, CUDA compatibility can be checked by running nvcc --version.
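As an additional optional check, and only if PyTorch happens to be installed in your environment (it is not included in every requirements file), you can confirm that Python can see the GPU with:
python -c "import torch; print(torch.cuda.is_available())"
This should print True on a correctly configured GPU machine.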
Step 5: Ensure you have compatible NVIDIA drivers
Make sure you have the latest NVIDIA drivers installed for your GPU (if using WSL, be particularly careful that your drivers are compatible with it). Official drivers can be found here: https://www.nvidia.com/en-us/drivers
Your current driver version can be found by running nvidia-smi on the command line.
Step 6: Run the app
Go to the app project directory. Run python app.py
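For example, a typical session (assuming you cloned into example-repo and created the .venv environment as described above) looks like:
cd example-repo
source .venv/bin/activate
python app.py
Gradio will print a local URL (by default http://127.0.0.1:7860) that you can open in your browser.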
Step 7: (optional) change default configuration
A number of configuration options can be seen in the tools/config.py file. You can either pass these variables in as environment variables, or you can create a file at config/app_config.env that is read into the app on initialisation.
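As a rough illustration (the variable names below are placeholders rather than actual settings; check tools/config.py for the names the app really reads), the two approaches look like this:
SOME_SETTING=value python app.py
or create a config/app_config.env file containing one setting per line:
SOME_SETTING=value
ANOTHER_SETTING=value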