# Note: this requirements file is optimised for Hugging Face Spaces / Python 3.10.
# Use requirements_no_local.txt to install without local model inference (the simplest way to get going).
# Use requirements_cpu.txt for CPU instances and requirements_gpu.txt for GPU instances on Python 3.11.
pandas==2.3.3
gradio==5.49.1
transformers==4.56.0
spaces==0.42.1
boto3==1.40.48
pyarrow==21.0.0
openpyxl==3.1.5
markdown==3.7
tabulate==0.9.0
lxml==5.3.0
google-genai==1.33.0
openai==2.2.0
html5lib==1.1
beautifulsoup4==4.12.3
rapidfuzz==3.13.0
python-dotenv==1.1.0
# GPU (for Hugging Face instances)
# Torch/Unsloth and llama-cpp-python
# Latest versions compatible with CUDA 12.4
--extra-index-url https://download.pytorch.org/whl/cu124
torch==2.6.0
unsloth[cu124-torch260]==2025.9.4
unsloth_zoo==2025.9.5
timm==1.0.19
# Direct wheel link for a GPU-compatible (CUDA 12.4) build of llama-cpp-python v0.3.16, for Python 3.10 on Hugging Face
https://github.com/abetlen/llama-cpp-python/releases/download/v0.3.16-cu124/llama_cpp_python-0.3.16-cp310-cp310-linux_x86_64.whl