Spaces: Running on Zero
Commit History
Updated pyproject.toml file with requirements from requirements_no_local.txt
6191d21
Reverted requirements files to previous direct wheel links
f4c57ae
Updated git clone reference
901fd0e
Added deduplication with LLM functionality. Minor package updates. Updated installation documentation.
6f3d42c
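The "deduplication with LLM" step is not spelled out here; a common pattern is a cheap exact-match pass first, leaving only near-duplicates for the model to judge. A minimal sketch of that first pass (function names are illustrative, not taken from the repo):

```python
import re

def normalize_topic(topic: str) -> str:
    """Lower-case, trim, and collapse whitespace so trivially different
    spellings of the same topic compare equal."""
    return re.sub(r"\s+", " ", topic.strip().lower())

def exact_dedupe(topics: list[str]) -> list[str]:
    """Drop exact duplicates (after normalisation), keeping the first
    occurrence with its original spelling. Near-duplicates such as
    'Billing' vs 'Billing issues' are left for the LLM pass."""
    seen: set[str] = set()
    kept: list[str] = []
    for topic in topics:
        key = normalize_topic(topic)
        if key not in seen:
            seen.add(key)
            kept.append(topic)
    return kept
```

Running the LLM only on the survivors of this pass keeps the expensive comparison set small.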
Enhanced user interface and documentation for topic extraction tool. Updated file input labels for clarity, improved README instructions, and refined progress tracking in API calls. Added time tracking for LLM calls in output files.
9a0231a
Added examples for structured summaries and groups. Adapted functions for structured summaries. Simplified front tab GUI
5ed844b
Optimised prompts. Updated Gradio. Added example for zero shot topics. Added support for Granite 4 local model
9e8c029
No longer apply str.capitalize to summaries
3ee11fd
Corrected blank KV_QUANT_LEVEL values. Removed erroneous extract topics output
7ae3b47
Corrected KV quant value when not specified (set to None)
4cd2443
Corrected KV quantisation definition in config. Moved examples to top of screen under intro.
73188ff
Improved inference for low-VRAM systems, improved unsloth usage, updated packages, switched default local model to Qwen 3 4b
bd1a015
Uploaded example xlsx files with repo
bd19985
Updated output logging files. Updated examples and readme. Minor prompt adjustments and package updates
fff212b
Corrected reasoning setting
4998b3c
Revised intro and readme. Reasoning suffix setting simplified. All in one xlsx returns correct tables.
8161b79
Position of usage logs input corrected
6acf6b9
Tracking model state through all_in_one_function
1a1a845
Corrected usage of stop strings, streaming
6154c1e
Added stop strings, optimised llama-cpp-python inference for streaming
6eaced0
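Handling stop strings during streaming is fiddly because a stop string can arrive split across two chunks. A minimal, library-independent sketch of the buffering logic (an assumption about the general technique, not the repo's actual implementation):

```python
def stream_with_stops(chunks, stop_strings):
    """Accumulate streamed text chunks and cut generation at the first stop
    string. Holds back a short tail so a stop string split across chunk
    boundaries is still caught. Yields only text that should reach the user."""
    max_stop = max(len(s) for s in stop_strings)
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        # Stop as soon as any stop string has fully arrived.
        positions = [buffer.find(s) for s in stop_strings if s in buffer]
        if positions:
            yield buffer[: min(positions)]
            return
        # Emit everything except a tail that could still be the start
        # of a stop string whose remainder is in the next chunk.
        if len(buffer) > max_stop - 1:
            cut = len(buffer) - (max_stop - 1)
            emit, buffer = buffer[:cut], buffer[cut:]
            if emit:
                yield emit
    if buffer:
        yield buffer
```

Without the held-back tail, a stop string like `</s>` arriving as `</` then `s>` would leak into the user-visible output.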
Trying out inference with unsloth vs transformers
4d01a46
Added 'all-in-one' function. Corrected local model load when not loaded initially. Environment variables for max data rows and topics.
d6ff533
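Capping input size via environment variables can be sketched as below; the variable names and defaults are assumptions for illustration, not necessarily what the repo uses:

```python
import os

def int_env(name: str, default: int) -> int:
    """Read an integer limit from the environment, falling back to a
    default when the variable is unset or not a valid integer."""
    raw = os.environ.get(name, "")
    try:
        return int(raw)
    except ValueError:
        return default

# Hypothetical variable names and defaults.
MAX_DATA_ROWS = int_env("MAX_DATA_ROWS", 10000)
MAX_TOPICS = int_env("MAX_TOPICS", 120)
```

Parsing defensively matters on hosted Spaces, where a misconfigured variable should degrade to the default rather than crash app startup.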
Added spaces import to llm_funcs
1b35214
Updated spaces GPU usage for load_model
a3bd98a
Added speculative decoding to transformers calls for gemma 3
7cadb40
It should now be possible to load the local model globally at the start to avoid repeated loading throughout the stages of topic extraction
fd02514
Added framework of support for Azure models (although untested)
138286c
Corrected misplaced example file reference. Added AWS Nova models to model list
5fa40a6
Minor changes to local inference
6f46742
Minor corrections to transformers inference
6797022
Further optimisations to transformers inference
0895a36
Minor fixes to llm_funcs
2214029
Enabled GPU-based local model inference with the transformers package
72d517c
Added possibility of adding examples quickly to the input files
8c54223
Enhanced app functionality by adding new UI elements for summary file management, adding a Bedrock model toggle, and refining logging messages. Updated Dockerfile and requirements for better compatibility and added an install guide to the readme. Removed deprecated code and unnecessary comments.
ba1a951
Corrected backslash
aa08197
Allowed possibility to run all analysis steps in one click
2e33e29
Updated requirements. More generous spaces GPU timeouts
9b97b7b
Removed unnecessary print statements
6da6ac6
Loading llama_cpp inside load_model instead of on app load
aba68bf
Enhanced app functionality by adding new logging variables, refining file input options, and updating prompts for better user experience. Updated Dockerfile for improved environment setup and adjusted requirements for compatibility. Removed unnecessary print statements and added error handling in data loading functions.
714810a
Removed KV quantisation for CPU as it cannot be made consistent without flash attention
4f3fcbc
Aligned KV quantisation for CPU inference
933ac57
Optimised prompts for llama.cpp prompt caching
43fe323
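llama.cpp can reuse its KV cache for the longest common prefix between consecutive prompts, so prompts optimised for caching put all fixed instructions first and the varying data last. A sketch of that layout (the prefix text is invented for illustration):

```python
# Fixed instructions go first: llama.cpp can reuse the cached KV state
# for any prefix shared between consecutive prompts, so the constant
# part should never be interleaved with per-batch data.
SYSTEM_PREFIX = (
    "You are a topic-extraction assistant. "
    "Return the topics found in the text as a markdown table.\n\n"
)

def build_prompt(batch_text: str) -> str:
    """Constant prefix first, varying batch last, maximising prefix reuse."""
    return SYSTEM_PREFIX + batch_text

prompt_a = build_prompt("Row 1: the invoice arrived late ...")
prompt_b = build_prompt("Row 2: the app crashed on login ...")
# Both prompts share the full fixed prefix, so the second call can skip
# re-processing those tokens.
```

Putting per-batch text before the instructions would instead invalidate the cache on every call.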
Dockerfile now refers to prebuilt llama-cpp-python wheel
41f8cab
Corrected full_prompt save
d25e491
Minor fixes for Gemini, model calls. Updated Dockerfile for non-GPU systems
8ec0f3d
Merge pull request #22 from seanpedrick-case/pyproject_file
a3a7eae
Sean Pedrick-Case committed on