---
title: DeepSeek-R1-Distill-Qwen-14B
emoji: 🤖
colorFrom: indigo
colorTo: purple
sdk: gradio
sdk_version: 5.47.0
#python_version: 3.12
app_file: app.py
pinned: false
---

# Finance Space

Gradio app with two tabs:

- Price Prediction
- Equity Research Report

Model: `tarun7r/Finance-Llama-8B` (a Llama-3.1-8B fine-tuned for finance). Runs on GPU when available.

Data sources:

- Finnhub (candles, profile, news) — set `FINNHUB_API_KEY`
- Alpha Vantage via RapidAPI — set `RAPIDAPI_KEY`, host `alpha-vantage.p.rapidapi.com`

## Setup

```bash
pip install -r requirements.txt

# set env vars (PowerShell)
$env:FINNHUB_API_KEY="..."
$env:RAPIDAPI_KEY="..."

python app.py
```

On Hugging Face Spaces (Python 3.12, GPU), just add the secrets and run `app.py`.

## Notes

- If the Finnhub key is absent, the app falls back to Alpha Vantage daily adjusted data via RapidAPI (see the sketch at the end of this README).
- Prompts live in `prompts.py`. Adjust temperature/top_p in `app.py` as needed.

## ZeroGPU (no local PyTorch) setup

- The default path uses the Hugging Face Inference API to run `tarun7r/Finance-Llama-8B` remotely.
- Set these secrets on your Space:
  - `HF_TOKEN` (recommended; required for higher rate limits)
  - `FINNHUB_API_KEY`
  - `RAPIDAPI_KEY`
- Environment toggle (sketched below):
  - `USE_HF_API=1` (default) → remote inference (works on ZeroGPU)
  - `USE_HF_API=0` → try local inference (requires GPU/torch)
- If you later switch to a GPU runtime, uncomment `torch==2.3.1` in `requirements.txt` and set `USE_HF_API=0` if you prefer local inference.
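A minimal sketch of the `USE_HF_API` toggle described above. It assumes `huggingface_hub` is installed (and `transformers` with torch/accelerate for the local path); the `generate` function name and the sampling values are illustrative, not the actual `app.py` code.

```python
# Illustrative sketch of the remote/local inference toggle; not the real app.py.
import os

from huggingface_hub import InferenceClient

MODEL_ID = "tarun7r/Finance-Llama-8B"
USE_HF_API = os.getenv("USE_HF_API", "1") == "1"


def generate(prompt: str, max_new_tokens: int = 512) -> str:
    if USE_HF_API:
        # Remote path: Hugging Face Inference API, works on ZeroGPU without local torch.
        client = InferenceClient(model=MODEL_ID, token=os.getenv("HF_TOKEN"))
        return client.text_generation(
            prompt,
            max_new_tokens=max_new_tokens,
            temperature=0.7,
            top_p=0.9,
        )

    # Local path: requires a GPU runtime with torch (and accelerate for device_map).
    from transformers import pipeline

    pipe = pipeline("text-generation", model=MODEL_ID, device_map="auto")
    out = pipe(
        prompt,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )
    return out[0]["generated_text"]
```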
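For completeness, a hedged sketch of the Finnhub-to-Alpha-Vantage fallback mentioned in the Notes. The endpoint paths, query parameters, and the `fetch_daily_prices` helper are assumptions for illustration; consult `app.py` for the real logic.

```python
# Illustrative sketch of the price-data fallback; endpoints/params may differ from app.py.
import os
import time

import requests


def fetch_daily_prices(symbol: str) -> dict:
    finnhub_key = os.getenv("FINNHUB_API_KEY")
    if finnhub_key:
        # Primary source: Finnhub daily candles for roughly the last year.
        now = int(time.time())
        resp = requests.get(
            "https://finnhub.io/api/v1/stock/candle",
            params={
                "symbol": symbol,
                "resolution": "D",
                "from": now - 365 * 24 * 3600,
                "to": now,
                "token": finnhub_key,
            },
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    # Fallback: Alpha Vantage daily adjusted series via RapidAPI.
    resp = requests.get(
        "https://alpha-vantage.p.rapidapi.com/query",
        params={
            "function": "TIME_SERIES_DAILY_ADJUSTED",
            "symbol": symbol,
            "outputsize": "compact",
        },
        headers={
            "X-RapidAPI-Key": os.getenv("RAPIDAPI_KEY", ""),
            "X-RapidAPI-Host": "alpha-vantage.p.rapidapi.com",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```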