---
title: Climate Vulnerability Analysis
emoji: 🌡️
colorFrom: green
colorTo: gray
sdk: docker
app_file: app.py
app_port: 8501
pinned: false
short_description: Uncover and summarize vulnerable groups findings
authors:
  - user: https://huggingface.co/mtyrrell
  - user: https://huggingface.co/leavoigt
  - user: https://huggingface.co/TeresaK
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

Technical documentation of the system in accordance with the EU AI Act.

System Name: Climate Vulnerability App

Provider / Supplier: GIZ Data Lab & Data Service Center

As of: July 2025

  1. General Description of the System

     The Climate Vulnerability App is an AI-powered tool that quickly retrieves and summarizes information on marginalised groups from (climate) policy documents, giving users a broad overview of the extent to which different marginalised groups are represented in policies. The tool uses fine-tuned transformer models to classify references to pre-determined marginalised groups and an LLM of choice to summarize the most important information identified. A minimal sketch of this two-stage pipeline is shown below.
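
The sketch assumes a Hugging Face `transformers` text-classification model for stage one and an OpenAI-style chat model for stage two; the model ids, the label name "Other", the score threshold, and the prompt are illustrative placeholders, not the components actually deployed in this Space.

```python
from transformers import pipeline
from openai import OpenAI

# Stage 1: flag paragraphs that reference marginalised groups.
# "GIZ/vulnerability-classifier" and the label "Other" are placeholders.
classifier = pipeline(
    "text-classification",
    model="GIZ/vulnerability-classifier",
    top_k=None,  # return scores for every class, not just the top one
)

paragraphs = [
    "The adaptation plan prioritises support for smallholder farmers.",
    "Women and persons with disabilities face heightened climate risks.",
]

scores_per_paragraph = classifier(paragraphs)
relevant = [
    text
    for text, scores in zip(paragraphs, scores_per_paragraph)
    if any(s["label"] != "Other" and s["score"] > 0.5 for s in scores)
]

# Stage 2: summarize the retained references with a generative LLM
# (any chat-completion API could be substituted here).
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; the app lets users choose the LLM
    messages=[
        {"role": "system", "content": "Summarize how marginalised groups are addressed in these excerpts."},
        {"role": "user", "content": "\n\n".join(relevant)},
    ],
)
print(response.choices[0].message.content)
```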

  2. Models Used

    Text Classification:

    Generative LLM used for summaries:

  3. Model Training Data:

    • The data used to fine-tune the text classification model can be found here: vulnerability_training_data_full
    • The training data has been collected by human annotators who are experts in their fields.
    • The data does not contain any known bias; however, some classes perform better than others (see the dataset card), and the risk of potential bias can never be fully excluded.
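
As a hedged illustration, the training data could be loaded and its class balance inspected as follows; the repository id and the `label` column name are assumptions based on the dataset name above, not a confirmed path.

```python
from collections import Counter

from datasets import load_dataset

# Load the fine-tuning dataset from the Hugging Face Hub.
# "GIZ/vulnerability_training_data_full" is an assumed repository id.
ds = load_dataset("GIZ/vulnerability_training_data_full", split="train")
print(ds)  # features and number of examples

# Class balance matters here, since some classes perform better than others.
# The column name "label" is also an assumption; check the dataset card.
print(Counter(ds["label"]).most_common())
```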
  4. System Limitations and Non-Purposes

    • The system is designed to provide a quick overview of the most relevant information on marginalised groups in climate policy.
    • The system is NOT designed to give an in-depth analysis of the document. Output may be incomplete or falsely classified and should ALWAYS be reviewed by a human.
    • The system does not make autonomous decisions; it only provides information.
    • No personal data of users is processed.
    • Results are intended for orientation only, not for legal or political advice.
  5. Transparency Towards Users

    • The user interface clearly indicates the use of a generative AI model.
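
For illustration, a notice of this kind can be surfaced in the Streamlit interface (the Space runs a Streamlit app on port 8501); the wording and placement in app.py are assumptions, not the actual implementation.

```python
import streamlit as st

# Hypothetical transparency notice shown at the top of the app.
st.info(
    "This analysis is produced with the help of a generative AI model. "
    "Results may be incomplete or incorrect and should always be reviewed by a human."
)
```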
  6. Monitoring, Feedback, and Incident Reporting

    • Technical development is carried out by the GIZ Data Service Center.
    • Please reach out through the contact details provided below if you have any issues or feedback.
  7. Contact: For any questions, please contact us via dataservicecenter@giz.de