What is Planaura?
Planaura is a collection of Canadian geospatial foundation models produced at the Canada Centre for Mapping and Earth Observation, Natural Resources Canada.
Planaura is trained on satellite imagery from two main sources: Harmonized Landsat and Sentinel (HLS) (https://hls.gsfc.nasa.gov/) and Sentinel-2 (S2) (https://www.esa.int/Applications/Observing_the_Earth/Copernicus/Sentinel-2). The training data were selected to provide national coverage of the vast Canadian landscape over the 10 years from 2015 to 2024. Because the training datasets were mainly acquired between June and September of each year, Planaura performs best over the spring/summer seasons for most of Canada.
Two versions of Planaura are currently publicly available through Hugging Face.
Planaura_HLS is best suited for use with HLS imagery at a resolution of 30 meters.
Planaura_S2 is best suited for use with S2 imagery at resolutions of 10-20 meters.
While Planaura_HLS generalizes well to either data source (S2 or HLS), Planaura_S2 was fine-tuned specifically with higher-resolution data. We have observed that this specialized model performs slightly better than Planaura_HLS on 10-meter imagery and extracts finer levels of change when used for change detection.
Inputs to the model:
- In bi-temporal mode (num_frames=2): Two satellite images of the same location taken at two different epochs.
- In static mode (num_frames=1): One satellite image.
- Source of imagery can be either Sentinel-2 imagery or Harmonized Landsat and Sentinel imagery.
- Expected spectral bands of each image, in the order listed below:
  - band 0: blue - B02
  - band 1: green - B03
  - band 2: red - B04
  - band 3: near infrared, central wavelength 865 nm - B8A
  - band 4: shortwave infrared, central wavelength 1610 nm - B11
  - band 5: shortwave infrared, central wavelength 2190 nm - B12
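As an illustration of the expected layout, the bands above can be assembled into a single static or bi-temporal input array. This is a minimal NumPy sketch; the helper names (`stack_bands`, `make_input`) and the (bands, frames, height, width) layout are assumptions for illustration only — see NRCan/planaura for the actual preprocessing and tensor layout.

```python
import numpy as np

# Band order expected by the model (hypothetical helper, for illustration).
BAND_ORDER = ["B02", "B03", "B04", "B8A", "B11", "B12"]


def stack_bands(bands: dict) -> np.ndarray:
    """Stack single-band (H, W) arrays into one (C, H, W) array in the expected order."""
    missing = [b for b in BAND_ORDER if b not in bands]
    if missing:
        raise ValueError(f"missing bands: {missing}")
    return np.stack([bands[b] for b in BAND_ORDER], axis=0)


def make_input(epoch1: dict, epoch2: dict = None) -> np.ndarray:
    """Build a (C, T, H, W) input: T=2 in bi-temporal mode, T=1 in static mode."""
    frames = [stack_bands(epoch1)]
    if epoch2 is not None:
        frames.append(stack_bands(epoch2))  # second epoch of the same location
    return np.stack(frames, axis=1)
```

For example, passing two dictionaries of 6 co-registered band arrays yields a `(6, 2, H, W)` array for bi-temporal mode, and passing one dictionary yields `(6, 1, H, W)` for static mode.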
Planaura contains a two-epoch (bi-temporal) encoder that facilitates:
- Embedding images into feature maps that represent the descriptive features of each image and can subsequently be used for downstream tasks such as clustering or classification.
- Calculating the intensity of change between two images, reflecting actual contextual changes rather than naive differences in spectral values.
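One way to turn the two feature maps into a change-intensity map is a per-location cosine distance, as in the RaVAEn approach cited under References. The sketch below is an illustration of that idea, not the repository's implementation; the function name and the (C, H, W) embedding layout are assumptions.

```python
import numpy as np


def change_intensity(emb1: np.ndarray, emb2: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Per-location cosine distance between two (C, H, W) feature maps.

    Values near 0 indicate similar context at that location; larger values
    indicate stronger contextual change between the two epochs.
    """
    dot = (emb1 * emb2).sum(axis=0)            # channel-wise dot product, (H, W)
    n1 = np.linalg.norm(emb1, axis=0)          # per-location feature norms
    n2 = np.linalg.norm(emb2, axis=0)
    cos_sim = dot / (n1 * n2 + eps)            # cosine similarity in [-1, 1]
    return 1.0 - cos_sim                       # cosine distance in [0, 2]
```

Because the distance is computed on learned embeddings rather than raw pixels, locations whose spectral values shifted (e.g. seasonal illumination) but whose content stayed the same score low, while genuine contextual change scores high.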
Usage
We provide source code, inference scripts, examples, and detailed instructions at NRCan/planaura.
Citation
If you use Planaura in your research, please cite this repository:
@misc{Planaura_Code_2025,
author = {Shahbazi, Mozhdeh and Sokolov, Mikhail},
title = {Planaura - Canadian Geospatial Foundation Models},
year = {2025},
publisher = {GitHub},
url = {https://github.com/NRCan/planaura}
}
@misc{Planaura_Model_2025,
author = {Natural Resources Canada},
title = {Planaura - Canadian Geospatial Foundation Models},
year = {2025},
publisher = {Hugging Face},
url = {https://huggingface.co/NRCan/Planaura-1.0}
}
References
See CONTRIBUTORS.md for a full list of authors and contributors.
The Prithvi-100M model was used as the starting point for creating and training Planaura, adapting it to the Canadian landscape and making it suitable for bi-temporal, multiscale change detection.
@article{Prithvi-100M-preprint,
author = {Jakubik, Johannes and Roy, Sujit and Phillips, C. E. and Fraccaro, Paolo and Godwin, Denys and Zadrozny, Bianca and Szwarcman, Daniela and Gomes, Carlos and Nyirjesy, Gabby and Edwards, Blair and Kimura, Daiki and Simumba, Naomi and Chu, Linsong and Mukkavilli, S. Karthik and Lambhate, Devyani and Das, Kamal and Bangalore, Ranjini and Oliveira, Dario and Muszynski, Michal and Ankur, Kumar and Ramasubramanian, Muthukumaran and Gurung, Iksha and Khallaghi, Sam and Li, Hanxi (Steve) and Cecil, Michael and Ahmadi, Maryam and Kordi, Fatemeh and Alemohammad, Hamed and Maskey, Manil and Ganti, Raghu and Weldemariam, Kommy and Ramachandran, Rahul},
month = oct,
title = {{Foundation Models for Generalist Geospatial Artificial Intelligence}},
journal = {Preprint Available on arxiv:2310.18660},
year = {2023}
}
@misc{Prithvi-100M,
author = {Jakubik, Johannes and Chu, Linsong and Fraccaro, Paolo and Gomes, Carlos and Nyirjesy, Gabby and Bangalore, Ranjini and Lambhate, Devyani and Das, Kamal and Oliveira Borges, Dario and Kimura, Daiki and Simumba, Naomi and Szwarcman, Daniela and Muszynski, Michal and Weldemariam, Kommy and Zadrozny, Bianca and Ganti, Raghu and Costa, Carlos and Edwards, Blair and Watson, Campbell and Mukkavilli, Karthik and Schmude, Johannes and Hamann, Hendrik and Robert, Parkin and Roy, Sujit and Phillips, Christopher and Ankur, Kumar and Ramasubramanian, Muthukumaran and Gurung, Iksha and Leong, Wei Ji and Avery, Ryan and Ramachandran, Rahul and Maskey, Manil and Olofossen, Pontus and Fancher, Elizabeth and Lee, Tsengdar and Murphy, Kevin and Duffy, Dan and Little, Mike and Alemohammad, Hamed and Cecil, Michael and Li, Steve and Khallaghi, Sam and Godwin, Denys and Ahmadi, Maryam and Kordi, Fatemeh and Saux, Bertrand and Pastick, Neal and Doucette, Peter and Fleckenstein, Rylie and Luanga, Dalton and Corvin, Alex and Granger, Erwan},
doi = {10.57967/hf/0952},
month = aug,
title = {{Prithvi-100M}},
repository-code = {https://github.com/NASA-IMPACT/hls-foundation-os},
year = {2023}
}
@misc{ibm_nasa_geospatial_2023,
author = { {IBM NASA Geospatial} },
title = { Prithvi-100M (Revision 489bb56) },
year = 2023,
url = { https://huggingface.co/ibm-nasa-geospatial/Prithvi-100M },
doi = { 10.57967/hf/0952 },
publisher = { Hugging Face }
}
The idea of using cosine distance between auto-encoder embeddings for change recognition comes from the following publication:
@article{ruuvzivcka2022ravaen,
title={RaV{\AE}n: unsupervised change detection of extreme events using ML on-board satellites},
author={R{\uu}{\v{z}}i{\v{c}}ka, V{\'\i}t and Vaughan, Anna and De Martini, Daniele and Fulton, James and Salvatelli, Valentina and Bridges, Chris and Mateo-Garcia, Gonzalo and Zantedeschi, Valentina},
journal={Scientific reports},
volume={12},
number={1},
pages={16939},
year={2022},
publisher={Nature Publishing Group UK London}
}
Image sources used for training Planaura:
- HLSL30 (Landsat 8 OLI) v2.0: HLS Operational Land Imager Vegetation Indices Daily Global 30m v2.0, distributed by the NASA EOSDIS Land Processes Distributed Active Archive Center.
- HLSS30 (Sentinel-2 MSI) v2.0: HLS Sentinel-2 Multi-spectral Instrument Vegetation Indices Daily Global 30m v2.0, distributed by the NASA EOSDIS Land Processes Distributed Active Archive Center.
- Copernicus Sentinel-2 data, retrieved from the Sentinel Hub.