---
viewer: false
license: other
annotations_creators:
- expert-generated
language:
- en
task_categories:
- image-to-3d
tags:
- 3D human reconstruction
- human
- SMPLX
- SMPL
- 2K2K
- dataset
- cvpr2023
- 3D human digitization
extra_gated_prompt: >-
  The dataset is encrypted to prevent unauthorized access. Please fill out the
  request form: https://forms.gle/oiTkbADGALga9XP18
extra_gated_fields:
  Name: text
  Academic Institution: text
  E-Mail: text
  Main purpose of data access:
    type: select
    options:
    - Research
    - Education
    - label: Other
      value: other
  I have signed the GOOGLE form before this request: checkbox
pretty_name: CVPR2023 - High-fidelity 3D Human Digitization from Single 2K Resolution Images (2K2K)
---

# NOTE

## Please fill out the Google Form (with a valid "Agreement" PDF file) before requesting data access on Hugging Face.
## If you skip it, we will reject the data access request. -> [Google Form link](https://forms.gle/oiTkbADGALga9XP18)

* Many requests contain false information. We strictly reject all invalid requests.
* If your request was rejected, please double-check your Google Form submission and Hugging Face information, then send us a reminder email.
* We kindly ask all applicants to **submit their information in English**. Submissions in other languages are harder to verify and may delay approval.

## Updated (20250414, jseob.y@polygom.xyz)

We apologize for the typos in the file names and the unclear explanation. The file names have been corrected, and a simple sample loader based on Open3D is now attached. If you run into any other issue, please contact the e-mail address above.
---

# Polygom2K2K Dataset

## High-fidelity 3D Human Digitization from Single 2K Resolution Images

[Sang-Hun Han](https://sanghunhan92.github.io/conference/2K2K/), [Min-Gyu Park](https://scholar.google.co.uk/citations?user=VUj1ZWoAAAAJ&hl=en), [Ju Hong Yoon](https://scholar.google.com/citations?user=Y4mReV4AAAAJ&hl=en), Ju-Mi Kang, Young-Jae Park and [Hae-Gon Jeon](https://sites.google.com/site/hgjeoncv/). CVPR 2023

[[Project Page]](https://sanghunhan92.github.io/conference/2K2K/)

* This repository provides a new version of the [2K2K](https://github.com/SangHunHan92/2K2K/tree/main) dataset, updated by *Polygom*.
* Specifically, it provides the same 1M-vertex meshes as the original 2K2K dataset, together with newly updated *SMPLX* parameters and SMPLX meshes registered via non-rigid fitting.
* Both the rigid and non-rigid SMPLX fittings are optimized only with noisy 3D keypoints and the chamfer distance to the nearest point, so noticeable misalignment can be observed in some regions (e.g. hair, hands, and feet). If you need accurately aligned SMPLX, please post-process it yourself.

![](https://huggingface.co/spaces/polygom-team/AssetSpace/resolve/main/static/2k2k_assets/result.gif)

### Prerequisites

Please download "SMPLX_NEUTRAL.pkl" and "J_regressor_body25_smplx.txt" from the [SMPLX official repo](https://smpl-x.is.tue.mpg.de/), then place them in the same directory:

```
$MODEL_ROOT
|-SMPLX_NEUTRAL.pkl
|-J_regressor_body25_smplx.txt

$DATA_ROOT
|- train
|  |-1M
|- test
```

### Data explanation

Polygom2K2K includes 2,050 (2,000 train / 50 test) high-fidelity 3D body scans captured with 80 multi-view DSLR cameras by [POLYGOM](http://polygom.studio/). Each scan is simplified to a maximum of 1M vertices; scans with fewer than 1M vertices are provided in their raw form. The data is provided in PLY format, including vertex colors. Additionally, we provide the results of SMPLX rigid fitting and SMPL (subdivided into 27,554 vertices) non-rigid registration.
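Since every scan is a colored PLY, it can be useful to sanity-check a file before visualizing it with Open3D. Below is a minimal stdlib-only sketch that reads a PLY header and reports the vertex count and vertex properties; the function name and file path are our own illustration, not part of the dataset tooling.

```python
def read_ply_header(path):
    """Parse a PLY header and return (format, vertex_count, vertex_property_names).

    Works for both ASCII and binary PLY files, since the header is always text.
    """
    fmt, n_verts, props = None, 0, []
    in_vertex_element = False
    with open(path, "rb") as f:
        if f.readline().strip() != b"ply":
            raise ValueError("not a PLY file")
        for raw in f:
            line = raw.decode("ascii").strip()
            if line.startswith("format"):
                fmt = line.split()[1]           # e.g. "ascii" or "binary_little_endian"
            elif line.startswith("element"):
                _, name, count = line.split()
                in_vertex_element = (name == "vertex")
                if in_vertex_element:
                    n_verts = int(count)
            elif line.startswith("property") and in_vertex_element:
                props.append(line.split()[-1])  # property name, e.g. "x" or "red"
            elif line == "end_header":
                break
    return fmt, n_verts, props
```

A scan with vertex colors should list `red`, `green`, and `blue` among its vertex properties; the file itself can then be loaded with `open3d.io.read_triangle_mesh(path)` for visualization.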
1) Raw meshes are normalized into the [0, 1] x [0, 1] x [0, 1] unit cube and centered at (x=0.5, y=0.0, z=0.5). (The ground plane is the xz plane and the up direction is +y.)
2) The SMPLX parameters from rigid fitting consist of "poses", "shapes", "expression", "Rh" (global orientation), and "Th" (global translation). Reconstruct the mesh with the SMPLX layer, then multiply by "scale".
3) The final SMPLX meshes from non-rigid fitting are optimized with [pytorch-nicp](https://github.com/wuhaozhe/pytorch-nicp).

### Data access

1) Please check [this link](https://forms.gle/4cXsuZJUpjth8PHQ6) and fill out our Google Form.
2) After submitting the form, request the data access above.
3) We will double-check the form and the request, and approve it within 2~3 working days.

### Preview data

We have uploaded sample loading code named "load_sample.py".

```
# pip install open3d required
python load_sample.py --data_root DATA_ROOT --model_root MODEL_ROOT
# e.g. python load_sample.py --data_root /media/ssd/2k2k --model_root /media/smplx
```

It visualizes the samples one by one using the Open3D visualizer.

### Agreement

1. The 2K2K dataset (the "Dataset") is available for **non-commercial** research purposes only. Any other use, in particular any use for commercial purposes, is prohibited. This includes, without limitation, incorporation in a commercial product, use in a commercial service, as training data for a commercial product, for commercial ergonomic analysis (e.g. product design, architectural design, etc.), or production of other artifacts for commercial purposes including, for example, web services, movies, television programs, mobile applications, or video games. The Dataset may not be used for pornographic purposes or to generate pornographic material whether commercial or not. The Dataset may not be reproduced, modified and/or made available in any form to any third party without [POLYGOM](http://polygom.studio/)’s prior written permission.
2.
You agree **not to** reproduce, modify, duplicate, copy, sell, trade, resell, or exploit any portion of the images or any portion of derived data in any form to any third party without [POLYGOM](http://polygom.studio/)’s prior written permission.
3. You agree **not to** further copy, publish, or distribute any portion of the Dataset, except that copies may be made for internal use at a single site within the same organization.
4. [POLYGOM](http://polygom.studio/) reserves the right to terminate your access to the Dataset at any time.

### Citation

If you use this dataset for your research, please consider citing:

```
@InProceedings{han2023Recon2K,
  title={High-fidelity 3D Human Digitization from Single 2K Resolution Images},
  author={Han, Sang-Hun and Park, Min-Gyu and Yoon, Ju Hong and Kang, Ju-Mi and Park, Young-Jae and Jeon, Hae-Gon},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR2023)},
  month={June},
  year={2023},
}

@Misc{easymocap,
  title = {EasyMoCap - Make human motion capture easier.},
  howpublished = {Github},
  year = {2021},
  url = {https://github.com/zju3dv/EasyMocap}
}
```

- **Curated by:** jseobyun (jseob.y@polygom.xyz [primary], jseob.y@kaist.ac.kr)
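For reference, the "reconstruct with the SMPLX layer, then multiply by scale" step from the data explanation can be sketched as applying the global transform ("Rh" as an axis-angle rotation, "Th" as a translation, then "scale") to the canonical vertices. The operation order follows the EasyMoCap-style parameterization and should be treated as an assumption to verify against load_sample.py; the SMPLX forward pass is shown only in comments because it requires the model files.

```python
import numpy as np

def axis_angle_to_matrix(rvec):
    """Rodrigues' formula: axis-angle vector (3,) -> rotation matrix (3, 3)."""
    rvec = np.asarray(rvec, dtype=np.float64)
    theta = np.linalg.norm(rvec)
    if theta < 1e-8:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def apply_global_transform(verts, Rh, Th, scale=1.0):
    """Rotate canonical vertices by Rh, translate by Th, then multiply by scale.

    verts: (N, 3) vertices from the SMPLX layer's forward pass.
    NOTE: the exact operation order is an assumption; verify it against
    the official load_sample.py before relying on it.
    """
    R = axis_angle_to_matrix(Rh)
    return scale * (np.asarray(verts, dtype=np.float64) @ R.T + np.asarray(Th))

# Hypothetical usage with the smplx package (needs SMPLX_NEUTRAL.pkl in MODEL_ROOT):
#   model = smplx.create(MODEL_ROOT, model_type="smplx", gender="neutral")
#   out = model(betas=shapes, body_pose=body_pose, expression=expression)
#   verts = apply_global_transform(out.vertices[0].detach().numpy(), Rh, Th, scale)
```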