
DTTD-Mobile: Digital Twin Tracking Dataset Calibrated with an Optical MoCap System and a Mobile Device

πŸ€” Are current 3D object tracking methods truly robust enough for low-fidelity depth sensors such as the iPhone LiDAR? We provide a new dataset captured on a mobile device: 18 objects observed in 100 videos, with 47,668 sampled frames and 114,143 object annotations.

🌹 If our work is useful or relevant to your research, please kindly recognize our contributions by citing our paper:

@InProceedings{Huang_2025_CVPR,
    author    = {Huang, Zixun and Yao, Keling and Zhao, Zhihao and Pan, Chuanyu and Yang, Allen},
    title     = {Robust 6DoF Pose Estimation Against Depth Noise and a Comprehensive Evaluation on a Mobile Dataset},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops},
    month     = {June},
    year      = {2025},
    pages     = {1848-1857}
}


This repository contains an example dataset.py and a simulated file structure under root. It is the dataset for the 2025 CVPRW paper Robust 6DoF Pose Estimation Against Depth Noise and a Comprehensive Evaluation on a Mobile Dataset.

🚩 Project page: https://openark-berkeley.github.io/DTTDNet/

🦾 Extension to Robotic Tasks: https://huggingface.co/datasets/ZixunH/DTTD3_Impedance

πŸ† Leaderboard Embedded: https://paperswithcode.com/dataset/dttd2

🧸 You can also access our Transformer-based 6-DoF pose estimation network from: https://github.com/augcog/DTTD2

🍽️ And the checkpoints (trained on DTTD-Mobile): https://drive.google.com/drive/folders/1U7YJKSrlWOY5h2MJRc_cwJPkQ8600jbd?usp=share_link

πŸ“ Folder Structure (including cad models, rgb images, semantic images, depth images, camera intrinsics, object poses, etc):

Robust-Digital-Twin-Tracking
β”œβ”€β”€ checkpoints
β”‚   β”œβ”€β”€ m8p4.pth
β”‚   β”œβ”€β”€ m8p4_filter.pth
β”‚   └── ...
└── dataset
    └── dttd_iphone
        β”œβ”€β”€ dataset_config
        β”œβ”€β”€ dataset.py
        └── DTTD_IPhone_Dataset
            └── root
                β”œβ”€β”€ cameras
                β”‚   β”œβ”€β”€ az_camera1 (if you want to train our algorithm with DTTD v1)
                β”‚   β”œβ”€β”€ iphone14pro_camera1
                β”‚   └── ZED2 (to be released...)
                β”œβ”€β”€ data
                β”‚   β”œβ”€β”€ scene1
                β”‚   β”‚   β”œβ”€β”€ data
                β”‚   β”‚   β”‚   β”œβ”€β”€ 00001_color.jpg
                β”‚   β”‚   β”‚   β”œβ”€β”€ 00001_depth.png
                β”‚   β”‚   β”‚   └── ...
                β”‚   β”‚   └── scene_meta.yaml
                β”‚   β”œβ”€β”€ scene2
                β”‚   β”‚   β”œβ”€β”€ data
                β”‚   β”‚   └── scene_meta.yaml
                β”‚   ...
                └── objects
                    β”œβ”€β”€ apple
                    β”‚   β”œβ”€β”€ apple.mtl
                    β”‚   β”œβ”€β”€ apple.obj
                    β”‚   β”œβ”€β”€ front.xyz
                    β”‚   β”œβ”€β”€ points.xyz
                    β”‚   β”œβ”€β”€ ...
                    β”‚   └── textured.obj.mtl
                    β”œβ”€β”€ black_expo_marker
                    └── ...
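
For quick inspection outside of dataset.py, the sketch below shows one way to read a single frame and a sampled object point cloud from this layout. It is a minimal sketch, not the official loader: the millimeter depth scale, the scene_meta.yaml contents, and the helper names are assumptions; see dataset.py in this repository for the canonical loading code.

```python
# Minimal sketch of reading the layout above.
# Assumptions (not confirmed by this card): depth PNGs store millimeters,
# scene_meta.yaml parses with PyYAML, and points.xyz holds "x y z" rows.
from pathlib import Path

import numpy as np
import yaml
from PIL import Image

ROOT = Path("dataset/dttd_iphone/DTTD_IPhone_Dataset/root")


def load_frame(scene: str, frame_id: str):
    """Return (rgb, depth_in_meters, scene_meta) for one frame of one scene."""
    scene_dir = ROOT / "data" / scene
    rgb = np.array(Image.open(scene_dir / "data" / f"{frame_id}_color.jpg"))
    depth_raw = np.array(Image.open(scene_dir / "data" / f"{frame_id}_depth.png"))
    depth_m = depth_raw.astype(np.float32) / 1000.0  # assumed millimeter scale
    with open(scene_dir / "scene_meta.yaml") as f:
        scene_meta = yaml.safe_load(f)  # per-scene camera/object metadata
    return rgb, depth_m, scene_meta


def load_object_points(name: str) -> np.ndarray:
    """Load the sampled point cloud of one object model (e.g. "apple")."""
    return np.loadtxt(ROOT / "objects" / name / "points.xyz")[:, :3]


if __name__ == "__main__":
    rgb, depth, meta = load_frame("scene1", "00001")
    points = load_object_points("apple")
    print(rgb.shape, depth.shape, points.shape)
```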