Hunyuan3D 2.5: Towards High-Fidelity 3D Assets Generation with Ultimate Details
Abstract
Hunyuan3D 2.5, a suite of 3D diffusion models, advances shape and texture generation with a new LATTICE shape foundation model and physically based rendering (PBR) in a multi-view texture architecture.
In this report, we present Hunyuan3D 2.5, a robust suite of 3D diffusion models aimed at generating high-fidelity and detailed textured 3D assets. Hunyuan3D 2.5 follows the two-stage pipeline of its predecessor, Hunyuan3D 2.0, while demonstrating substantial advancements in both shape and texture generation. For shape generation, we introduce a new shape foundation model -- LATTICE -- trained with scaled high-quality data, model size, and compute. Our largest model reaches 10B parameters and generates sharp, detailed 3D shapes with precise image-to-3D alignment while keeping mesh surfaces clean and smooth, significantly closing the gap between generated and handcrafted 3D shapes. For texture generation, the pipeline is upgraded with physically based rendering (PBR) via a novel multi-view architecture extended from the Hunyuan3D 2.0 Paint model. Our extensive evaluation shows that Hunyuan3D 2.5 significantly outperforms previous methods in both shape and end-to-end texture generation.
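The abstract describes a two-stage pipeline: a shape diffusion model (LATTICE) first produces geometry from the input image, and a multi-view PBR paint model then textures the resulting mesh. The sketch below is a minimal illustration of how such a pipeline could be wired together; all class and method names are hypothetical placeholders, not the released Hunyuan3D 2.5 API.

```python
# Hypothetical sketch of the two-stage image-to-3D pipeline described above.
# ShapePipeline, PaintPipeline, and their methods are illustrative placeholders,
# not the actual Hunyuan3D 2.5 interface.
from PIL import Image


class ShapePipeline:
    """Stage 1: image-conditioned shape diffusion (LATTICE-style foundation model)."""

    def generate(self, image: Image.Image):
        # Denoise a latent shape representation conditioned on the input image,
        # then decode it into a clean, watertight triangle mesh.
        raise NotImplementedError


class PaintPipeline:
    """Stage 2: multi-view PBR texture generation on the generated mesh."""

    def generate(self, mesh, image: Image.Image):
        # Render the mesh from several viewpoints, synthesize view-consistent
        # albedo / metallic-roughness / normal maps, and bake them back onto
        # the mesh's UV atlas.
        raise NotImplementedError


def image_to_textured_asset(image_path: str):
    """Compose the two stages: geometry first, PBR materials second."""
    image = Image.open(image_path)
    mesh = ShapePipeline().generate(image)
    textured_mesh = PaintPipeline().generate(mesh, image)
    return textured_mesh
```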
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- Hunyuan3D 2.1: From Images to High-Fidelity 3D Assets with Production-Ready PBR Material (2025)
- Advancing high-fidelity 3D and Texture Generation with 2.5D latents (2025)
- Bridging Geometry-Coherent Text-to-3D Generation with Multi-View Diffusion Priors and Gaussian Splatting (2025)
- UniTEX: Universal High Fidelity Generative Texturing for 3D Shapes (2025)
- MeshGen: Generating PBR Textured Mesh with Render-Enhanced Auto-Encoder and Generative Data Augmentation (2025)
- CMD: Controllable Multiview Diffusion for 3D Editing and Progressive Generation (2025)
- Step1X-3D: Towards High-Fidelity and Controllable Generation of Textured 3D Assets (2025)