Hanyang Wang | 汪晗阳

I'm currently a junior undergraduate student in the Department of Computer Science and Technology at Tsinghua University. I'm also a research intern working closely with Prof. Yueqi Duan and Fangfu Liu.

My research interests lie in 3D Computer Vision and AIGC.

Email  /  CV  /  Scholar  /  GitHub

profile photo

Research

* indicates equal contribution

ReconX: Reconstruct Any Scene from Sparse Views with Video Diffusion Model
Fangfu Liu*, Wenqiang Sun*, Hanyang Wang*, Yikai Wang, Haowen Sun, Junliang Ye,
Jun Zhang, Yueqi Duan
arXiv, 2024
[arXiv] [Code] [Project Page]

In this paper, we propose ReconX, a novel 3D scene reconstruction paradigm that reframes the ambiguous reconstruction challenge as a temporal generation task. The key insight is to unleash the strong generative prior of large pre-trained video diffusion models for sparse-view reconstruction.

Physics3D: Learning Physical Properties of 3D Gaussians via Video Diffusion
Fangfu Liu*, Hanyang Wang*, Shunyu Yao, Shengjun Zhang, Jie Zhou, Yueqi Duan
arXiv, 2024
[arXiv] [Code] [Project Page]

In this paper, we propose Physics3D, a novel method for learning various physical properties of 3D objects through a video diffusion model. Our approach designs a highly generalizable physical simulation system based on a viscoelastic material model, which enables us to simulate a wide range of materials with high fidelity.

Unique3D: High-Quality and Efficient 3D Mesh Generation from a Single Image
Kailu Wu, Fangfu Liu, Zhihan Cai, Runjie Yan, Hanyang Wang, Yating Hu,
Yueqi Duan, Kaisheng Ma
Conference on Neural Information Processing Systems (NeurIPS), 2024
[arXiv] [Code] [Project Page]

In this work, we introduce Unique3D, a novel image-to-3D framework for efficiently generating high-quality 3D meshes from single-view images, featuring state-of-the-art generation fidelity and strong generalizability. Unique3D can generate a high-fidelity textured mesh from a single orthographic RGB image of any object in under 30 seconds.

Make-Your-3D: Fast and Consistent Subject-Driven 3D Content Generation
Fangfu Liu, Hanyang Wang, Weiliang Chen, Haowen Sun, Yueqi Duan
European Conference on Computer Vision (ECCV), 2024
[arXiv] [Code] [Project Page]

We introduce Make-Your-3D, a novel 3D customization method that can personalize high-fidelity and consistent 3D content from only a single image of a subject and a text description, within 5 minutes.


Website Template