
DreamFusion: Text-to-3D using 2D Diffusion

Ben Poole
Google Research
Ajay Jain
UC Berkeley
Jonathan T. Barron
Google Research
Ben Mildenhall
Google Research

Abstract

Recent breakthroughs in text-to-image synthesis have been driven by diffusion models trained on billions of image-text pairs. Adapting this approach to 3D synthesis would require large-scale datasets of labeled 3D assets and efficient architectures for denoising 3D data, neither of which currently exist. In this work, we circumvent these limitations by using a pretrained 2D text-to-image diffusion model to perform text-to-3D synthesis. We introduce a loss based on probability density distillation that enables the use of a 2D diffusion model as a prior for optimization of a parametric image generator. Using this loss in a DeepDream-like procedure, we optimize a randomly-initialized 3D model (a Neural Radiance Field, or NeRF) via gradient descent such that its 2D renderings from random angles achieve a low loss. The resulting 3D model of the given text can be viewed from any angle, relit by arbitrary illumination, or composited into any 3D environment. Our approach requires no 3D training data and no modifications to the image diffusion model, demonstrating the effectiveness of pretrained image diffusion models as priors.

Given a caption, DreamFusion generates relightable 3D objects with high-fidelity appearance, depth, and normals. Objects are represented as a Neural Radiance Field and leverage a pretrained text-to-image diffusion prior such as Imagen.



Example generated objects

DreamFusion generates objects and scenes from diverse captions. Search through hundreds of generated assets in our full gallery.


Composing objects into a scene


Mesh exports

Our generated NeRF models can be exported to meshes using the marching cubes algorithm for easy integration into 3D renderers or modeling software.
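Below is a minimal sketch of this kind of export, assuming a callable `query_density` that evaluates the trained NeRF's density on a batch of 3D points; the grid resolution, scene bounds, and density threshold are illustrative assumptions rather than the exact settings used by DreamFusion.

```python
import numpy as np
from skimage import measure
import trimesh

def export_mesh(query_density, resolution=256, bound=1.0, threshold=25.0):
    # Sample the NeRF density on a dense 3D grid covering [-bound, bound]^3.
    xs = np.linspace(-bound, bound, resolution)
    grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)      # (R, R, R, 3)
    density = query_density(grid.reshape(-1, 3)).reshape(
        resolution, resolution, resolution)

    # Extract the iso-surface at the chosen density threshold with marching cubes.
    verts, faces, normals, _ = measure.marching_cubes(density, level=threshold)

    # Rescale vertices from grid-index coordinates back to world coordinates.
    verts = verts / (resolution - 1) * 2 * bound - bound
    return trimesh.Trimesh(vertices=verts, faces=faces, vertex_normals=normals)
```

The resulting mesh can then be written to a standard format (e.g. `mesh.export("object.obj")`) and imported into most 3D modeling tools.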


How does DreamFusion work?

Given a caption, DreamFusion uses a text-to-image generative model called Imagen to optimize a 3D scene. We propose Score Distillation Sampling (SDS), a way to generate samples from a diffusion model by optimizing a loss function. SDS allows us to optimize samples in an arbitrary parameter space, such as a 3D space, as long as we can map back to images differentiably. We use a 3D scene parameterization similar to Neural Radiance Fields, or NeRFs, to define this differentiable mapping. SDS alone produces reasonable scene appearance, but DreamFusion adds additional regularizers and optimization strategies to improve geometry. The resulting trained NeRFs are coherent, with high-quality normals, surface geometry and depth, and are relightable with a Lambertian shading model.
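To make the SDS update concrete, here is a minimal sketch of a single optimization step, assuming a PyTorch-style frozen diffusion model `unet(noisy_image, t, text_embedding)` that predicts the injected noise and a differentiable renderer `render(camera)` that produces an image of the current NeRF. These names and the timestep range are illustrative assumptions; the paper uses Imagen, which is not publicly available.

```python
import torch

def sds_step(optimizer, render, unet, text_embedding, camera, alphas_cumprod):
    # Render the current NeRF from a (randomly sampled) camera; the NeRF's
    # parameters are held by `optimizer` and updated through this rendering.
    image = render(camera)                                # (1, 3, H, W)

    # Sample a diffusion timestep and add the corresponding amount of noise.
    t = torch.randint(20, 980, (1,), device=image.device)
    alpha_bar = alphas_cumprod[t].view(1, 1, 1, 1)
    noise = torch.randn_like(image)
    noisy = alpha_bar.sqrt() * image + (1.0 - alpha_bar).sqrt() * noise

    # Predict the noise with the frozen diffusion model; no gradients flow
    # through the diffusion model itself.
    with torch.no_grad():
        pred_noise = unet(noisy, t, text_embedding)

    # SDS gradient: a weighting w(t) times (predicted noise - injected noise),
    # backpropagated only through the differentiable renderer.
    w = 1.0 - alpha_bar
    grad = w * (pred_noise - noise)

    # Surrogate loss whose gradient w.r.t. the rendered image equals `grad`.
    loss = (grad.detach() * image).sum()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Repeating this step over many random viewpoints drives the NeRF toward renderings that the diffusion model judges likely for the given caption.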


Citation

@article{poole2022dreamfusion,
  author = {Poole, Ben and Jain, Ajay and Barron, Jonathan T. and Mildenhall, Ben},
  title = {DreamFusion: Text-to-3D using 2D Diffusion},
  journal = {arXiv},
  year = {2022},
}