
Diffusion-GAN: Training GANs with Diffusion (Explained)


Diffusion-GAN: training a GAN together with a forward diffusion process

paper: https://arxiv.org/abs/2206.02262

code: GitHub - Zhendong-Wang/Diffusion-GAN (official PyTorch implementation for the paper Diffusion-GAN: Training GANs with Diffusion)

Read left to right, the first row is the forward diffusion process: the real image is progressively diffused. Read right to left, the third row looks like noise being gradually turned back into a fake image (it is in fact the forward diffusion of a generated image, as the caption below states). The second row is the discriminator D, which discriminates at every diffusion timestep.

 Figure 1: Flowchart for Diffusion-GAN. The top-row images represent the forward diffusion process of a real image, while the bottom-row images represent the forward diffusion process of a generated fake image. The discriminator learns to distinguish a diffused real image from a diffused fake image at all diffusion steps.

The overall framework is shown in Figure 1. In Diffusion-GAN, the input to the diffusion process is either a real or a generated image, and the diffusion process consists of a series of steps that gradually add noise to the image. The number of diffusion steps is not fixed, but depends on the data and the generator. We also design the diffusion process to be differentiable, which means that we can compute the derivative of the output with respect to the input. This allows us to propagate the gradient from the discriminator to the generator through the diffusion process, and update the generator accordingly. Unlike vanilla GANs, which compare the real and generated images directly, Diffusion-GAN compares their noisy versions, which are obtained by sampling from the Gaussian mixture distribution over the diffusion steps, with the help of our timestep-dependent discriminator. This distribution has the property that its components have different noise-to-data ratios, which means that some components add more noise than others. By sampling from this distribution, we achieve two benefits: first, we stabilize training by easing the vanishing-gradient problem, which occurs when the data and generator distributions are too different; second, we augment the data by creating different noisy versions of the same image, which improves data efficiency and the diversity of the generator.

We provide a theoretical analysis to support our method, and show that the min-max objective function of Diffusion-GAN, which measures the difference between the data and generator distributions, is continuous and differentiable everywhere. This means that in theory the generator can always receive a useful gradient from the discriminator and improve its performance. [In other words, G always gets a useful gradient from D, which improves G's performance.]
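To make this concrete, here is a minimal PyTorch sketch of the core mechanism, not the authors' implementation: both real and generated images pass through the same reparameterized forward diffusion before a timestep-conditioned discriminator scores them. The helper names (`make_alphas_bar`, `diffuse`, `d_g_losses`), the linear beta schedule, the fixed `sigma`, and the uniform sampling of `t` are illustrative assumptions; the paper samples `t` from an adaptive, priority-weighted mixture over diffusion steps.

```python
import torch
import torch.nn.functional as F

def make_alphas_bar(T, beta_start=1e-4, beta_end=2e-2):
    # Cumulative products bar{alpha}_t for a linear beta schedule (schedule values are assumptions).
    betas = torch.linspace(beta_start, beta_end, T)
    return torch.cumprod(1.0 - betas, dim=0)

def diffuse(x, t, alphas_bar, sigma=0.5):
    # Differentiable forward diffusion: y_t ~ N(sqrt(a_bar_t) * x, (1 - a_bar_t) * sigma^2 * I),
    # written with the reparameterization trick so gradients flow back to x (and hence to G).
    a_bar = alphas_bar.to(x.device)[t].view(-1, *([1] * (x.dim() - 1)))
    return a_bar.sqrt() * x + (1.0 - a_bar).sqrt() * sigma * torch.randn_like(x)

def d_g_losses(D, G, x_real, z, alphas_bar, T):
    # Non-saturating GAN losses where D sees diffused images together with the sampled timestep.
    t = torch.randint(0, T, (x_real.size(0),), device=x_real.device)  # uniform t, for simplicity
    y_real = diffuse(x_real, t, alphas_bar)   # diffused real image
    y_fake = diffuse(G(z), t, alphas_bar)     # diffused fake image, still differentiable w.r.t. G
    d_loss = F.softplus(-D(y_real, t)).mean() + F.softplus(D(y_fake, t)).mean()
    g_loss = F.softplus(-D(y_fake, t)).mean()
    return d_loss, g_loss
```

Because the noise is injected through the reparameterization trick, `y_fake` stays differentiable with respect to the generator's parameters, which is exactly how the gradient from D reaches G through the diffusion process.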

Main contributions:

1) We show both theoretically and empirically how the diffusion process can be utilized to provide a model- and domain-agnostic differentiable augmentation, enabling data-efficient and leaking-free stable GAN training. [This stabilizes GAN training.]

2) Extensive experiments show that Diffusion-GAN boosts the stability and generation performance of strong baselines, including StyleGAN2, Projected GAN, and InsGen, achieving state-of-the-art results in synthesizing photo-realistic images, as measured by both the Fréchet Inception Distance (FID) and Recall score. [The diffusion process improves the performance of GAN-only baselines such as StyleGAN2 and Projected GAN.]


Figure 2: The toy example inherited from Arjovsky et al. [2017]. The first row plots the data distribution with diffusion noise injected at increasing diffusion steps t. The second row shows the JS divergence and the optimal discriminator value with and without our noise injection.

Figure 4: Plot of adaptively adjusted maximum diffusion steps T and discriminator outputs of Diffusion-GANs. 

To investigate how the adaptive diffusion process works during training, we illustrate in Figure 4 the convergence of the maximum timestep T in our adaptive diffusion and the discriminator outputs. We see that T is adaptively adjusted: the T for Diffusion StyleGAN2 increases as training goes on, while the T for Diffusion ProjectedGAN first goes up and then goes down. Note that T is adjusted according to the overfitting status of the discriminator. The second panel shows that, trained with the diffusion-based mixture distribution, the discriminator is always well-behaved and provides useful learning signals for the generator, which validates our analysis in Section 3.4 and Theorem 1.

As the left panel of Figure 4 shows, the maximum diffusion timestep T adapts as training progresses (T is adjusted according to how much the discriminator D is overfitting). As the right panel shows, the discriminator trained with the diffusion-based mixture distribution is always well-behaved and keeps providing useful learning signals to the generator G.
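A rough sketch of an adaptive rule in this spirit is shown below; the overfitting statistic, the target value, the step size, and the clamping range are assumptions for illustration rather than the paper's exact settings.

```python
import torch

def update_max_T(T, d_real_probs, d_target=0.6, step=2, T_min=4, T_max=1000):
    # d_real_probs: discriminator outputs (interpreted as probabilities) on diffused real samples.
    # The more confidently D labels real samples as real, the larger r_d is, i.e. the more D overfits.
    # If r_d exceeds the target, raise the maximum diffusion step T to inject more noise; otherwise lower it.
    r_d = torch.sign(d_real_probs - 0.5).mean().item()
    T = T + step if r_d > d_target else T - step
    return max(T_min, min(T, T_max))
```

An update like this would typically be applied every few minibatches, which is consistent with the slowly drifting T curves described for Figure 4.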

Effectiveness of Diffusion-GAN for domain-agnostic augmentation

25-Gaussians Example.

We conduct experiments on the popular 25-Gaussians generation task. The 25-Gaussians dataset is a 2-D toy dataset generated from a mixture of 25 two-dimensional Gaussian distributions. Each data point is a 2-dimensional feature vector. We train a small GAN model whose generator and discriminator are both parameterized by multilayer perceptrons (MLPs), with two 128-unit hidden layers and LeakyReLU nonlinearities.
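A self-contained sketch of such a toy setup is shown below; the 5x5 grid spacing, the per-mode standard deviation, and the 2-D latent size are assumptions, and in the Diffusion-GAN variant the discriminator would additionally be conditioned on the diffusion timestep.

```python
import torch
import torch.nn as nn

def sample_25_gaussians(n, std=0.05):
    # Mixture of 25 Gaussians centered on a 5x5 grid in 2-D (grid spacing and std are assumptions).
    centers = torch.tensor([(i, j) for i in range(-2, 3) for j in range(-2, 3)], dtype=torch.float)
    idx = torch.randint(0, 25, (n,))
    return centers[idx] + std * torch.randn(n, 2)

class MLP(nn.Module):
    # Two 128-unit hidden layers with LeakyReLU nonlinearities, as described in the text.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, out_dim),
        )
    def forward(self, x):
        return self.net(x)

G = MLP(in_dim=2, out_dim=2)   # 2-D noise in, 2-D sample out (latent size is an assumption)
D = MLP(in_dim=2, out_dim=1)   # scores a 2-D point; Diffusion-GAN's D would also take the timestep t
```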

Figure 5: The 25-Gaussians example. We show the true data samples, the generated samples from vanilla GANs, the discriminator outputs of the vanilla GANs, the generated samples from our Diffusion-GAN, and the discriminator outputs of Diffusion-GAN. 

(1) The ground-truth data are spread evenly over the 25 Gaussian modes.
(2) The vanilla GAN suffers from mode collapse: it generates samples on only a few of the modes.
(3) The vanilla GAN's discriminator outputs quickly separate from each other, which means the discriminator is strongly overfitting and stops providing useful learning signals for the generator.
(4) Diffusion-GAN's samples spread evenly over all 25 modes, meaning it has learned the sampling distribution over every mode.
(5) Diffusion-GAN's discriminator outputs show that D keeps providing useful learning signals to G throughout training.

We explain this improvement from two angles: first, the non-leaking augmentation helps provide more information about the data space; second, with the adaptively adjusted diffusion-based noise injection, the discriminator stays well-behaved.

On differentiable augmentation

As Diffusion-GAN transforms both the data and generated samples before sending them to the discriminator, we can also relate it to differentiable augmentation proposed for data-efficient GAN training. Karras et al. [2020a] introduce a stochastic augmentation pipeline with 18 transformations and develop an adaptive mechanism for controlling the augmentation probability. Zhao et al. [2020] propose to use Color + Translation + Cutout as differentiable augmentations for both generated and real images.

While providing good empirical results on some datasets, these augmentation methods are developed with domain-specific knowledge and carry the risk of leaking the augmentation into generation [Karras et al., 2020a]. As observed in our experiments, they sometimes worsen the results when applied to a new dataset, likely because the risk of augmentation leakage overpowers the benefits of enlarging the training set, which can happen especially when the training set is already sufficiently large. (When there is already enough data, the downsides of augmentation can outweigh its benefits.)

By contrast, Diffusion-GAN uses a differentiable forward diffusion process to stochastically transform the data and can be considered both a domain-agnostic and a model-agnostic augmentation method. In other words, Diffusion-GAN can be applied to non-image data or even latent features, for which appropriate data augmentation is difficult to define, and it can easily be plugged into an existing GAN to improve its generation performance. Moreover, we prove in theory and show in experiments that augmentation leakage is not a concern for Diffusion-GAN. Tran et al. [2021] provide a theoretical analysis for deterministic non-leaking transformations with differentiable and invertible mapping functions. Bora et al. [2018] prove theorems similar to ours for specific stochastic transformations, such as Gaussian Projection, Convolve+Noise, and stochastic Block-Pixels, while our Theorem 2 covers more possibilities, as discussed in Appendix B.
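As an illustration of this plug-in view, the hypothetical wrapper below diffuses every input before handing it to an unmodified base discriminator; the class name, the fixed `sigma`, and the omission of timestep conditioning are simplifications of the paper's setup, not the authors' API.

```python
import torch
import torch.nn as nn

class DiffusionAugmentedD(nn.Module):
    # Wraps an existing discriminator so that every input (real or fake, image or latent
    # feature vector) is stochastically diffused before being scored. The noise is added
    # via the reparameterization trick, so the wrapper stays fully differentiable.
    def __init__(self, base_d, alphas_bar, max_t, sigma=0.5):
        super().__init__()
        self.base_d, self.sigma, self.max_t = base_d, sigma, max_t
        self.register_buffer("alphas_bar", alphas_bar)   # cumulative (1 - beta_t) products

    def forward(self, x):
        t = torch.randint(0, self.max_t, (x.size(0),), device=x.device)
        a_bar = self.alphas_bar[t].view(-1, *([1] * (x.dim() - 1)))
        y = a_bar.sqrt() * x + (1.0 - a_bar).sqrt() * self.sigma * torch.randn_like(x)
        return self.base_d(y)   # base_d is unchanged; the paper's discriminator is also fed t
```

Real and generated batches go through the same wrapper, so the stochastic transform is identical for both and, as argued above, nothing about the augmentation can leak into what the generator is asked to produce.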
