  • DCGAN

    2020-01-09 18:21:24
    flyfish

DCGAN is a direct extension of the GAN that explicitly uses convolutional and transposed-convolutional layers in the discriminator and generator, respectively.

--dataset    which training dataset to use
--dataroot   root directory of the dataset
--workers    number of DataLoader worker processes for preprocessing and loading
--batchSize  batch size for training; the DCGAN paper uses 128, here we use 64
--imageSize  spatial size of the training images, default 64x64
--nz         length of the latent vector

--ngf        base width of the generator feature maps (each layer's channel count is a multiple of ngf)
--ndf        base width of the discriminator feature maps (each layer's channel count is a multiple of ndf)
--niter      number of training epochs
--lr         initial learning rate
--beta1      beta1 for the Adam optimizer; the paper uses 0.5

--cuda       train on the GPU
--netG       generator checkpoint file (saved generator weights) to resume from
--netD       discriminator checkpoint file (saved discriminator weights) to resume from
--outf       folder where output images and model checkpoints are saved
--manualSeed seed for the random number generators
nc           number of color channels of the input images: 3 for color, 1 for grayscale

Note: "latent vector" appears under several translations in the Chinese literature (e.g. 本征向量, 潜在向量).

Transform reference
class torchvision.transforms.Normalize(mean, std) → [-1, 1]
Given per-channel means (R, G, B) and standard deviations (R, G, B), normalizes the tensor: normalized_image = (image - mean) / std.
The image data is in [0, 1]; with mean = [.5, .5, .5] and std = [.5, .5, .5], the formula gives
(0 − 0.5)/0.5 = −1 and (1 − 0.5)/0.5 = 1, so the data is mapped to [-1, 1].

class torchvision.transforms.ToTensor → [0, 1]
Converts a PIL.Image with values in [0, 255], or a numpy.ndarray of shape (H, W, C), into a torch.FloatTensor of shape (C, H, W) with values in [0, 1.0].
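The two transforms compose into simple per-pixel arithmetic; a minimal NumPy sketch of that arithmetic (an illustration, not the torchvision API itself):

```python
import numpy as np

# What ToTensor followed by Normalize((0.5,), (0.5,)) computes, per pixel.
pixels = np.array([0, 128, 255], dtype=np.float64)  # raw 8-bit values
after_totensor = pixels / 255.0                     # ToTensor: [0, 255] -> [0, 1]
after_normalize = (after_totensor - 0.5) / 0.5      # Normalize: [0, 1] -> [-1, 1]
print(after_normalize)  # approximately [-1, 0.0039, 1]
```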

Generator
The generator maps a latent vector z, sampled from a standard normal distribution, into the data space: z is transformed into a 64x64 image (3x64x64 for RGB data; 1x64x64 here for MNIST). Each stage is a transposed convolution followed by a 2D batch-norm layer and a ReLU activation; the final layer is a tanh, so the output lies in [-1, 1].
The generator is built from transposed-convolution, batch-normalization, and ReLU layers:

    Generator(
      (main): Sequential(
        (0): ConvTranspose2d(100, 512, kernel_size=(4, 4), stride=(1, 1), bias=False)
        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
        (3): ConvTranspose2d(512, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
        (4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (5): ReLU(inplace=True)
        (6): ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
        (7): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (8): ReLU(inplace=True)
        (9): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
        (10): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (11): ReLU(inplace=True)
        (12): ConvTranspose2d(64, 1, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
        (13): Tanh()
      )
    )
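The spatial sizes implied by the printout above can be checked with the standard ConvTranspose2d output-size formula; a quick sketch:

```python
# ConvTranspose2d output size (no dilation, no output_padding):
#   out = (in - 1) * stride - 2 * padding + kernel_size
def convT_out(size, kernel, stride, padding):
    return (size - 1) * stride - 2 * padding + kernel

# The five layers of the generator above, starting from the 1x1 latent z:
size = 1
for kernel, stride, padding in [(4, 1, 0), (4, 2, 1), (4, 2, 1), (4, 2, 1), (4, 2, 1)]:
    size = convT_out(size, kernel, stride, padding)
    print(size)  # 4, 8, 16, 32, 64
```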
    

Discriminator
The discriminator is a binary classifier that outputs the probability that its input image is real. It takes a 64x64 image and passes it through convolution, batch-norm, and LeakyReLU layers, ending with a sigmoid activation that yields the probability. In Goodfellow's words, we wish to "update the discriminator by ascending its stochastic gradient." (On the generator side, the transposed-convolution layers are what upsample z to the same size as the real images.)

    Discriminator(
      (main): Sequential(
        (0): Conv2d(1, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
        (1): LeakyReLU(negative_slope=0.2, inplace=True)
        (2): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
        (3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (4): LeakyReLU(negative_slope=0.2, inplace=True)
        (5): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
        (6): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (7): LeakyReLU(negative_slope=0.2, inplace=True)
        (8): Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
        (9): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (10): LeakyReLU(negative_slope=0.2, inplace=True)
        (11): Conv2d(512, 1, kernel_size=(4, 4), stride=(1, 1), bias=False)
        (12): Sigmoid()
      )
    )
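Likewise, the discriminator halves the spatial size at each strided convolution until a single value remains; with the Conv2d output-size formula:

```python
# Conv2d output size (no dilation): out = (in + 2*padding - kernel) // stride + 1
def conv_out(size, kernel, stride, padding):
    return (size + 2 * padding - kernel) // stride + 1

# The five layers of the discriminator above, starting from a 64x64 input:
size = 64
for kernel, stride, padding in [(4, 2, 1), (4, 2, 1), (4, 2, 1), (4, 2, 1), (4, 1, 0)]:
    size = conv_out(size, kernel, stride, padding)
    print(size)  # 32, 16, 8, 4, 1
```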
    

Training log (per line: Loss_D, Loss_G, D(x) on real images, and D(G(z)) before / after the discriminator update)

    [0/25][0/938] Loss_D: 1.1207 Loss_G: 6.3690 D(x): 0.6912 D(G(z)): 0.4628 / 0.0022
    [0/25][1/938] Loss_D: 0.4970 Loss_G: 7.1294 D(x): 0.9850 D(G(z)): 0.3394 / 0.0012
    

detach()
Calling detach() on the generator's output keeps the generator frozen while the discriminator is trained: none of the generator's operations are recorded for autograd.
From the PyTorch docs: detach() returns a new Tensor, detached from the current graph. The result will never require gradient.

Given y = A(x) and z = B(y), to compute gradients for B's parameters but not for A's:

    y = A(x)
    z = B(y.detach())
    z.backward()
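A runnable sketch of the same idea (requires PyTorch; here A and B are just scalar multiplications standing in for the two networks):

```python
import torch

x = torch.ones(1)
a = torch.tensor([2.0], requires_grad=True)  # plays the role of A's parameter
b = torch.tensor([3.0], requires_grad=True)  # plays the role of B's parameter

y = a * x            # y = A(x)
z = b * y.detach()   # z = B(y.detach())
z.backward()

print(a.grad)  # None: no gradient flows into A
print(b.grad)  # tensor([2.]): dz/db = y.detach() = 2
```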
    

The generator diagram from the DCGAN paper:
[figure: DCGAN generator architecture]
Image from "Unsupervised Representation Learning With Deep Convolutional Generative Adversarial Networks"

The role of fixed_noise in the code

A batch of latent vectors sampled once from a Gaussian distribution and then held fixed; feeding it through the generator periodically lets us track how the generator's output evolves over training.

Weight initialization in the code: def weights_init(m)

The DCGAN paper specifies that all model weights be randomly initialized from a normal distribution with mean 0 and standard deviation 0.02. The weights_init function takes a freshly constructed model as input and re-initializes all convolutional, transposed-convolutional, and batch-normalization layers to meet this requirement.

    from __future__ import print_function
    import argparse
    import os
    import random
    import torch
    import torch.nn as nn
    import torch.nn.parallel
    import torch.backends.cudnn as cudnn
    import torch.optim as optim
    import torch.utils.data
    import torchvision.datasets as dset
    import torchvision.transforms as transforms
    import torchvision.utils as vutils
    
    
    parser = argparse.ArgumentParser()
    parser.add_argument('--dataset',  default='mnist',help='cifar10 | lsun | mnist |imagenet | folder | lfw | fake')
    parser.add_argument('--dataroot',default='../data',help='path to dataset')
    parser.add_argument('--workers', type=int, help='number of data loading workers', default=2)
    parser.add_argument('--batchSize', type=int, default=64, help='input batch size')
    parser.add_argument('--imageSize', type=int, default=64, help='the height / width of the input image to network')
    parser.add_argument('--nz', type=int, default=100, help='size of the latent z vector')
    parser.add_argument('--ngf', type=int, default=64)
    parser.add_argument('--ndf', type=int, default=64)
    parser.add_argument('--niter', type=int, default=1, help='number of epochs to train for')#default=25
    parser.add_argument('--lr', type=float, default=0.0002, help='learning rate, default=0.0002')
    parser.add_argument('--beta1', type=float, default=0.5, help='beta1 for adam. default=0.5')
    parser.add_argument('--cuda', action='store_true', help='enables cuda')
    parser.add_argument('--ngpu', type=int, default=1, help='number of GPUs to use')
    parser.add_argument('--netG', default='', help="path to netG (to continue training)")
    parser.add_argument('--netD', default='', help="path to netD (to continue training)")
    parser.add_argument('--outf', default='.', help='folder to output images and model checkpoints')
    parser.add_argument('--manualSeed', type=int, help='manual seed')
    parser.add_argument('--classes', default='bedroom', help='comma separated list of classes for the lsun data set')
    
    opt = parser.parse_args()
    print(opt)
    
    try:
        os.makedirs(opt.outf)
    except OSError:
        pass
    
    if opt.manualSeed is None:
        opt.manualSeed = random.randint(1, 10000)
    print("Random Seed: ", opt.manualSeed)
    random.seed(opt.manualSeed)
    torch.manual_seed(opt.manualSeed)
    
    cudnn.benchmark = True
    
    if torch.cuda.is_available() and not opt.cuda:
        print("WARNING: You have a CUDA device, so you should probably run with --cuda")
    
    # dataset selection
    
    if opt.dataset in ['imagenet', 'folder', 'lfw']:
        # folder dataset
        dataset = dset.ImageFolder(root=opt.dataroot,
                                   transform=transforms.Compose([
                                       transforms.Resize(opt.imageSize),
                                       transforms.CenterCrop(opt.imageSize),
                                       transforms.ToTensor(),
                                       transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
                                   ]))
        nc=3
    elif opt.dataset == 'lsun':
        classes = [ c + '_train' for c in opt.classes.split(',')]
        dataset = dset.LSUN(root=opt.dataroot, classes=classes,
                            transform=transforms.Compose([
                                transforms.Resize(opt.imageSize),
                                transforms.CenterCrop(opt.imageSize),
                                transforms.ToTensor(),
                                transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
                            ]))
        nc=3
    elif opt.dataset == 'cifar10':
        dataset = dset.CIFAR10(root=opt.dataroot, download=True,
                               transform=transforms.Compose([
                                   transforms.Resize(opt.imageSize),
                                   transforms.ToTensor(),
                                   transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
                               ]))
        nc=3
    
    elif opt.dataset == 'mnist':#1 channel,transforms.Normalize example
            dataset = dset.MNIST(root=opt.dataroot, download=True,
                               transform=transforms.Compose([
                                   transforms.Resize(opt.imageSize),
                                   transforms.ToTensor(),
                                   transforms.Normalize((0.5,), (0.5,)),
                               ]))
            nc=1
    
    elif opt.dataset == 'fake':
        dataset = dset.FakeData(image_size=(3, opt.imageSize, opt.imageSize),
                                transform=transforms.ToTensor())
        nc=3
    
    assert dataset
    dataloader = torch.utils.data.DataLoader(dataset, batch_size=opt.batchSize,
                                             shuffle=True, num_workers=int(opt.workers))
    # device: GPU if --cuda is given, otherwise CPU
    device = torch.device("cuda:0" if opt.cuda else "cpu")
    ngpu = int(opt.ngpu)
    nz = int(opt.nz)
    ngf = int(opt.ngf)
    ndf = int(opt.ndf)
    
    
    # custom weights initialization called on netG and netD
    # From the DCGAN paper: all model weights are randomly initialized from a
    # normal distribution with mean=0, stdev=0.02.
    def weights_init(m):
        classname = m.__class__.__name__
        if classname.find('Conv') != -1:
            m.weight.data.normal_(0.0, 0.02)
        elif classname.find('BatchNorm') != -1:
            m.weight.data.normal_(1.0, 0.02)
            m.bias.data.fill_(0)
    
    # Generator network G
    class Generator(nn.Module):  # outputs in [-1, 1] (tanh)
        def __init__(self, ngpu):
            super(Generator, self).__init__()
            self.ngpu = ngpu
            self.main = nn.Sequential(
                # input is Z, going into a convolution
                
                nn.ConvTranspose2d(     nz, ngf * 8, 4, 1, 0, bias=False),
                nn.BatchNorm2d(ngf * 8),
                nn.ReLU(True),
                # state size. (ngf*8) x 4 x 4
                nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
                nn.BatchNorm2d(ngf * 4),
                nn.ReLU(True),
                # state size. (ngf*4) x 8 x 8
                nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
                nn.BatchNorm2d(ngf * 2),
                nn.ReLU(True),
                # state size. (ngf*2) x 16 x 16
                nn.ConvTranspose2d(ngf * 2,     ngf, 4, 2, 1, bias=False),
                nn.BatchNorm2d(ngf),
                nn.ReLU(True),
                # state size. (ngf) x 32 x 32
                nn.ConvTranspose2d(    ngf,      nc, 4, 2, 1, bias=False),
                nn.Tanh()
                # state size. (nc) x 64 x 64
            )
    
        def forward(self, input):
            if input.is_cuda and self.ngpu > 1:
                output = nn.parallel.data_parallel(self.main, input, range(self.ngpu))
            else:
                output = self.main(input)
            return output
    
    
    netG = Generator(ngpu).to(device)
    netG.apply(weights_init)
    if opt.netG != '':
        netG.load_state_dict(torch.load(opt.netG))
    print(netG)
    
    # Discriminator network D
    class Discriminator(nn.Module):  # outputs in [0, 1] (sigmoid)
        def __init__(self, ngpu):
            super(Discriminator, self).__init__()
            self.ngpu = ngpu
            self.main = nn.Sequential(
                # input is (nc) x 64 x 64
                nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
                nn.LeakyReLU(0.2, inplace=True),
                # state size. (ndf) x 32 x 32
                nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
                nn.BatchNorm2d(ndf * 2),
                nn.LeakyReLU(0.2, inplace=True),
                # state size. (ndf*2) x 16 x 16
                nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
                nn.BatchNorm2d(ndf * 4),
                nn.LeakyReLU(0.2, inplace=True),
                # state size. (ndf*4) x 8 x 8
                nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
                nn.BatchNorm2d(ndf * 8),
                nn.LeakyReLU(0.2, inplace=True),
                # state size. (ndf*8) x 4 x 4
                nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),
                nn.Sigmoid()
            )
    
        def forward(self, input):
            if input.is_cuda and self.ngpu > 1:
                output = nn.parallel.data_parallel(self.main, input, range(self.ngpu))
            else:
                output = self.main(input)
    
            return output.view(-1, 1).squeeze(1)
    
    
    netD = Discriminator(ngpu).to(device)
    netD.apply(weights_init)
    if opt.netD != '':
        netD.load_state_dict(torch.load(opt.netD))
    print(netD)
    
    criterion = nn.BCELoss()
    
    fixed_noise = torch.randn(opt.batchSize, nz, 1, 1, device=device)
    print(fixed_noise.shape)#torch.Size([64, 100, 1, 1])
    real_label = 1.  # float labels: BCELoss compares against the float sigmoid output
    fake_label = 0.
    
    # setup optimizer
    optimizerD = optim.Adam(netD.parameters(), lr=opt.lr, betas=(opt.beta1, 0.999))
    optimizerG = optim.Adam(netG.parameters(), lr=opt.lr, betas=(opt.beta1, 0.999))
    
    for epoch in range(opt.niter):
        for i, data in enumerate(dataloader, 0):
            ############################
            # (1) Update D network: maximize log(D(x)) + log(1 - D(G(z)))
            # Freeze the generator G and train the discriminator D
            ###########################
            # train with real
            # goal: D should classify real images as 1
            netD.zero_grad()
            real_cpu = data[0].to(device)
            batch_size = real_cpu.size(0)
            label = torch.full((batch_size,), real_label, device=device)
    
            output = netD(real_cpu)
            errD_real = criterion(output, label)
            errD_real.backward()
            D_x = output.mean().item()
    
            # train with fake
            # goal: D should classify fake images as 0
            noise = torch.randn(batch_size, nz, 1, 1, device=device)
            fake = netG(noise)
            label.fill_(fake_label)
            output = netD(fake.detach())  # detach: G is not being updated here
            errD_fake = criterion(output, label)
            errD_fake.backward()
            D_G_z1 = output.mean().item()
            errD = errD_real + errD_fake
            optimizerD.step()
    
            ############################
            # (2) Update G network: maximize log(D(G(z)))
            # Freeze the discriminator D and train the generator G
            # goal: D should classify G's fakes as 1
            ###########################
            netG.zero_grad()
            label.fill_(real_label)  # fake labels are real for generator cost
            output = netD(fake)
            errG = criterion(output, label)
            errG.backward()
            D_G_z2 = output.mean().item()
            optimizerG.step()
    
            print('[%d/%d][%d/%d] Loss_D: %.4f Loss_G: %.4f D(x): %.4f D(G(z)): %.4f / %.4f'
                  % (epoch, opt.niter, i, len(dataloader),
                     errD.item(), errG.item(), D_x, D_G_z1, D_G_z2))
            if i % 100 == 0:
                vutils.save_image(real_cpu,
                        '%s/real_samples.png' % opt.outf,
                        normalize=True)
                fake = netG(fixed_noise)
                vutils.save_image(fake.detach(),
                        '%s/fake_samples_epoch_%03d.png' % (opt.outf, epoch),
                        normalize=True)
    
        # do checkpointing
        torch.save(netG.state_dict(), '%s/netG_epoch_%d.pth' % (opt.outf, epoch))
        torch.save(netD.state_dict(), '%s/netD_epoch_%d.pth' % (opt.outf, epoch))
    
  • dcgan

    2019-04-25 16:04:56
Source: https://github.com/aymericdamien/TensorFlow-Examples#tutorials

    """ Deep Convolutional Generative Adversarial Network (DCGAN).
    
    Using deep convolutional generative adversarial networks (DCGAN) to generate
    digit images from a noise distribution.
    
    References:
        - Unsupervised representation learning with deep convolutional generative
        adversarial networks. A Radford, L Metz, S Chintala. arXiv:1511.06434.
    
    Links:
        - [DCGAN Paper](https://arxiv.org/abs/1511.06434).
        - [MNIST Dataset](http://yann.lecun.com/exdb/mnist/).
    
    Author: Aymeric Damien
    Project: https://github.com/aymericdamien/TensorFlow-Examples/
    """
    
    from __future__ import division, print_function, absolute_import
    
    import matplotlib.pyplot as plt
    import numpy as np
    import tensorflow as tf
    
    # Import MNIST data
    from tensorflow.examples.tutorials.mnist import input_data
    mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
    
    # Training Params
    num_steps = 20000
    batch_size = 32
    
    # Network Params
    image_dim = 784 # 28*28 pixels * 1 channel
    gen_hidden_dim = 256
    disc_hidden_dim = 256
    noise_dim = 200 # Noise data points
    
    
    # Generator Network
    # Input: Noise, Output: Image
    def generator(x, reuse=False):
        with tf.variable_scope('Generator', reuse=reuse):
            # TensorFlow Layers automatically create variables and calculate their
            # shape, based on the input.
            x = tf.layers.dense(x, units=6 * 6 * 128)
            x = tf.nn.tanh(x)
            # Reshape to a 4-D array of images: (batch, height, width, channels)
            # New shape: (batch, 6, 6, 128)
            x = tf.reshape(x, shape=[-1, 6, 6, 128])
            # Deconvolution, image shape: (batch, 14, 14, 64)
            x = tf.layers.conv2d_transpose(x, 64, 4, strides=2)
            # Deconvolution, image shape: (batch, 28, 28, 1)
            x = tf.layers.conv2d_transpose(x, 1, 2, strides=2)
            # Apply sigmoid to clip values between 0 and 1
            x = tf.nn.sigmoid(x)
            return x
    
    
    # Discriminator Network
    # Input: Image, Output: Prediction Real/Fake Image
    def discriminator(x, reuse=False):
        with tf.variable_scope('Discriminator', reuse=reuse):
            # Typical convolutional neural network to classify images.
            x = tf.layers.conv2d(x, 64, 5)
            x = tf.nn.tanh(x)
            x = tf.layers.average_pooling2d(x, 2, 2)
            x = tf.layers.conv2d(x, 128, 5)
            x = tf.nn.tanh(x)
            x = tf.layers.average_pooling2d(x, 2, 2)
            x = tf.contrib.layers.flatten(x)
            x = tf.layers.dense(x, 1024)
            x = tf.nn.tanh(x)
            # Output 2 classes: Real and Fake images
            x = tf.layers.dense(x, 2)
        return x
    
    # Build Networks
    # Network Inputs
    noise_input = tf.placeholder(tf.float32, shape=[None, noise_dim])
    real_image_input = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])
    
    # Build Generator Network
    gen_sample = generator(noise_input)
    
    # Build 2 Discriminator Networks (one from noise input, one from generated samples)
    disc_real = discriminator(real_image_input)
    disc_fake = discriminator(gen_sample, reuse=True)
    disc_concat = tf.concat([disc_real, disc_fake], axis=0)
    
    # Build the stacked generator/discriminator
    stacked_gan = discriminator(gen_sample, reuse=True)
    
    # Build Targets (real or fake images)
    disc_target = tf.placeholder(tf.int32, shape=[None])
    gen_target = tf.placeholder(tf.int32, shape=[None])
    
    # Build Loss
    disc_loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=disc_concat, labels=disc_target))
    gen_loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=stacked_gan, labels=gen_target))
    
    # Build Optimizers
    optimizer_gen = tf.train.AdamOptimizer(learning_rate=0.001)
    optimizer_disc = tf.train.AdamOptimizer(learning_rate=0.001)
    
    # Training Variables for each optimizer
    # By default in TensorFlow, all variables are updated by each optimizer, so we
    # need to specify, for each of them, the exact variables to update.
    # Generator Network Variables
    gen_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='Generator')
    # Discriminator Network Variables
    disc_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='Discriminator')
    
    # Create training operations
    train_gen = optimizer_gen.minimize(gen_loss, var_list=gen_vars)
    train_disc = optimizer_disc.minimize(disc_loss, var_list=disc_vars)
    
    # Initialize the variables (i.e. assign their default value)
    init = tf.global_variables_initializer()
    
    # Start training
    with tf.Session() as sess:
    
        # Run the initializer
        sess.run(init)
    
        for i in range(1, num_steps+1):
    
            # Prepare Input Data
            # Get the next batch of MNIST data (only images are needed, not labels)
            batch_x, _ = mnist.train.next_batch(batch_size)
            batch_x = np.reshape(batch_x, newshape=[-1, 28, 28, 1])
            # Generate noise to feed to the generator
            z = np.random.uniform(-1., 1., size=[batch_size, noise_dim])
    
            # Prepare Targets (Real image: 1, Fake image: 0)
            # The first half of the data fed to the discriminator is real images,
            # the other half is fake images (coming from the generator).
            batch_disc_y = np.concatenate(
                [np.ones([batch_size]), np.zeros([batch_size])], axis=0)
            # Generator tries to fool the discriminator, thus targets are 1.
            batch_gen_y = np.ones([batch_size])
    
            # Training
            feed_dict = {real_image_input: batch_x, noise_input: z,
                         disc_target: batch_disc_y, gen_target: batch_gen_y}
            _, _, gl, dl = sess.run([train_gen, train_disc, gen_loss, disc_loss],
                                    feed_dict=feed_dict)
            if i % 100 == 0 or i == 1:
                print('Step %i: Generator Loss: %f, Discriminator Loss: %f' % (i, gl, dl))
    
        # Generate images from noise, using the generator network.
        f, a = plt.subplots(4, 10, figsize=(10, 4))
        for i in range(10):
            # Noise input.
            z = np.random.uniform(-1., 1., size=[4, noise_dim])
            g = sess.run(gen_sample, feed_dict={noise_input: z})
            for j in range(4):
                # Generate image from noise. Extend to 3 channels for matplot figure.
                img = np.reshape(np.repeat(g[j][:, :, np.newaxis], 3, axis=2),
                                 newshape=(28, 28, 3))
                a[j][i].imshow(img)
    
        f.show()
        plt.draw()
        plt.waitforbuttonpress()
    
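The loss used above, tf.nn.sparse_softmax_cross_entropy_with_logits over the two real/fake classes, reduces to simple arithmetic; a NumPy sketch of what it computes (an illustration, not the TensorFlow implementation):

```python
import numpy as np

def sparse_softmax_xent(logits, labels):
    # Numerically stable log-softmax, then pick each row's labeled class.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels]

logits = np.array([[2.0, 0.0],   # discriminator strongly says "fake" (class 0)
                   [0.0, 2.0]])  # discriminator strongly says "real" (class 1)
labels = np.array([1, 1])        # both targets are "real"
print(sparse_softmax_xent(logits, labels))  # first row: high loss; second: low loss
```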
    
