Abstract:
Generative Adversarial Nets (GANs) have achieved remarkable success in image translation and multi-domain image conversion. However, most existing GANs for multi-domain image conversion employ multiple generators G and discriminators D, which leads to an excessive number of training parameters and insufficient use of the data set. To address these problems, we propose a GAN model based on feature-vector transformation, built on StarGAN and the multi-modal unsupervised image-to-image translation method. First, the model encodes the source image into a content vector and a feature vector. Then, it transforms the extracted feature vector from the source domain to the target domain while keeping the content vector unchanged. Finally, the image is reconstructed. The model solves the above problems effectively using only one generator G and one discriminator D. Compared with existing models, it is suitable not only for multi-domain image conversion but also for generating a specified image from noise. Experiments on the CelebA dataset show that the proposed model outperforms existing models in multi-domain face attribute conversion.
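The encode–transform–reconstruct pipeline described above can be illustrated with a minimal sketch. This is a toy NumPy analogue, not the paper's network: all dimensions, weight matrices, and function names (`encode`, `translate`, `decode`, the domain key `"blond"`) are illustrative assumptions, with linear maps standing in for the learned encoder, domain transform, and generator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): content vector, feature
# vector, and flattened image sizes.
CONTENT_DIM, FEATURE_DIM, IMG_DIM = 64, 8, 128

# Toy linear "encoder": maps an image vector to a content part and a
# feature part (the paper uses a learned network for this step).
W_enc = rng.standard_normal((CONTENT_DIM + FEATURE_DIM, IMG_DIM)) * 0.1

def encode(x):
    """Encode image x into (content, feature) vectors."""
    h = W_enc @ x
    return h[:CONTENT_DIM], h[CONTENT_DIM:]

# Toy per-domain transform: moves a feature vector from the source domain
# to a target domain while the content vector is left untouched.
W_dom = {"blond": rng.standard_normal((FEATURE_DIM, FEATURE_DIM)) * 0.1}

def translate(feature, target_domain):
    """Map the feature vector into the target domain."""
    return W_dom[target_domain] @ feature

# Toy "decoder"/generator: reconstructs an image from the unchanged
# content vector and the translated feature vector.
W_dec = rng.standard_normal((IMG_DIM, CONTENT_DIM + FEATURE_DIM)) * 0.1

def decode(content, feature):
    return W_dec @ np.concatenate([content, feature])

x = rng.standard_normal(IMG_DIM)          # stand-in for a source image
content, feature = encode(x)              # step 1: encode
feature_t = translate(feature, "blond")   # step 2: feature to target domain
x_out = decode(content, feature_t)        # step 3: reconstruct the image
print(x_out.shape)  # (128,)
```

Because the content vector is shared across domains and only the small feature transform is domain-specific, a single generator/discriminator pair suffices for all domains.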