5 CONCLUSIONS
In this paper, we proposed DLSGAN, a method for jointly
training a generator that does not lose the information
of the latent random variable and an encoder that
inverts that generator. Dynamic latent scale GAN
dynamically adjusts the scale of the i.i.d. latent
random variable so that the latent has the optimal
entropy for expressing the data random variable. This
ensures that the generator preserves the information of
the latent random variable, so that the encoder can
converge to an inverse of the generator through
maximum likelihood estimation. The DLSGAN encoder
showed much better performance than an encoder
trained without dynamic latent scaling.
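To make the summarized mechanism concrete, the following is a minimal PyTorch sketch of the dynamic latent scale idea. The `generator` and `encoder` callables are hypothetical placeholders, and the scale-update rule shown (rescaling by the per-dimension statistics of the encoder's latent estimates) is only one plausible illustration of adjusting the latent scale toward optimal entropy; the paper's exact update rule is the one derived in the method section.

```python
# A minimal sketch of the idea summarized above, NOT the paper's exact
# algorithm: `generator` and `encoder` are hypothetical callables, and the
# scale update below (rescaling by the encoder's per-dimension output
# statistics) is an illustrative reading of "adjusting the latent scale
# to have the optimal entropy".
import torch

latent_dim = 64
scale = torch.ones(latent_dim)  # per-dimension scale of the i.i.d. latent

def sample_scaled_latent(batch_size: int) -> torch.Tensor:
    """Draw i.i.d. standard-normal noise and apply the current scale."""
    return torch.randn(batch_size, latent_dim) * scale

def encoder_loss(encoder, generator, batch_size: int = 64) -> torch.Tensor:
    """Encoder regression loss. Under a Gaussian error model, maximum
    likelihood estimation of the latent reduces to mean-squared error."""
    z = sample_scaled_latent(batch_size)
    z_hat = encoder(generator(z).detach())  # recover latent from fake sample
    return torch.mean((z_hat - z) ** 2)

def update_scale(encoder, generator, batch_size: int = 256, eps: float = 1e-8):
    """Illustrative scale update (assumption): allot more scale, and hence
    more entropy, to latent dimensions the encoder recovers with more
    variance, so the latent entropy matches what the generator expresses."""
    global scale
    with torch.no_grad():
        z = sample_scaled_latent(batch_size)
        z_hat = encoder(generator(z))
        per_dim_std = z_hat.std(dim=0)
        scale = per_dim_std / (per_dim_std.mean() + eps)
```

In this sketch, minimizing the mean-squared error corresponds to the maximum likelihood estimation mentioned above, and periodically calling `update_scale` during training plays the role of the dynamic latent scale adjustment.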