Second, we chose to focus solely on an AAEGAN
architecture to generate our images; we aim to
compare these results with other types of GANs,
such as CycleGAN or Pix2Pix with a U-Net
generator (Yi et al., 2019). We must also mention
the clustering of original images in the right corner
of the t-SNE embedding. It appears that the generated
images fail to reproduce certain backgrounds, such
as the lighting gradient of some bright-field
acquisitions in white light. To resolve this issue, we
aim to study the effect of injecting a similar bright-field
background during the generative process.
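The t-SNE comparison described above can be sketched with scikit-learn. This is a minimal illustration, not the paper's exact pipeline: the random feature vectors stand in for (flattened or deep) image features of the original and generated sets.

```python
import numpy as np
from sklearn.manifold import TSNE

# Illustrative stand-ins for image feature vectors (one row per image).
rng = np.random.default_rng(0)
original = rng.normal(0.0, 1.0, size=(50, 256))   # original bright-field images
generated = rng.normal(0.5, 1.0, size=(50, 256))  # GAN-generated images

features = np.vstack([original, generated])
labels = np.array([0] * 50 + [1] * 50)  # 0 = original, 1 = generated

# Project to 2D; original images clustering apart from generated ones
# (e.g. in one corner of the map) reveals content the generator fails
# to reproduce, such as a bright-field lighting gradient.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
print(embedding.shape)  # (100, 2)
```

Plotting `embedding` colored by `labels` reproduces the kind of map discussed above.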
Finally, given the input dataset containing physiological
and pathological models of CO, it would be interesting
to investigate the generation of specific pathological
content in future studies.
5 CONCLUSION
This study answers the first emerging issue in the
cerebral organoid field highlighted in (Brémond Martin
et al., 2021), i.e., the lack of datasets. These first
results show that augmenting small databases of
cerebral organoid bright-field images is possible
using GANs. In particular, the AAEGAN with Perceptual
Wasserstein loss optimisation generates the
highest-quality content, remains similar to the original
dataset, and the images it generates are useful for
improving a segmentation task. However, it remains to
be discovered what kind of information other loss
optimisations, with diversity coherent with the initial
dataset, could bring to the generative process. This data
generation strategy will be valuable for developing
characterization methods for CO by enabling large
statistical studies, but also for developing deep-learning-based
approaches for classification and characterization of
the various morphologies. Such characterization could
help to better understand the growth process in adequate
cultures and support the use of cerebral organoids as
models of neuropathological diseases or for testing
therapeutics.
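The Perceptual Wasserstein loss named above combines a feature-space distance with a Wasserstein critic term. The sketch below, assuming PyTorch, is only illustrative: the tiny networks, the L1 perceptual distance, and the weighting `lam` are assumptions standing in for the paper's actual configuration (e.g. a pretrained perceptual network).

```python
import torch
import torch.nn as nn

# Illustrative stand-in for a pretrained perceptual feature extractor;
# weights here are random.
perceptual_net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
# Illustrative Wasserstein critic over 64x64 single-channel images.
critic = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 1))

def perceptual_wasserstein_loss(real, fake, lam=10.0):
    # Perceptual term: distance between feature maps of real and generated images.
    perc = nn.functional.l1_loss(perceptual_net(fake), perceptual_net(real))
    # Wasserstein generator term: maximize the critic's score on generated images.
    wass = -critic(fake).mean()
    return wass + lam * perc

real = torch.randn(4, 1, 64, 64)  # stand-in batch of original images
fake = torch.randn(4, 1, 64, 64)  # stand-in batch of generated images
loss = perceptual_wasserstein_loss(real, fake)
print(loss.shape)  # torch.Size([]) — a scalar
```

In training, this scalar would be backpropagated through the generator while the critic is trained adversarially in alternation.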
REFERENCES
Albanese, A., Swaney, J. M., Yun, D. H., Evans, N. B.,
Antonucci, J. M., Velasco, S., Sohn, C. H., Arlotta,
P., Gehrke, L., and Chung, K. (2020). Multiscale 3D
phenotyping of human cerebral organoids. Scientific
Reports, 10(1):21487.
Brémond Martin, C., Simon Chane, C., Clouchoux, C., and
Histace, A. (2021). Recent Trends and Perspectives in
Cerebral Organoids Imaging and Analysis. Frontiers
in Neuroscience, 15:629067.
El Jurdi, R., Petitjean, C., Honeine, P., Cheplygina, V.,
and Abdallah, F. (2021). High-level prior-based loss
functions for medical image segmentation: A sur-
vey. Computer Vision and Image Understanding,
210:103248.
Gomez-Giro, G., Arias-Fuenzalida, J., Jarazo, J.,
Zeuschner, D., Ali, M., Possemis, N., Bolognin,
S., Halder, R., Jäger, C., Kuper, W. F. E., van Hasselt,
P. M., Zaehres, H., del Sol, A., van der Putten,
H., Schöler, H. R., and Schwamborn, J. C. (2019).
Synapse alterations precede neuronal damage and
storage pathology in a human cerebral organoid
model of CLN3-juvenile neuronal ceroid lipofus-
cinosis. Acta Neuropathologica Communications,
7(1).
Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B.,
Warde-Farley, D., Ozair, S., Courville, A., and Ben-
gio, Y. (2014). Generative Adversarial Networks. In
Proceedings of NIPS, pages 2672–2680.
Hinton, G. and Roweis, S. (2003). Stochastic Neighbor
Embedding. In Advances in Neural Information Processing
Systems, pages 857–864.
Kassis, T., Hernandez-Gordillo, V., Langer, R., and Griffith,
L. G. (2019). OrgaQuant: Human Intestinal Organoid
Localization and Quantification Using Deep Convolu-
tional Neural Networks. Scientific Reports, 9(1).
Kupyn, O., Budzan, V., Mykhailych, M., Mishkin, D., and
Matas, J. (2018). DeblurGAN: Blind Motion Deblur-
ring Using Conditional Adversarial Networks. Pro-
ceedings of the IEEE Conference on Computer Vision
and Pattern Recognition, pages 8183–8192.
Lan, L., You, L., Zhang, Z., Fan, Z., Zhao, W., Zeng, N.,
Chen, Y., and Zhou, X. (2020). Generative Adversar-
ial Networks and Its Applications in Biomedical In-
formatics. Frontiers in Public Health, 8.
Lv, J., Zhu, J., and Yang, G. (2021). Which GAN? A
comparative study of generative adversarial network-
based fast MRI reconstruction. Philosophical Trans-
actions of the Royal Society A: Mathematical, Physi-
cal and Engineering Sciences, 379(2200):20200203.
Makhzani, A., Shlens, J., Jaitly, N., Goodfellow, I., and
Frey, B. (2016). Adversarial Autoencoders. Interna-
tional Conference on Learning Representations.
Mao, X., Li, Q., Xie, H., Lau, R. Y. K., Wang, Z., and Smol-
ley, S. P. (2017). Least Squares Generative Adver-
sarial Networks. IEEE International Conference on
Computer Vision (ICCV), pages 2813–2821.
Ronneberger, O., Fischer, P., and Brox, T. (2015). U-
Net: Convolutional Networks for Biomedical Im-
age Segmentation. arXiv:1505.04597 [cs].
van der Maaten, L. and Hinton, G. (2008). Visualizing Data
using t-SNE. Journal of Machine Learning Research,
9:2579–2605.
Wargnier-Dauchelle, V., Simon-Chane, C., and Histace, A.
(2019). Retinal Blood Vessels Segmentation: Im-
proving State-of-the-Art Deep Methods. In Computer
Analysis of Images and Patterns, volume 1089, pages
5–16. Springer, Cham.
Yi, X., Walia, E., and Babyn, P. (2019). Generative Adver-
sarial Network in Medical Imaging: A Review. Medi-
cal Image Analysis, 58:101552.
VISAPP 2022 - 17th International Conference on Computer Vision Theory and Applications