Generative adversarial networks (GANs), modeled as a zero-sum game between a generator (G) and a discriminator (D), allow generating synthetic data with formal guarantees. Noting that D is a classifier, we begin by reformulating the GAN value function using class probability estimation (CPE) losses. We prove a two-way correspondence between CPE-loss GANs and f-GANs, which minimize f-divergences. We also show that all symmetric f-divergences are equivalent in convergence. In the finite-sample and finite-model-capacity setting, we define and obtain bounds on estimation and generalization errors. We specialize these results to $\alpha$-GANs, defined using $\alpha$-loss, a tunable CPE loss family parametrized by $\alpha \in (0,\infty]$. We next introduce a class of dual-objective GANs to address training instabilities of GANs by modeling each player’s objective using $\alpha$-loss to obtain $(\alpha_D,\alpha_G)$-GANs. We show that the resulting non-zero-sum game simplifies to minimizing an f-divergence under appropriate conditions on $(\alpha_D,\alpha_G)$. Generalizing this dual-objective formulation using CPE losses, we define and obtain upper bounds on an appropriately defined estimation error. Finally, we highlight the value of tuning $(\alpha_D,\alpha_G)$ in alleviating training instabilities for the synthetic 2D Gaussian mixture ring as well as the large publicly available Celeb-A and LSUN Classroom image datasets.
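For context, the following is a minimal sketch of the objects named above, assuming the standard binary $\alpha$-loss normalization; the notation $\ell_\alpha$, $V_\alpha$, $D_\omega$, $G_\theta$, and $P_r$ is introduced here for illustration and should be checked against the paper. Writing $\hat{y} \in [0,1]$ for the probability the classifier assigns to the true label, the $\alpha$-loss for $\alpha \in (0,\infty]$, $\alpha \neq 1$, is

$$\ell_\alpha(\hat{y}) \;=\; \frac{\alpha}{\alpha-1}\left(1 - \hat{y}^{\,\frac{\alpha-1}{\alpha}}\right),$$

which recovers the log-loss $-\log \hat{y}$ in the limit $\alpha \to 1$ and the linear loss $1 - \hat{y}$ at $\alpha = \infty$. In this notation, a dual-objective $(\alpha_D,\alpha_G)$-GAN has the discriminator maximize $V_{\alpha_D}(\theta,\omega)$ while the generator minimizes $V_{\alpha_G}(\theta,\omega)$, where

$$V_\alpha(\theta,\omega) \;=\; \mathbb{E}_{X \sim P_r}\!\left[-\ell_\alpha\!\left(D_\omega(X)\right)\right] \;+\; \mathbb{E}_{X \sim P_{G_\theta}}\!\left[-\ell_\alpha\!\left(1 - D_\omega(X)\right)\right].$$

Setting $\alpha_D = \alpha_G = 1$ recovers the vanilla (log-loss) GAN as a special case; decoupling the two parameters is what yields the non-zero-sum game analyzed above.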

Monica Welfert
Gowtham R. Kurri
Kyle Otstot
Lalitha Sankar